Late last year, hackers breached identity and access management company Okta and stole sensitive information about nearly 200 of its customers. It was a devastating breach for the company, which is in the business of keeping data private and secure. And it all started in the customer service department.
Okta was just one company attacked through its customer support system in 2023, with others like Discord and MailChimp suffering the same fate. Customer service is an integral part of every company, but it’s also a keeper of vast amounts of sensitive information. Now, as AI enters the arena, customer service is about to become even more data-rich. This presents a lot of benefits for businesses, but also new concerns for customers.
Nearly 82% of consumers surveyed said they are somewhat or very concerned about how the use of AI customer service could potentially compromise their online privacy.
- CDP
So how safe is customer data in a world run by AI agents? And what do companies need to do to protect it?
There was a time when consumers didn’t know much about data and how it’s used, but that time has passed. Publishers Clearing House recently surveyed more than 45,000 American consumers over age 25, in what may be the largest survey ever undertaken on data ethics and privacy, and found that 86% of Americans are more concerned about their own data privacy and security than about the state of the U.S. economy.
When asked how and when they'd be willing to share their data, 38% of consumers said they wouldn't ever want to share their personal data.
Only 51% said they feel informed about how their personal data is being used.
- Data Ethics Design, PCH
What’s more, consumers are increasingly taking action to protect their data. When Apple gave users the option to opt out of app tracking, for example, an overwhelming 96% chose to opt out. 2022 also saw a 72% increase in the total volume of Data Subject Requests (DSRs) — formal requests made by individuals to companies to modify, access, or delete their data — compared to 2021, with deletion requests as a primary driver for the increase, according to Datagrail. Many consumers even pay for personal data management services like DeleteMe and Incogni to monitor and wipe their personal data from the internet.
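For companies on the receiving end of that surge, it helps to see how small the core mechanics of a DSR actually are. Below is a minimal, purely illustrative sketch in Python; the names (DataSubjectRequest, handle_dsr) and the in-memory store are hypothetical, and a real implementation would verify the requester's identity and fan the request out to every database, log, and vendor holding the data.

```python
from dataclasses import dataclass
from enum import Enum

class DSRType(Enum):
    ACCESS = "access"
    MODIFY = "modify"
    DELETE = "delete"

@dataclass
class DataSubjectRequest:
    customer_id: str
    request_type: DSRType
    updates: dict | None = None  # only meaningful for MODIFY requests

def handle_dsr(request: DataSubjectRequest, store: dict) -> dict | None:
    """Apply a Data Subject Request to a toy in-memory store (hypothetical)."""
    if request.request_type is DSRType.ACCESS:
        # Access: hand back a copy of everything held about the requester.
        return dict(store.get(request.customer_id, {}))
    if request.request_type is DSRType.MODIFY:
        # Modify: correct only the fields the requester asked to change.
        store.setdefault(request.customer_id, {}).update(request.updates or {})
        return store[request.customer_id]
    # Delete: the request type that drove the 2022 increase.
    store.pop(request.customer_id, None)
    return None

# Usage: a deletion request removes the record entirely.
records = {"cust-42": {"email": "jo@example.com", "plan": "pro"}}
handle_dsr(DataSubjectRequest("cust-42", DSRType.DELETE), records)
assert "cust-42" not in records
```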
“Consumers are becoming more savvy and aware of how companies use data, and therefore have more questions and are exercising their rights, such as opt-out, delete, or access,” said Jodi Daniels, founder and CEO of data privacy advising firm Red Clover Advisors, in response to questions for this article.
Daniels has seen first-hand the increase in consumer awareness about how companies use (or misuse) consumer data, which she attributes to media attention, legislation pushes, and the barrage of cyberattacks that are exposing sensitive data on a regular basis. Indeed, high-profile instances of the misuse of consumer data at firms like Meta have drawn a lot of attention to the subject.
In the U.S. in particular, the slow drip of state laws aimed at protecting consumer data privacy, and the patchwork approach they add up to, has only highlighted the lack of comprehensive national data protections. Yet while consumer data protection has moved at a snail’s pace, AI is charging ahead at lightning speed.
All parts of a business need to be concerned with data privacy and security, but due to the direct nature of its interactions with customers, customer service has a special responsibility.
“Customer service departments make attractive targets,” according to Security Intelligence. “Depending on the business, a customer service agent may have access to a trove of customer information and company systems. They may even have access to change customer account information or take payments over the phone.”
That was true in the hands of human agents, and it goes double for AI agents. Machine learning and AI systems — in general and in customer service — involve massive amounts of data. Turbocharging support with AI can make service more personalized and immersive, and can even help customers resolve their problems faster and more easily, but it also presents new concerns.
AI already has a bleak reputation when it comes to ethical data practices, with many of the top AI technologies on the market having been built on data that was scraped freely from the internet without concern for who owned it — or whether they consented to its use. One of the main risks and consumer concerns regarding data privacy is that data will be repurposed, or used beyond the purpose for which it was originally collected and intended.
The widespread data scraping to build AI models, considered by some to be “the original sin” of AI, sent the signal that all data is up for grabs when it comes to AI.
"AI advancements are exacerbating the data situation, causing more mistrust among consumers, especially as there continues to be a gap in data knowledge and understanding."
- Data Ethics Design, PCH
Even before AI, data repurposing was a central concern of data privacy. Now companies’ desire to utilize large language models (LLMs) has created a massive need for data and a new reason to repurpose it. If this sounds ethically wrong, many would agree. But consumers know that making it permissible is as easy as burying the company’s right to do so in a jargon-filled terms and conditions document.
“When personal data is entered into an AI model, the concern is that the data can be used to train other data sets for other companies and its use isn’t limited,” said Daniels.
Last summer, for example, Zoom users noticed the company had updated its Terms of Service to seemingly permit both Zoom and third-party companies to use their calls to train AI models. Users loudly voiced their concerns and pledged to cancel their accounts, sending a clear signal about consumer feelings around the repurposing of data for AI training. Zoom later said it would not use customer audio, video, or chat content for AI training, but not before a whirlwind of user fury and a lasting sense of betrayal.
Beyond repurposing, privacy risks associated with AI also include data spilling, in which sensitive data is negligently or maliciously moved into an environment where it’s not authorized to be and may be viewed by unauthorized eyes. There’s also concern around how long companies hold onto such data, since the longer it is retained, the more susceptible it is to being stolen in a breach.
A final major concern is reidentification, or deanonymization, in which individuals can be specifically identified from supposedly anonymized data points. In November, Google researchers published a paper detailing how they were able to prompt ChatGPT to reveal the personally identifiable information (PII) of real people that was included in its training data, demonstrating exactly the kind of deanonymization risk that concerns privacy advocates and consumers alike.
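To make the reidentification risk concrete, here is a toy check for k-anonymity, one common way to measure it: if a combination of quasi-identifiers such as ZIP code and birth year is shared by fewer than k records, those records can often be tied back to named individuals even with names stripped out. The function and data below are invented for illustration, not taken from the Google paper.

```python
from collections import Counter

def k_anonymity_violations(records, quasi_identifiers, k=5):
    """Return quasi-identifier combinations shared by fewer than k records."""
    combos = Counter(
        tuple(record[field] for field in quasi_identifiers)
        for record in records
    )
    return {combo: count for combo, count in combos.items() if count < k}

# "Anonymized" support tickets: no names, yet the first record's combination
# of ZIP code and birth year is unique, so it is re-identifiable.
tickets = [
    {"zip": "02139", "birth_year": 1984, "issue": "billing"},
    {"zip": "10001", "birth_year": 1990, "issue": "login"},
    {"zip": "10001", "birth_year": 1990, "issue": "refund"},
]
print(k_anonymity_violations(tickets, ["zip", "birth_year"], k=2))
# {('02139', 1984): 1}  <- below the threshold, so not safely anonymous
```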
All of this considered, companies that adopt AI for customer service need to make data privacy central to every aspect of their operations. Failing to do so could cost them customers’ trust — and business.
58% of consumers said they're willing to stop interacting with a company that has a bad reputation around data.
- Data Ethics Design, PCH
“Earning new customers and maintaining existing customers is so expensive and challenging for companies,” said Daniels. “Losing them over haste and lack of privacy could cause significant financial losses to a company.”
One of the reasons the Zoom incident touched such a nerve with customers was that it severely undermined their trust in the company. Not only were Zoom’s purported rights to their data wide-reaching and an invasion of privacy, but the quiet burying of the terms in dense legal documents made it feel even more exploitative.
The PCH report recommends companies write their terms and conditions statements to be easily understood and read by the average person in less than three minutes, even framing this type of transparency as a competitive differentiator. “Trust sells,” it says.
In order to sell trust, however, companies need to fully deliver it. That means intentionally building privacy and security into every level of AI systems and processes, regularly reevaluating those practices with Data Protection Impact Assessments (DPIAs), and presenting it all to customers with full transparency.
For example, the AI tool ProWritingAid lays out its data privacy practices in illustrated flowcharts and simple language on a dedicated Trust Center page, stating “We only use your text to help you improve your writing. Nothing else. Period” and noting that its system doesn’t even retain the text users enter. The company also makes clear that its practices and claims are routinely evaluated by third parties.
Ada’s own Security page similarly lays out its data practices with granular specifics, including about the location and security of its servers, its data flow, application security, corporate security, and privacy compliance. Many companies are additionally hiring firms like Ethyca, which provide privacy engineering services, to help roll out their products and practices with privacy built in by design.
Overall, as a good rule of thumb for implementing data privacy, companies should limit their data collection to the minimum needed, collect it directly from customers, and do so with customers’ informed consent, making clear exactly how the data will be used and why.
It’s imperative for companies to do deep due diligence on vendors’ data practices to ensure they’re following these principles, asking questions like: Does the tool provide a secure way to contact customer service? Can it automatically redact unnecessary sensitive information from customer inputs? How much access to other data in my tech stack will the tool need to work?
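The redaction question in particular is easy to illustrate. Below is a deliberately naive, regex-based sketch of scrubbing likely PII from a message before it leaves your systems, say, before it is sent to an LLM vendor; production tools rely on trained PII-detection models rather than regexes alone, and none of these patterns or names come from any vendor mentioned here.

```python
import re

# Illustrative patterns only. Order matters: card numbers must be matched
# before the looser phone pattern would otherwise swallow them.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "PHONE": re.compile(r"\+?\d[\d().\s-]{8,}\d"),
}

def redact(text: str) -> str:
    """Replace likely PII with typed placeholders before sharing the text."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Refund card 4111 1111 1111 1111, mail jo@example.com or call +1 (555) 867-5309."))
# Refund card [CARD], mail [EMAIL] or call [PHONE].
```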
These can feel like in-the-weeds considerations, but they all come down to one simple practice: companies should have respect, on a human level, for their customers and their personal information. That has been true all along, but it’s ever more important in the age of AI.
“We are in the era of privacy where it matters to customers and more laws are coming into play for both privacy and AI to create guardrails for companies,” Daniels said. “Treat data respectfully and customers will reward the companies that do.”