
The importance of onboarding AI to CX organizations

Gordon Gibson
Director, Applied Machine Learning
AI & Automation | 6 min read

Video transcript

Hey everyone, I’m Gordon from Ada, and welcome to our second video in our series about the role of generative AI in customer service automation. If you need a refresher on generative AI, LLMs, ChatGPT, and what it all means for your business, check out this quick video first, then hop back in here.

At this point, ambitious CX leaders know that they need to incorporate generative AI into their strategy, but you might still be asking yourself, “how do I even get started?”

The best way to approach generative AI is with a human perspective: onboard the model like you would a new employee. The more thoroughly you prepare the AI, the better it will perform from the get-go. There are 3 things you need to teach it to do.

The first is teaching your AI how to read

Here's Mike talking about it:

Mike Murchison: "Teaching your AI to read means that you are making the knowledge and documents across your company accessible to you AI. You knowledge becomes the source of truth for the accuracy of your AI. And so it's critical that you identify all the documents across your organization — chat transcripts, knowledge base articles, conversations — and ensure that they are an accurate representation of your business and your customer service operations."

While the knowledge contained in LLMs like ChatGPT is impressive, be careful about relying on them alone to represent your company: they typically don't have the most accurate or up-to-date information about your business, but you do.

Additionally, training LLMs from scratch is generally very expensive. It can take a significant amount of time, effort, and compute. And while fine-tuning LLMs with your domain-specific data is more cost-effective, it still takes time and may not be something you want to wait for every time you need to update your content.

It’s likely that a far more practical solution is to keep your knowledge bases as your source of truth, and simply have LLMs reference the content that’s there.

Combining LLMs with your company’s information gives you the best of both worlds: the fastest and cheapest way to generate content that is grounded in the most up-to-date reference material.
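To make that concrete, here's a rough sketch of what "referencing your knowledge base" can look like under the hood: retrieve the most relevant articles, then hand them to the LLM as grounding context. Everything in this example is a simplified, hypothetical illustration — the sample articles, the retrieve helper, and the prompt wording are made up for demonstration, not Ada's implementation.

```python
# Minimal retrieval-then-generate sketch (hypothetical, for illustration only).
# The knowledge base stays the source of truth; the LLM just references it.

KNOWLEDGE_BASE = [
    {
        "title": "Refund policy",
        "body": "Refunds are available within 30 days of purchase for annual plans.",
    },
    {
        "title": "Shipping times",
        "body": "Standard shipping takes 3 to 5 business days within North America.",
    },
]

def retrieve(question: str, k: int = 2) -> list[dict]:
    """Naive keyword-overlap retrieval; a real system would use search or embeddings."""
    words = set(question.lower().split())
    scored = sorted(
        KNOWLEDGE_BASE,
        key=lambda article: len(words & set(article["body"].lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(question: str) -> str:
    """Ground the model in retrieved articles so answers reflect current content."""
    context = "\n\n".join(f"{a['title']}:\n{a['body']}" for a in retrieve(question))
    return (
        "Answer the customer using ONLY the reference articles below. "
        "If the answer is not there, say you don't know.\n\n"
        f"Reference articles:\n{context}\n\n"
        f"Customer question: {question}\nAnswer:"
    )

# The resulting prompt is what you would send to an LLM completion API.
print(build_prompt("How long do I have to request a refund?"))
```

The key design point: when your help center article changes, the next answer changes with it — no retraining or fine-tuning required.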

The second is teaching your AI how to perform actions

Here's Mike again.

Mike Murchison: "In order for your AI to automatically resolve customer inquiries, and to do so in a way that makes customer service experiences great for everyone, you need to teach your AI to perform actions. This means making all the APIs across your business accessible to your AI. Otherwise, your AI is only going to be able to provide basic answers, as opposed to performing complex actions like refunding a payment, updating an account, personalizing the experience so that your customers know that you know who they are, and know that you care about taking actions on their behalf."

Let's dig into the API strategy here a little.

It’s important to remember that right now, LLMs work by predicting the next word, and while they do that very well, they can’t actually take action on behalf of a customer. What they can do is recognize when to connect a customer to third-party software that can take that action for them. This is where your API strategy comes in.

Figuring out your company’s API strategy is a partnership between your Product, Customer Experience, and Technology departments. You should ask yourself: “What actions do I want to expose to my customers?”, “How ready are the APIs that power those actions?”, and “How can we surface them in a safe and controllable way?”

You’ll want to work with AI solutions that allow you to manage the business logic behind how you power actions. For example, while you might want an LLM to be able to help facilitate refunds for your customers, you might not want it to proactively suggest a refund, or you may only want to support refunds for certain products.
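Here's a simplified, hypothetical sketch of what that business logic could look like in code. The product list, the issue_refund function, and the tool description are illustrative assumptions, not Ada's implementation or any particular payments API.

```python
# Hypothetical sketch: expose a refund "action" to the AI, but keep the
# business rules (which products, only on request) outside the model.

REFUNDABLE_PRODUCTS = {"basic-plan", "annual-plan"}  # assumed policy, not real data

def issue_refund(order_id: str, product: str, customer_asked: bool) -> str:
    """The LLM decides *when* to request this action; the logic here decides
    *whether* it actually executes."""
    if not customer_asked:
        return "Refunds are only processed when the customer explicitly requests one."
    if product not in REFUNDABLE_PRODUCTS:
        return f"Refunds aren't supported for {product}; handing off to a human agent."
    # call_refund_api(order_id)  # placeholder for your real payments API
    return f"Refund issued for order {order_id}."

# A description like this is what you'd expose to the model as a callable tool:
REFUND_TOOL_SPEC = {
    "name": "issue_refund",
    "description": "Refund an eligible order, only after the customer asks for one.",
    "parameters": {
        "order_id": "string",
        "product": "string",
        "customer_asked": "boolean",
    },
}

print(issue_refund("A-1042", "basic-plan", customer_asked=True))
```

The point of the pattern is control: the model can propose the action, but your guardrails decide whether and how it runs.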

The third is teaching your AI where to find expertise

Back to Mike.

Mike Murchison: "Teaching your AI how to access expertise across the organization means that you AI is trained on the institutional knowledge within the experts' brains within the organization. To do that, you need to make sure that you've:

  1. Identified all those experts
  2. Set up the routing technology to connect them in when needed
  3. Set up the tooling that allows them to offer their expertise to train your AI"

With AI automatically resolving customer inquiries and helping customers take action, you now have the opportunity to route the truly unique inquiries to the people in your organization who are best equipped to handle them. To really optimize the experience for customers, you need your AI to help facilitate handoffs to the right people.
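As a simplified illustration, routing logic like the sketch below is one way this can work: the AI resolves what it's confident about and hands the rest to a specialist queue. The topics, queue names, and confidence threshold here are all hypothetical.

```python
# Hypothetical handoff routing sketch (illustrative names and threshold).

EXPERT_QUEUES = {
    "billing": "payments-specialists",
    "integrations": "developer-support",
    "security": "trust-and-safety",
}

def route_inquiry(topic: str, ai_confidence: float) -> str:
    """Let the AI resolve what it's confident about; hand the rest to specialists."""
    if ai_confidence >= 0.8:
        return "resolved-by-ai"
    return EXPERT_QUEUES.get(topic, "general-support-queue")

print(route_inquiry("billing", ai_confidence=0.35))  # -> payments-specialists
```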

Beyond that, you should think about using AI tools that make it easy for the specialists on your team to share their expertise by helping train the AI. Some considerations might be products that offer SSO with different levels of access, pricing models that don’t have a per-seat charge, and an intuitive interface that’s accessible to non-technical people.

Dive deeper into the lessons

We recently hosted a webinar with a guest speaker from Forrester diving deeper into these 3 lessons, and sharing some practical steps you can take to prepare your CX organization for generative AI. I highly recommend watching it.

If you have any questions, please feel free to reach out; we’d love to hear from you. Otherwise, stay tuned for more videos in this series.

Thanks for watching.

How to leverage generative AI for customer service automation

Unpack the implications of generative AI on the strategy, people and technology powering your customer service organization.

Watch the webinar