Voice has long been the most complex channel in customer service.
It operates in real time. It carries emotional weight. It unfolds in unpredictable environments. And for many enterprises, it has remained disconnected from broader automation strategies, reliant on IVR systems that route efficiently but rarely resolve fully.
That’s changing.
In our recent webinar, "On the line: How enterprises make AI voice agents work at scale," CX leaders from Ancestry and Branch discussed what it truly takes to deploy AI voice agents inside high-volume, high-stakes environments. Their focus extended beyond voice alone to building a stronger omnichannel customer experience grounded in operational discipline.
What followed was a candid discussion about what works, what doesn’t, and what scaling voice actually requires. Here’s what they learned about making AI voice agents for customer service work at scale.
1. Voice works best when it’s built on unified intelligence
Both Branch and Ancestry approached voice with a clear objective: consistency.
Rather than building voice on separate logic or standalone workflows, they powered it through the same unified Reasoning Engine™ that governs knowledge retrieval, workflow execution, and system integrations across their AI customer service.
That architectural decision directly affects how voice performs under pressure.
AI voice agents operate in real time. They verify users, retrieve account data, follow regulated workflows, and execute actions within a single interaction. When voice runs on fragmented logic, inconsistencies emerge quickly, especially in high-volume environments.
When voice is powered by unified reasoning, the same “brain” determines and executes the best next action across interactions. Policies stay aligned. Integrations remain centralized. Governance stays manageable.
The takeaway: Voice reaches its full potential when powered by the same reasoning foundation that drives your broader AI strategy.
2. Voice exposes what your systems can’t hide
Messaging can conceal operational gaps, but voice can’t. When customers call, they are often distracted, emotional, or in motion.
Branch supports drivers who call from the road, often with background noise or multiple conversations happening at once. Ancestry serves a multi-generational audience, where calls may involve slower speech, interruptions, or even three-way conversations with family members assisting in the background.
Voice forces your AI customer service agent to perform under real-world conditions.
Both teams described early surprises. Customers trained by IVR systems tend to start with one-word prompts: “billing,” “agent,” “account.” But conversational AI works differently. As Joe Wang, Director of CX at Ancestry, noted during the session, the more context customers provide, the better the system performs.
That behavioral shift matters. Modern AI voice agents for customer service must handle:
- Background noise and cross-talk
- Stop-start speech and interruptions
- Multiple questions within a single turn
- Emotional nuance across different demographics
Traditional IVR systems rely on rigid decision trees. When speech deviates, friction increases. Conversational AI, by contrast, must determine the best next action dynamically: retrieving knowledge, executing workflows, accessing backend systems, or escalating with context when needed.
The takeaway: Voice surfaces complexity. The teams that succeed are the ones who design for that reality from the start.
3. Conversational AI without structure doesn’t scale
Generative AI can hold a conversation. That doesn’t mean it can execute a workflow.
Both teams were clear on this distinction. The real challenge begins when AI must follow regulated processes, collect specific information in sequence, validate inputs, and trigger backend actions.
For Branch, disputes and financial workflows require deterministic steps. Certain questions must be asked, certain data must be verified, and regulation leaves little room for improvisation.
For Ancestry, identity verification and policy-bound actions demand similar discipline. This is where structure becomes essential.
Both teams rely on Playbooks: structured, multi-step workflows that mirror the SOPs their human agents follow today. As Alex Mashinski, Director of Support Operations at Branch, described during the session, it’s about giving the AI the same operational playbook human agents use, but designing it intentionally for conversational execution.
The balance is deliberate:
- Conversational intelligence handles nuance and variability.
- Structured execution ensures compliance and consistency.
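One way to picture that split is a minimal sketch in Python. Everything here is illustrative, not Ada's actual API: the step names, validators, and the `run_playbook` helper are hypothetical, but they show how deterministic steps can enforce sequence and validation while the conversational layer handles phrasing.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Step:
    """One deterministic step in a playbook: a prompt plus a validator."""
    name: str
    prompt: str
    validate: Callable[[str], bool]

# Hypothetical dispute-intake playbook mirroring a human agent's SOP.
DISPUTE_PLAYBOOK = [
    Step("amount", "What is the disputed amount?",
         lambda v: v.replace(".", "", 1).isdigit()),
    Step("date", "When did the charge occur? (YYYY-MM-DD)",
         lambda v: len(v.split("-")) == 3),
    Step("confirm", "Do you confirm these details are accurate? (yes/no)",
         lambda v: v.lower() in {"yes", "no"}),
]

def run_playbook(steps, answers):
    """Walk steps in order; every answer must pass validation before
    the next step runs, so compliance-critical data is never skipped."""
    collected = {}
    for step in steps:
        value = answers.get(step.name)
        if value is None or not step.validate(value):
            return {"status": "needs_input", "ask": step.prompt}
        collected[step.name] = value
    return {"status": "complete", "data": collected}
```

The point of the sketch is the ordering guarantee: however naturally the conversation flows, the workflow cannot advance past an unvalidated step.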
The takeaway: Scaling AI voice agents isn’t about making conversations sound natural. It’s about making execution reliable.
Unlock more complex AI resolutions
Browse real examples of AI agents using plain-language instructions to automate multi-step processes with speed and precision.
Explore Playbooks Library
4. Deflection is easy. Resolution is what earns trust.
Both leaders pushed back on the idea that success in voice is defined by deflection alone.
Avoiding escalation is visible and easy to report. But if the underlying issue isn’t resolved, the experience hasn’t improved. As Joe put it during the session, “customers don’t dislike AI. They dislike bad experiences.”
For both organizations, the focus has shifted from simple containment to deeper resolution:
- Passing full context when escalation is required
- Avoiding repetitive questioning
- Reducing handle time for human agents
- Ensuring continuity across channels
At Branch, AI customer service is increasingly positioned as tier one, handling repeatable, structured interactions, while human agents are elevated into higher-skill, nuanced cases.
At Ancestry, success isn’t just measured in deflection. It’s measured in experience quality, long-term engagement, and the broader business impact of resolved conversations.
The takeaway: Containment protects cost, but resolution builds trust. And trust is what scales.
5. AI voice agents require ownership, not oversight
Ancestry and Branch didn’t approach voice as a feature launch. They approached it as an operational evolution.
AI voice agents only scale when the underlying foundation is mature: when intelligence, integrations, governance, and performance management work together.
Both leaders were clear: AI is not something you “turn on” and monitor from a distance. It requires active ownership. As adoption expands, new responsibilities emerge:
- Rewriting knowledge so AI can interpret it correctly
- Building and maintaining API integrations
- Establishing QA processes specific to AI behavior
- Coaching AI responses to improve performance over time
- Treating failures as data, not as proof that the experiment failed
At Ancestry, this meant tighter collaboration between CX, engineering, and security teams. At Branch, it meant elevating roles around knowledge management and performance analysis.
Both teams described the same shift: AI is not a black box. It’s an operational muscle. Scaling AI voice agents for customer service is less about model capability and more about management capability.
The takeaway: Treat AI as a management discipline, not a tool, to see the most durable results.
Putting these lessons into practice
Ancestry and Branch treated voice as a natural progression of an AI capability that was already resolving conversations, executing workflows, and improving over time.
That progression reflects the agentic customer experience (ACX) Operating Model, a unified framework that brings together technology, methodology, and expertise to help enterprises build and manage high-performing AI agents at scale.
In practice, that operating foundation requires one core capability: AI customer service agents must be able to determine and execute the right next action in real time.
That means they need to:
- Retrieve relevant knowledge instantly and accurately
- Run structured Playbooks that follow policy and regulation
- Access backend systems through secure integrations
- Escalate to human agents with full conversational context
- Adapt mid-conversation when new information changes the path forward
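The decision logic behind those capabilities can be sketched as a simple dispatch over conversation state. This is a hypothetical illustration, not the Reasoning Engine's implementation: the state keys, action names, and branch order (verify, fetch, execute, escalate) are assumptions chosen to mirror the list above.

```python
from typing import Any

def next_action(state: dict[str, Any]) -> dict[str, Any]:
    """Pick the next action from the current conversation state.
    Branch order is illustrative: verify identity first, fetch
    backend data when needed, run a playbook for known intents,
    and otherwise escalate with full context attached."""
    if not state.get("verified"):
        return {"action": "verify_identity"}
    if state.get("needs_account_data") and "account" not in state:
        return {"action": "call_backend", "system": "billing_api"}
    if state.get("intent") in state.get("supported_intents", set()):
        return {"action": "run_playbook", "playbook": state["intent"]}
    # No automated path forward: hand off, never drop the transcript.
    return {"action": "escalate", "context": state.get("transcript", [])}
```

Because the function is re-evaluated on every turn, new information (a changed intent, a completed verification) naturally redirects the path mid-conversation.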
When those capabilities work together, resolution becomes predictable, even in the variability of voice.
Voice doesn’t scale through conversation design alone. It scales when AI can act with precision and when performance is actively managed over time.
The real shift underway
Voice has always required more from customer service organizations. It demands accuracy under pressure, clarity in unpredictable conditions, and execution that holds up in real time.
Branch and Ancestry demonstrated that AI voice agents can meet that standard when they are supported by the right foundation. Resolution improves when conversational intelligence is paired with structured workflows, secure integrations, and disciplined performance management.
At the center of that capability is the Reasoning Engine™, Ada’s unified intelligence layer for AI customer service agents, purpose-built to orchestrate knowledge, workflows, and enterprise systems in real time.
On the line: How enterprises make voice AI agents work at scale
In part two of our webinar series, we explore how leading enterprises deploy AI-powered voice experiences that deliver measurable business impact at scale.
Watch on-demand