By now, the benefits of A/B testing are well understood: it helps you make more confident, data-backed decisions about how to optimize your conversational AI strategy, so you can meaningfully improve the CX metrics that matter most to you.

But a lot of the time, despite your best efforts, A/B testing doesn’t yield the results you expect. At best, test after test might only have a small impact on the metric you’re trying to optimize. At worst, your test never reaches a statistically significant result: you simply don’t collect enough data to conclude, with reasonable confidence, that one variant outperforms the other. After a while, you might start to question whether A/B testing is worth the effort you’re putting into it.
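To make the significance question concrete, here is a minimal sketch of how statistical significance is commonly checked for a conversion-rate A/B test, using a two-proportion z-test. The visitor and conversion counts below are hypothetical, purely for illustration.

```python
# A minimal two-proportion z-test for a conversion-rate A/B test.
# Counts are hypothetical.
from math import sqrt, erf

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Return the z-statistic and two-sided p-value for two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)                 # pooled conversion rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))   # standard error
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))    # two-sided
    return z, p_value

# Control converts 120 of 1,000 visitors; the variant converts 150 of 1,000.
z, p = two_proportion_z_test(120, 1000, 150, 1000)
print(f"z = {z:.2f}, p = {p:.3f}")  # p lands just under 0.05, so this would usually be called significant
```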


In fact, according to a study done by VWO, only 1 in every 7 A/B tests is a statistically significant winning test.

Looking to understand how to set up an A/B testing agenda for your conversational experience that actually has a tangible impact on your business? Here are our top 4 tips to set you down the path to success.

Tip 1: Determine the right success metrics

Before you get started, you need to define appropriate success metrics for your experiments. We recommend taking the macro conversion you’re looking to drive (the ultimate action you want users to take) and breaking it into micro conversions: smaller, valuable actions a user completes on the path to that macro conversion.

For example, if you’re an ecommerce company with an objective of increasing revenue per website visitor, your macro conversion might be a completed checkout, measured as completed checkouts per site visitor. This could then be broken down even further into micro conversions like:

  • Increasing add to cart rate
  • Increasing email signups
  • Increasing page views for a specific product page 
  • Increasing visitors with >1 item in cart
  • Increasing product video views

Ideally, your A/B testing agenda will include experiments with both macro and micro conversions as success metrics. While successful A/B tests that optimize for a macro conversion can end up having a larger business impact, your likelihood of success is greater with an experiment that’s optimizing for a micro conversion. 
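As a rough illustration of how macro and micro conversions relate, all of them can be measured the same way: a valuable action divided by the audience exposed to it. The event names and counts below are hypothetical.

```python
# Hypothetical event counts for the ecommerce example above.
events = {
    "site_visits": 20_000,
    "product_page_views": 9_500,
    "add_to_cart": 3_200,
    "email_signups": 1_100,
    "checkouts_completed": 800,   # the macro conversion
}

def conversion_rate(event, audience="site_visits"):
    """Share of the audience that completed the given action."""
    return events[event] / events[audience]

print(f"Macro - checkout rate:     {conversion_rate('checkouts_completed'):.1%}")
print(f"Micro - add-to-cart rate:  {conversion_rate('add_to_cart'):.1%}")
print(f"Micro - email signup rate: {conversion_rate('email_signups'):.1%}")
```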

Tip 2: Identify where your tests will have the biggest impact

There are unlimited opportunities for A/B testing in your conversational experience, so it’s important to focus on the opportunities that will have the greatest impact.

The first step is to identify experimentation opportunities that will be exposed to a large audience and sit on the path to the success metric you’re looking to drive. This not only helps ensure you can reach a statistically significant result; it also means that if a test produces a winning variant, that variant will have a bigger impact on your overall business objective once you roll it out more broadly.
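To get a feel for why audience size matters, here is a rough sketch of the standard sample-size estimate for a conversion-rate test, assuming roughly 95% confidence and 80% power. The baseline rates and lifts below are hypothetical.

```python
# Approximate visitors needed per variant to detect a given absolute lift.
# z_alpha ≈ 1.96 for a two-sided 95% confidence level; z_power ≈ 0.84 for 80% power.
def sample_size_per_variant(baseline_rate, absolute_lift, z_alpha=1.96, z_power=0.84):
    p1 = baseline_rate
    p2 = baseline_rate + absolute_lift
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ((z_alpha + z_power) ** 2) * variance / (absolute_lift ** 2)

# Detecting a small lift takes far more traffic than detecting a large one:
print(round(sample_size_per_variant(0.10, 0.02)))  # ≈ 3,834 visitors per variant for a 2-point lift
print(round(sample_size_per_variant(0.10, 0.05)))  # ≈ 682 visitors per variant for a 5-point lift
```

If a placement only sees a few hundred visitors a week, even a sizeable lift may take months to confirm, which is why high-traffic surfaces are the better starting point.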

When it comes to your conversational experience, try starting by running experiments on:

  • A Proactive Campaign that is displayed to a large audience
  • A high-volume Answer in your bot


Tip 3: Develop hypotheses and prioritize them on a roadmap

Next, you’ll want to identify and document a series of hypotheses to test. A hypothesis is a prediction you make before running an experiment. As part of your hypothesis, you’ll want to state the problem, the proposed solution, and the result you expect the experiment to deliver.

For example, if you are a B2B SaaS brand that’s looking to optimize the number of meetings booked with qualified leads, your hypothesis might look something like this:

  • Problem: Only 56% of leads book a meeting after being qualified in my bot. This is because they have to go to a “book a meeting” form that’s on a separate webpage after being qualified.
  • Solution: Add support for booking a meeting directly in my bot through Ada’s Calendly integration, so leads don’t have to go to a separate webpage to complete this action.
  • Expected Result: The conversion rate for meetings booked will increase.


Once you’ve documented a list of potential hypotheses to test, you’ll want to prioritize them to create an experimentation roadmap. How you prioritize experiments will largely depend on your business. Beyond the size of the audience that will be exposed to the experiment, here are some additional criteria to consider (a simple scoring sketch follows the list):

  • How long does it take for a visitor to notice the change?
  • Does it reduce friction points that get in the way of customers completing the desired outcome?
  • Is the issue something that was discovered via user testing, digital analytics, or qualitative customer feedback?
  • Is the experiment easy to implement?
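As one possible way to turn criteria like these into a ranked roadmap (not a prescribed method), you could give each hypothesis a simple weighted score. The weights, 1-5 scale, and example hypotheses below are all hypothetical.

```python
# Hypothetical weights for the prioritization criteria listed above.
CRITERIA_WEIGHTS = {
    "audience_size": 0.35,            # how many visitors will see the experiment
    "friction_removed": 0.30,         # does it remove a blocker to the desired outcome?
    "evidence": 0.20,                 # backed by user testing, analytics, or feedback?
    "ease_of_implementation": 0.15,   # how quickly can it ship?
}

def priority_score(scores):
    """Weighted sum of 1-5 scores, one per criterion."""
    return sum(CRITERIA_WEIGHTS[c] * scores[c] for c in CRITERIA_WEIGHTS)

hypotheses = {
    "Book meetings directly in the bot": {
        "audience_size": 4, "friction_removed": 5,
        "evidence": 4, "ease_of_implementation": 3,
    },
    "Reword the greeting message": {
        "audience_size": 5, "friction_removed": 2,
        "evidence": 2, "ease_of_implementation": 5,
    },
}

# Highest-scoring hypotheses go to the top of the roadmap.
for name, scores in sorted(hypotheses.items(), key=lambda kv: priority_score(kv[1]), reverse=True):
    print(f"{priority_score(scores):.2f}  {name}")
```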

Tip 4: Be patient

We know. It can be tempting to “just take a look” at your A/B test to see how it’s doing. But early A/B testing results can be misleading, and constantly monitoring your experiments often leads to decisions based on unreliable data. Early discrepancies between the performance of two or more variants are common, but almost all of them are the result of a temporary imbalance that corrects itself as the sample size grows.
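To see why peeking is risky, here is a small, self-contained simulation: two identical variants (an A/A test, where the true difference is zero) are checked repeatedly, and the test is stopped the first time p < 0.05. The sample sizes and number of checks are hypothetical, but the pattern holds generally: peeking produces far more false “winners” than the nominal 5%.

```python
# Simulate repeated peeking at an A/A test (both variants have the same true rate).
import random
from math import sqrt, erf

def p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value from a two-proportion z-test."""
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) or 1e-9
    z = (conv_b / n_b - conv_a / n_a) / se
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

random.seed(0)
TRUE_RATE, VISITORS_PER_CHECK, N_CHECKS, RUNS = 0.10, 200, 10, 1000
false_positives = 0
for _ in range(RUNS):
    conv_a = conv_b = n = 0
    for _ in range(N_CHECKS):
        for _ in range(VISITORS_PER_CHECK):
            conv_a += random.random() < TRUE_RATE
            conv_b += random.random() < TRUE_RATE
        n += VISITORS_PER_CHECK
        if p_value(conv_a, n, conv_b, n) < 0.05:   # "peek" and stop on significance
            false_positives += 1
            break

# With 10 peeks, the share of A/A tests wrongly declared significant is well above 5%.
print(f"False winners when peeking: {false_positives / RUNS:.1%}")
```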

Instead, we recommend that you focus on creating a standardized process from design to execution to reporting with clear stage gates. Create a standard reporting format that includes expected performance and impact on costs or revenue.

And remember, an experiment without a winning result can be just as important as one with a winner. It helps you understand which types of changes aren’t having an impact and lets you rule out invalid hypotheses. Document and learn from your failures the same way you would your successes, and use them to fuel and refine your future A/B testing roadmap.
