Business growth rarely occurs overnight. Rather, it is the result of consistent iteration, testing, and learning, the outcomes of which compound over time. For many teams, the foundation of this practice is A/B testing.
A/B testing is the process of comparing two versions of something against one another to see which is more successful at accomplishing a specific goal. In Google Ads, that could mean comparing variations of copy, creative, targeting, or landing pages to see which generates more sales.
With marketing budgets coming under greater scrutiny, it’s important to do everything you can to maximize the return on your Google Ads investment. In this article we’ll walk through the foundations of A/B testing and share six best practices for A/B testing in Google Ads.
A/B testing is a type of randomized controlled experiment that compares two variations to see which is more successful at accomplishing a specific goal. Marketers often employ A/B testing to evaluate how variations in ad creative, copy, and targeting impact conversion outcomes, such as clicks, sign-ups, or purchases.
Often, version “A” is the incumbent in the campaign and can be viewed as the control in the experiment. Version “B” is a variation of version A. The purpose of the A/B test is to determine the impact that the variation has on conversion outcomes.
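Google handles the random split for you when you use its built-in experiments, but the underlying idea is simple: each user is assigned at random, and consistently, to either the control or the variant. As a conceptual sketch only (the function and IDs below are hypothetical, not part of Google Ads), a deterministic hash-based assignment might look like this in Python:

```python
import hashlib

def assign_variant(user_id: str, experiment: str = "headline-test") -> str:
    """Assign a user to control (A) or variant (B) by hashing their ID,
    so the same user always sees the same version of the campaign."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

print(assign_variant("user-1234"))  # e.g. "A"
print(assign_variant("user-1234"))  # same user, same arm on every visit
```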
A/B testing in Google Ads is a great way to improve your programs over time. Luckily for marketers, Google offers a built-in custom experiments capability that makes it easy to get started.
Custom experiments allow you to create an alternate version of an existing campaign and compare how the alternate performs against the original campaign over time. The alternate version shares the original campaign’s traffic and budget, reducing the number of variables that can impact your test results. As your experiment is running, you can visit the Experiment summary table in your Google Ads account and review performance data for each campaign version. You can also end your test at any time and, if the alternate version is a winner, replace the original campaign with it.
See Google’s detailed instructions on how to create a custom experiment in the Google Ads Help Center.
Google Ads provides an array of information you can use to understand how your campaign variant is performing in comparison to the original campaign. By default, Google will display performance results for Clicks, CTR, Cost, Impressions, and Conversions; however, there are additional metrics that you can choose to include in the performance report.
For each performance metric, Google will show an estimated performance difference, as a percentage, between the variant and the original campaign. For example, if you see +2% for Conversions, it’s estimated that your experiment variant generated 2% more conversions than the original campaign. Google will also show a second value that represents the possible performance range, based on the confidence level you have selected.
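To make the arithmetic concrete, here is a minimal Python sketch of how a percentage difference and a confidence range can be derived from raw counts. It is an illustration with hypothetical numbers, not Google’s exact calculation:

```python
from math import sqrt
from statistics import NormalDist

def lift_with_interval(conv_orig, clicks_orig, conv_var, clicks_var, confidence=0.95):
    """Estimate the percentage difference in conversion rate between the
    experiment variant and the original campaign, plus a confidence range."""
    rate_orig = conv_orig / clicks_orig
    rate_var = conv_var / clicks_var
    lift = (rate_var - rate_orig) / rate_orig * 100  # +2 means ~2% more conversions per click
    # Standard error of the difference in rates, used to build the range.
    se = sqrt(rate_orig * (1 - rate_orig) / clicks_orig +
              rate_var * (1 - rate_var) / clicks_var)
    z = NormalDist().inv_cdf(0.5 + confidence / 2)
    low = (rate_var - rate_orig - z * se) / rate_orig * 100
    high = (rate_var - rate_orig + z * se) / rate_orig * 100
    return lift, (low, high)

# Hypothetical example: 10,000 clicks per arm, 200 vs. 204 conversions.
lift, (low, high) = lift_with_interval(200, 10_000, 204, 10_000)
print(f"Estimated lift: {lift:+.1f}% (range {low:+.1f}% to {high:+.1f}%)")
```

Note how wide the range can be at low conversion volumes: a headline +2% difference says little on its own until enough data has accumulated.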
If the results of your A/B test are statistically significant, Google will add a blue asterisk next to your performance results. Statistical significance helps assess whether the results of an experiment are due to chance, or if they’re attributable to the conditions of the test and would remain consistent should the test continue running.
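Google flags significance for you, but the underlying idea can be sanity-checked with a standard two-proportion z-test. The sketch below uses hypothetical numbers and is a simplification of, not a substitute for, Google’s own methodology:

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Return the z statistic and two-sided p-value for the difference in
    conversion rates between a control (A) and a variant (B)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled rate under the null hypothesis that A and B perform identically.
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothetical example: 1,000 clicks per arm, 80 vs. 110 conversions.
z, p = two_proportion_z_test(80, 1000, 110, 1000)
print(f"z = {z:.2f}, p = {p:.3f}")  # a p-value below 0.05 is a common significance threshold
```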
1. Pick the right goal
The purpose of an A/B test is to evaluate which variant is more successful at accomplishing a specific goal. The goal that you use to measure success, therefore, is extremely important for the validity of the experiment.
First, you should choose a goal that can be directly impacted by the variations you are testing. If evaluating ad creative, for example, you can understand which variant performs better by measuring clicks on each ad variant. Choosing sales as your goal, however, may give you an imprecise understanding of performance, as purchases happen further down the funnel and may be influenced by landing pages, product quality, and customer support.
Second, it’s important to make sure the goal you’ve selected can be accomplished a significant number of times within the test period. Google advises that goals need to be accomplished 50-100 times before an experiment can reach statistical significance. If a luxury boat retailer making a dozen sales per quarter ran an A/B test optimizing for sales, it would likely need to wait over a year to generate enough sales to achieve statistical significance, by which time the results may be irrelevant.
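A quick back-of-the-envelope estimate can tell you whether a goal is realistic before you launch. The figures below are hypothetical and simply rework the example above:

```python
from math import ceil

def weeks_to_reach(target_conversions: int, conversions_per_week: float) -> int:
    """Rough estimate of how long a test must run to accumulate a target
    number of conversions (Google suggests roughly 50-100)."""
    return ceil(target_conversions / conversions_per_week)

# A luxury boat retailer making about a dozen sales per quarter (~1 per week):
print(weeks_to_reach(50, 1))   # ~50 weeks: roughly a year before results are readable
# A store making 40 sales per week reaches the same threshold far sooner:
print(weeks_to_reach(50, 40))  # ~2 weeks
```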
2. Run your test for long enough
With the right goal selected, it’s important to keep your test running until you achieve statistical significance. Google recommends running your test for long enough to generate 50-100 conversions per ad group. For campaigns with bottom-of-funnel conversion objectives, such as sales, this may take longer than campaigns with top-of-funnel objectives, such as clicks. Although results may initially look promising, it’s important to continue running your test until it reaches statistical significance before drawing conclusions.
3. Be aware of market conditions while testing
Though you should aspire to test and learn as frequently as possible, it is important to be aware of market conditions when planning your tests. Abnormal periods such as Black Friday and the holiday shopping season may lead consumers to behave in ways that they wouldn’t year-round. Test results achieved in these periods, therefore, may not be replicable at other, ‘normal’ times of the year.
4. Test one variable
Perhaps the golden rule of A/B testing: ensure you’re only testing one variable across your A and B campaign variants. While there may be many components of your campaigns that you’d like to evaluate, testing multiple variables in one A/B test can lead to uncertainty about which variable is responsible for generating the experiment results.
For example, a landing page A/B experiment that changes both the CTA copy and the CTA button color might leave marketers confused about whether it was the new copy or the new color that improved results. By testing one variable, it’s easier to attribute experiment results to the variation.
5. Leverage Google Ads native testing capabilities
Don’t make A/B testing in Google Ads harder than it needs to be. Google Ads offers several powerful testing capabilities that make it much easier to run experiments on the platform.
When you create a custom experiment in Google Ads, for example, Google automatically dedicates a portion of the original campaign’s audience and budget to the test variant. If you were to create a separate campaign for your test variant instead, the audiences for each campaign may differ, compromising your results, and you may not be able to control campaign budget as easily.
6. Develop a testing culture
Although a single A/B test may uncover important learnings for your team, the true value of A/B testing comes from disseminating the learnings from many tests over time. Turn a single experiment into a testing culture by centralizing your learnings, sharing them with your teammates, and prompting others to run their own tests.
As you develop a library of learnings, don’t be afraid to re-test: the results you discovered last year may no longer be valid today.