Business growth rarely occurs overnight. Rather, it is the result of consistent iteration, testing, and learning, the outcomes of which compound over time. For many teams, the foundation of this practice is A/B testing.
A/B testing is the practice of comparing two versions of an asset against one another to see which is more successful at accomplishing a specific goal. On Facebook, A/B testing could mean comparing variations of copy, creative, targeting, or landing pages to see which generates more sales.
With marketing budgets becoming more scrutinized, it’s important to do everything you can to maximize the return of your Facebook Ads investments. In this article we’ll walk through the foundations of A/B testing and share five best practices for A/B testing on Facebook.
A/B testing is a type of random control experiment that compares two variations to see which is more successful at accomplishing a specific goal. Marketers often employ A/B testing to evaluate how variations in ad creative, copy, and targeting impact conversion outcomes, such as clicks, sign-ups, or purchases.
Often, version “A” is the incumbent in the campaign and can be viewed as the control in the experiment. Version “B”, on the other hand, is a variation of version A. The purpose of the A/B test is to determine the impact that the variation has on conversion outcomes.
1. Create an A/B test in the Experiments tool
Creating an A/B test in the Experiments tool allows you to set up an A/B test that compares two or more ad campaigns. It’s particularly useful when you have active campaigns that you’d like to test against one another, or when you’d like to create each campaign with specific parameters before starting your test. To create an A/B test in the Experiments tool, simply navigate to Experiments, click create a test, and select A/B testing as your test type. From there, you will be able to select the campaigns you’d like to A/B test, define a timeframe, and establish the key metric that will be used to measure campaign performance. For more information, see step-by-step instructions for how to create an A/B test in the Experiments tool here.
2. Create an A/B test in Ads Manager
Creating an A/B test in Ads Manager makes it easy to build a test from an existing campaign. This method is preferable when you’d like to test variations of an existing campaign, such as different copy or creative. Within Ads Manager, select an existing campaign and click A/B test in the toolbar. From there, you can either duplicate an existing ad within the campaign or select another existing ad to test against your control, then create your A/B test. See step-by-step instructions for creating an A/B test in Ads Manager here.
3. Create an A/B test by duplicating an Ad or Ad Set
You can also create an A/B test at any time by duplicating an existing Ad or Ad Set. When you select an existing Ad or Ad Set and click duplicate within the toolbar, a prompt will appear asking if you’d like to run a Split Campaign (for our purposes, the same thing as an A/B test). Once you’ve opted to run a Split Campaign, you’ll be presented with an Ad or Ad Set duplicate that you can edit based on the variations you’d like to compare in your test. With Ad/Ad Set versions A and B created, you can define the test time frame, key metric, and budget, then launch your test. See step-by-step instructions for creating an A/B test by duplicating an Ad here.
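For teams managing campaigns programmatically, Facebook’s Marketing API also exposes split testing through its ad-studies endpoint. The sketch below builds an illustrative request payload for a 50/50 split test; the endpoint path, field names, and campaign IDs here are assumptions based on the SPLIT_TEST ad-study format and should be verified against the current Marketing API reference for your API version before use.

```python
import json

# Illustrative payload for Facebook's Marketing API ad-studies endpoint
# (POST /act_<AD_ACCOUNT_ID>/ad_studies). Field names follow the SPLIT_TEST
# ad-study format but may differ by API version -- verify against the
# current Marketing API reference. Campaign IDs below are placeholders.

def build_split_test_payload(name, control_campaign_id, variant_campaign_id,
                             start_time, end_time):
    """Build a SPLIT_TEST ad-study payload with a 50/50 audience split."""
    return {
        "name": name,
        "type": "SPLIT_TEST",
        "start_time": start_time,  # Unix timestamps
        "end_time": end_time,
        "cells": [
            {"name": "Control (A)", "treatment_percentage": 50,
             "campaigns": [control_campaign_id]},
            {"name": "Variant (B)", "treatment_percentage": 50,
             "campaigns": [variant_campaign_id]},
        ],
    }

payload = build_split_test_payload(
    "Copy test: headline A vs. headline B",
    "<CONTROL_CAMPAIGN_ID>", "<VARIANT_CAMPAIGN_ID>",
    start_time=1735689600, end_time=1736294400,  # a 7-day window
)
print(json.dumps(payload, indent=2))
```

Whichever creation method you use, the underlying structure is the same: two cells, each receiving a mutually exclusive share of the audience.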
1. Start with a hypothesis
Running A/B tests as often as possible is a great way to optimize your Facebook Ads programs over time. Although you should be testing early and often, be careful not to abandon the fundamental experiment methodology. Before creating any A/B test, it’s important to define a hypothesis. According to Merriam-Webster, a hypothesis is an idea that is proposed for the sake of argument so that it can be tested to see if it might be true. Defining a hypothesis before launching an A/B test is important because it gives clear direction on the scope and success criteria for the experiment. Failing to establish a hypothesis before running an A/B test can lead to ambiguous test criteria and inconclusive results.
2. Test one variable
One of the reasons that hypotheses are so important is that they give clear direction on what is being evaluated in the experiment. When running an A/B test, it’s important to establish the one variable that is being compared across variants, and ensure all other variables are as consistent as possible. While there may be many components of your campaigns that you’d like to evaluate, testing multiple variables in one A/B test can lead to uncertainty about which variable is responsible for generating the experiment results. For example, an Instagram Ad A/B experiment that tests both differences in ad copy and ad creative might leave marketers confused about whether it was the new copy or creative that improved results. By testing one variable, it’s easier to attribute experiment results to the variation.
3. Run your test for long enough
Once your test variants are created, it’s important to allow your A/B test a long enough time frame to achieve statistical significance. Facebook Ads Manager allows tests to run for a maximum of 30 days, and recommends a minimum of 7 days to produce conclusive results. That being said, you’ll need to align your test length to the conversion objective being measured in the test. If the conversion event you are measuring in your test, such as a sale, only occurs a handful of times per week historically, you will likely need to run your test for multiple weeks to achieve conclusive results.
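To make “statistical significance” concrete, the sketch below applies a standard two-proportion z-test to hypothetical A/B results (the conversion counts are invented for illustration). Facebook computes significance for you inside the Experiments tool; this simply shows the math behind a conclusive result.

```python
from math import sqrt, erf

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference in conversion rates.

    conv_*: number of conversions; n_*: number of users/impressions.
    Returns (z_score, p_value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled conversion rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical results: 120 sales from 4,000 users (A) vs 160 from 4,000 (B)
z, p = two_proportion_z_test(120, 4000, 160, 4000)
print(f"z = {z:.2f}, p = {p:.4f}")  # p < 0.05 means B's lift is significant
```

With the numbers above, p comes out below 0.05, so the lift would be considered significant; with only a handful of conversions per week, the same lift could easily fail to reach significance, which is why low-volume conversion events need multi-week tests.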
4. Use Facebook A/B testing functionality
While it may seem faster to simply measure the results of two active Facebook campaigns and draw a conclusion, it’s important to leverage Facebook’s A/B testing functionality when directly comparing campaign performance.
When you create an A/B test in Facebook Ads, for example, Facebook ensures that your audiences will be evenly split and statistically comparable, leading to better results. Informal testing, on the other hand, can lead to overlapping audiences and can confuse Facebook’s ad delivery.
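The key property of a formal split is that every user lands in exactly one group. Facebook’s internal mechanism is not public, but the principle can be illustrated with a deterministic hash-based assignment, where the same user always falls into the same cell and the two cells never overlap.

```python
import hashlib

def assign_variant(user_id, experiment_id="exp_42"):
    """Deterministically assign a user to 'A' or 'B'.

    Illustrative only -- Facebook's actual split mechanism is internal.
    Hashing (experiment, user) means each user gets exactly one variant,
    so the two audiences cannot overlap."""
    digest = hashlib.sha256(f"{experiment_id}:{user_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

assignments = [assign_variant(f"user_{i}") for i in range(10_000)]
share_a = assignments.count("A") / len(assignments)
print(f"Share assigned to A: {share_a:.1%}")  # close to a 50/50 split
```

Informal side-by-side campaigns have no such guarantee: the same user can see ads from both campaigns, contaminating the comparison.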
5. Develop a testing culture
Although a single A/B test may uncover important learnings for your team, the true value of A/B testing comes from disseminating the learnings from many tests over time. Turn a single experiment into a testing culture by centralizing your learnings with your teammates, and prompt others to run their own tests. As you develop a library of learnings, don’t be afraid to re-test: the results you discovered last year may no longer be valid in the present.