Dec 20, 2022

How to Do A/B Testing in Marketing

As marketers, we all know it’s vital to look at the data. And when it comes to digital advertising, A/B testing is one of the best ways to get rich data about what works and what doesn’t on your website, in your emails, and in your ad campaigns.

And while A/B testing can get a little complicated, at its core the idea is very simple. So let’s cover the ABCs of A/B testing.

  1. What Is A/B Testing?
  2. Split Testing and Multivariate Testing: What’s the Difference?
  3. A/B Testing Tools
  4. A/B Testing Best Practices
  5. Fine-Tune Your Testing

What Is A/B Testing?

A/B testing is a research methodology that takes a basic concept we all remember from high school science – testing a variable – and applies it to online marketing. In its most basic form, A/B testing takes two versions of something and compares them to see which performs better.

So what do you compare? Well, it could be anything:

  • The text of a CTA on a landing page
  • The size of a font on a web page
  • The color of a button on a display ad
  • The copy of a promotional email

The sky’s the limit here. All you need is some variant you want to test and a measurable metric to evaluate performance (usually something like how many visitors to your site clicked on a subscribe button).

For example, say you’re publishing a new display ad and you want to figure out which of two CTA buttons works better. One says “SHOP NOW” and the other says “BROWSE OUR STORE.” You run an A/B test over 200k impressions and find that people click the “SHOP” version 0.80% of the time, while they click the “BROWSE” version 1.00% of the time. There you go!

Pretty straightforward, right? Well, unfortunately, the devil’s in the details. If you’ve ever read about randomized double-blind studies in medicine, you can probably guess that there are a lot of other factors that come into play. For instance:

  • The samples need to be randomized to control for other factors that could affect the outcome. Maybe one button looks better on mobile, even though that’s not what you were testing for. A good test will account for these possible discrepancies.
  • You need sample sizes that are large enough to determine the statistical significance of your results. A click-through rate of 3% vs. 5% would be a slam dunk, but the 0.80% vs. 1.00% from the example above? Once you allow for margins of error, a gap that small may or may not be meaningful – it depends on the sample size, and the statistics behind determining significance can get tricky pretty quickly (there’s a quick sketch of the calculation right after this list).
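
To make that concrete, here’s a minimal Python sketch of how you might check the 0.80% vs. 1.00% result from the example above with a standard two-proportion z-test. It assumes the 200k impressions were split evenly between the two versions; the traffic split and the 0.05 threshold are illustrative choices, not part of the original example.

    # Minimal significance check for the example above, assuming the
    # 200k impressions were split evenly between the two ad versions.
    from statsmodels.stats.proportion import proportions_ztest

    impressions = [100_000, 100_000]  # impressions per version (assumed 50/50 split)
    clicks = [800, 1_000]             # 0.80% and 1.00% click-through rates

    z_stat, p_value = proportions_ztest(count=clicks, nobs=impressions)
    print(f"z = {z_stat:.2f}, p-value = {p_value:.4f}")

    # A p-value below your chosen threshold (0.05 is common) suggests the
    # difference is unlikely to be random noise; a larger one means you
    # need more impressions before drawing a conclusion.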

Split Testing and Multivariate Testing: What’s the Difference?

You may have also heard the terms split testing and multivariate testing. What are they?

Well, split testing and A/B testing are used almost interchangeably. Sometimes the contexts are slightly different: A/B testing gets used more often to refer to the testing of a single variable in a new channel or campaign, while “split testing” refers to multiple new variations on a pre-existing asset. If those differences sound a little hard to keep track of, you’re not alone: either description works about equally well.

Multivariate testing is different, however. Multivariate testing examines what happens when multiple elements of a page are modified at the same time to find out what combination works best. (Think of it as A/B/C/D testing.)

This is useful because sometimes certain page elements need to be seen in conjunction. Maybe you’ve decided the button on your landing page should be red instead of green and in Times New Roman instead of Helvetica. But maybe Helvetica suddenly looks better on the red background than it did on the green. Sometimes A/B testing each of these elements individually, rather than in conjunction, provides an incomplete picture of the whole.

Again, though, this can get tricky: the more variables you’re testing, the greater the number of combinations, and the larger the sample size needs to be in order to determine statistical significance for each one.
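
To get a feel for how quickly the variations multiply, here’s a small Python sketch that enumerates every combination a multivariate test would need to cover. The page elements and their options are hypothetical examples, not a recommended setup.

    # Rough illustration of how multivariate combinations multiply.
    # The page elements and their options below are hypothetical.
    from itertools import product

    button_colors = ["red", "green"]
    fonts = ["Helvetica", "Times New Roman"]
    cta_texts = ["SHOP NOW", "BROWSE OUR STORE"]

    variants = list(product(button_colors, fonts, cta_texts))
    print(f"{len(variants)} combinations to test")  # 2 x 2 x 2 = 8
    for color, font, cta in variants:
        print(f"  {color} button, {font} font, '{cta}' CTA")

    # Each extra element (or extra option for an element) multiplies the
    # number of variants, and every variant needs enough traffic of its
    # own to reach statistical significance.

Eight variants need roughly four times the traffic of a simple two-version A/B test, which is why multivariate testing is usually reserved for pages that already get plenty of visitors.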


A/B Testing Tools

Let’s take a look at some of the most valuable A/B testing tools online.

Facebook A/B Testing

Facebook includes the option to run A/B tests in its Ads Manager tool, and the company recommends a set of best practices for beginners, including:

  • Testing only one variable at a time for more conclusive results
  • Focusing on specific, measurable hypotheses
  • Determining your ideal audience, budget, and time frame for the test

HubSpot A/B Testing for Facebook

HubSpot has a set of guidelines for A/B testing on Facebook, including a downloadable guide and kit. The page includes a step-by-step walkthrough of Facebook’s Ads Manager tool.

Google Ads A/B Testing

With Google Ads you can set up what Google calls custom experiments. This feature lets you propose and test different elements of your search and display campaigns and split budget between them. You can determine how long you’d like them to run, what budget you want them to have, and, once you’ve evaluated your results, whether you’d like to apply the experiment to the rest of the campaign.

While you can only run one experiment at a time, you can schedule up to five experiments for a campaign and decide what percentage of your original campaign’s budget you want to allocate to each experiment.

Hotjar – Monitoring A/B Testing with Heatmaps

Hotjar has an application that allows you to monitor your A/B testing using heat maps – visualizations using warm-to-cool color spectrums that show which page elements got the most user attention. Did people scroll right past your new button? Did the mouse hover over it 80% of the time, regardless of whether people clicked? These heat maps offer invaluable information about how people interact with your site that can supplement the basic “did they or didn’t they” click metric that otherwise evaluates the performance of your pages. This kind of detailed user data will help you refine further A/B or multivariate tests down the line.

Fathom – UTM Parameters

With Fathom you can use UTM parameters to better understand where traffic to your pages is coming from. UTM stands for “Urchin Tracking Module” – UTM parameters are the tags appended to the end of a URL that encode how a visitor made it to your site. This way you can track exactly how people got to your page (did they follow a social media link? A paid display ad? A link from an email campaign?). You can give the alternate versions in your A/B test distinct UTM parameters so your analytics can tell which version each visit came from.
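
As a quick illustration, here’s how the links for two ad variants might be tagged in Python. The URL and parameter values are made up for this example; any analytics tool that reads UTM tags would see whichever values you choose.

    # Hypothetical example: tagging each A/B variant's link with UTM
    # parameters so analytics can attribute visits to the right version.
    from urllib.parse import urlencode

    BASE_URL = "https://example.com/landing-page"  # placeholder URL

    def tagged_url(variant: str) -> str:
        params = {
            "utm_source": "facebook",       # where the traffic comes from
            "utm_medium": "paid_social",    # the type of channel
            "utm_campaign": "spring_sale",  # the campaign name
            "utm_content": variant,         # which A/B variant was clicked
        }
        return f"{BASE_URL}?{urlencode(params)}"

    print(tagged_url("cta_shop_now"))
    print(tagged_url("cta_browse_store"))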


A/B Testing Best Practices

One of the beauties of A/B testing is that it can be applied to any number of different channels. You can use it to test the effectiveness of display ads, landing pages, email marketing, etc. There are lots of options.

A/B testing is particularly useful when applied to paid media. When you’re paying to display something, naturally you want to get the most bang for your buck. A/B testing lets you test and examine each individual part of your paid ad in order to optimize its performance.

For example, on a Pay-Per-Click (PPC) campaign you can use A/B testing to test the headline, the link, the body text, and even the keywords the ad displays for.

Likewise, for a display ad you can test different images to see which draws more attention. Once you’ve optimized the text and design of the ad itself, you can A/B test for different audiences, different segments of your audience – even for different times of day.

Regardless of how you decide to implement A/B testing, there are some best practices:

Start Testing with One Variable

Multivariate testing can come in handy once you’re more familiar with the process, but to start out, stick with just one variable. The more variables you have at once, the less certain you’ll be which one made the difference. When you only change one element at a time, you’ll have greater clarity about your results.

Give Tests Enough Time to Produce Meaningful Results

A test is only successful when the results you get are statistically significant. That takes time! A one- or two-day test probably won’t give you enough meaningful data to draw any helpful conclusions. Let your test run for at least a week.
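
If you want a rough sense of how long is long enough, a standard power calculation can translate your current conversion rate, the smallest lift you care about, and your daily traffic into a required test length. Here’s a sketch using Python’s statsmodels library; the rates, traffic figure, and 80% power target are illustrative assumptions, not recommendations.

    # Rough estimate of the sample size (and test duration) needed to
    # detect a given lift. All numbers below are hypothetical.
    from statsmodels.stats.power import NormalIndPower
    from statsmodels.stats.proportion import proportion_effectsize

    baseline_rate = 0.02              # current conversion rate (2%)
    target_rate = 0.025               # smallest lift worth detecting (2.5%)
    daily_visitors_per_variant = 500  # traffic each variant receives per day

    effect_size = proportion_effectsize(baseline_rate, target_rate)
    needed_per_variant = NormalIndPower().solve_power(
        effect_size=effect_size,
        alpha=0.05,                   # significance threshold
        power=0.8,                    # 80% chance of catching a real lift
        alternative="two-sided",
    )

    days = needed_per_variant / daily_visitors_per_variant
    print(f"~{needed_per_variant:,.0f} visitors per variant, roughly {days:.0f} days")

With these example numbers the test needs on the order of two weeks, which is why a one- or two-day run rarely settles anything.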

Set a Clear Goal

Know from the outset what metric you’re using to determine the success or failure of your experiment. Lay out a hypothesis about the results. The clearer you are up front about what you’re testing, the easier it will be to make sense of the data afterwards.


Fine-Tune Your Testing

At Cordelia Labs, we know how to craft a successful experiment. Our paid media strategists isolate the important variables, devise targeted testing, and analyze the resulting data to make sure your paid ad campaigns are sharp, focused, and yield the best return on investment.

Don’t settle for ads that only sort-of work. With our expert digital marketing, your paid ad campaign will run like a well-oiled machine, delivering optimal performance that exceeds your expectations. Schedule a call today to find out how we do it.