How to run A/B tests

A/B testing (also known as split testing) is the gift that keeps on giving. When you run A/B tests, you gain valuable insights into how your users respond to different messaging, CTAs, creative – anything, really. The sky's the limit! Running tests allows you to optimise for continuous improvement – it’s good for you, and it’s good for your visitors.

Steps to run an effective split test

To run an effective A/B test, you should use a five-step framework. This helps you follow a set process for each test and gather learnings.

  1. 🤔 Research and hypothesise
  2. 🧪 Create your variations
  3. 🥼 Run your experiment
  4. 🤓 Analyse results
  5. 🧬 Apply and/or iterate

🤔 Research and hypothesise

In split testing, like anything else, prior preparation sets you up for success. Before you get click-happy with setting up variations, you should take a step back and look at the wider picture. This is your opportunity to turn into a mad scientist and dive into data.

Not sure where to start? Try something that will give you quick wins, such as:

  • low conversion rates (like underperforming open rates on your emails or few enquiries through your website)
  • high drop-off rates (like abandoned carts or bounce rates).

This is data you can use to determine both your biggest challenges and your biggest opportunities. If, for example, you have a landing page with consistently high traffic that underperforms on revenue, you may want to test product placements on that page; in that instance, your goal metric would be transactions generated from the landing page.
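
If you like to get hands-on with your analytics data, here’s a rough Python sketch of how you might flag pages with plenty of traffic but a poor conversion rate. The page names, numbers and the 1% threshold are made up purely for illustration – swap in whatever your analytics export gives you.

    # A minimal sketch of spotting "high traffic, low conversion" pages.
    # Pages, numbers and thresholds are invented for illustration only.
    pages = [
        {"page": "/pricing", "visitors": 12000, "transactions": 90},
        {"page": "/features", "visitors": 8000, "transactions": 160},
        {"page": "/blog/launch", "visitors": 15000, "transactions": 30},
    ]

    for page in pages:
        conversion_rate = page["transactions"] / page["visitors"]
        # Plenty of traffic but converting below 1%? A candidate for testing.
        if page["visitors"] > 10000 and conversion_rate < 0.01:
            print(f"{page['page']}: {conversion_rate:.2%} conversion rate")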

It’s likely that you’ll generate a huge list of potential opportunities, so start with the biggest ones and the low-hanging fruit. Another way to prioritise the marketing collateral you want to test is to create a testing backlog and score it with a framework commonly used in product management: Reach, Impact, Confidence, Effort (RICE).
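
If you’d like to see the RICE arithmetic in action, here’s a small Python sketch. The test ideas and their scores are invented; the formula is simply reach × impact × confidence ÷ effort.

    # A rough sketch of RICE scoring for a testing backlog.
    # Ideas and scores are invented; score = reach * impact * confidence / effort.
    backlog = [
        {"idea": "New hero CTA on landing page", "reach": 9000, "impact": 2, "confidence": 0.8, "effort": 1},
        {"idea": "Reworked checkout copy", "reach": 3000, "impact": 3, "confidence": 0.5, "effort": 3},
        {"idea": "Subject line emoji test", "reach": 20000, "impact": 0.5, "confidence": 0.9, "effort": 0.5},
    ]

    for idea in backlog:
        idea["rice"] = idea["reach"] * idea["impact"] * idea["confidence"] / idea["effort"]

    # The highest-scoring ideas go to the top of your backlog.
    for idea in sorted(backlog, key=lambda i: i["rice"], reverse=True):
        print(f"{idea['rice']:>8.0f}  {idea['idea']}")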

Once you have done your research and have a thorough understanding of how users are interacting with your current messaging, grab your smoking pipe because it’s time to generate a hypothesis.

🧪 Set up your split test variations

When you’re setting up your A/B test, you need one or more variations to test against the existing version (the control). When you set up more than one challenger variation, it’s known as A/B/n testing.

Many A/B testing tools will guide you through the variant setup. Some even have a visual editor that makes creating your variants easy.

Regardless of your testing tool, quality assurance is an important step: check that each variant renders correctly and that your conversion tracking fires as expected, so your test gives you actionable insights.

🥼 Run your experiment

Now you’ve come to the part that can take the longest; it’s time to wait for your experiment to run.

Once you kick off your experiment, all you can do is sit back and watch the conversions roll in. Your testing software should be randomly assigning visitors to the control or the challenger(s), then measuring and comparing each interaction with the test variants.
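
If you’re curious about what “randomly assigning” looks like under the hood, here’s an illustrative Python sketch of deterministic bucketing – hashing the visitor ID together with the experiment name so the same visitor always sees the same variant. It’s a simplified example, not any particular tool’s implementation.

    import hashlib

    def assign_variant(visitor_id: str, experiment: str, variants: list[str]) -> str:
        """Deterministically bucket a visitor into one of the variants.

        Hashing the visitor ID with the experiment name means the same visitor
        always sees the same variant, while traffic stays roughly evenly split.
        Illustrative only - not any specific testing tool's API.
        """
        digest = hashlib.sha256(f"{experiment}:{visitor_id}".encode()).hexdigest()
        return variants[int(digest, 16) % len(variants)]

    # Example: an A/B/n test with a control and two challengers
    print(assign_variant("visitor-123", "landing-page-cta", ["control", "variant-a", "variant-b"]))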

It’s a bit of a waiting game here; while you don’t want to end your test without giving it a fair chance to gather valuable insights, some tests will never reach statistical significance.

It can also be tempting to check in on your variants daily, but if your volumes aren’t high, your efforts could be better spent elsewhere while your sample size builds to a level where you can feel confident in the results. Because different personas interact with your variants on different days – weekday browsers behave differently from weekend shoppers, for example – it’s best to run your test for at least one full calendar week.
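
As a rough guide to how big that sample needs to be, here’s a back-of-the-envelope Python sketch using the common n ≈ 16 × p(1 − p) ÷ δ² approximation (roughly 80% power at 5% significance). Treat it as a ballpark, not a substitute for a proper power calculation.

    def min_sample_size_per_variant(baseline_rate: float, relative_lift: float) -> int:
        """Rule-of-thumb sample size per variant (~80% power, 5% significance).

        Uses the common approximation n ~= 16 * p * (1 - p) / delta^2, where p is
        your baseline conversion rate and delta is the absolute difference you
        want to detect. A ballpark figure only.
        """
        delta = baseline_rate * relative_lift  # relative lift -> absolute difference
        return int(16 * baseline_rate * (1 - baseline_rate) / delta ** 2)

    # Example: a 5% baseline conversion rate, hoping to detect a 10% relative lift
    print(min_sample_size_per_variant(0.05, 0.10))  # roughly 30,400 visitors per variant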

If your test has been running for several weeks but you haven’t seen much traffic or many conversions, it might be time to pause this test and look for more fruitful results with another one.

Conversely, if you’ve run your test for at least one calendar week and have seen a healthy number of conversions across your variants, it’s time to analyse your data.

🤓 Analyse your test’s results

A word of caution before proceeding: statistical significance is not the only part of the equation. Why? Because a conversion rate on its own tells you nothing about the sample size behind it.

If variation A has had 1000 visitors and 250 conversions, that’s a 25% conversion rate. If variation B has had only 1 conversion, but also only 1 visitor, that’s a 100% conversion rate.

Because raw conversion rates don’t take sample size into account, it can be easy to overlook the obvious: in this instance, you simply don’t have enough data to know which variant will truly convert at a higher rate.
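
For the curious, here’s a simplified Python sketch of the kind of maths an A/B test calculator does under the hood – a two-proportion z-test. The traffic and conversion numbers in the example are invented, and the test assumes both samples are reasonably large (so it would be meaningless for the one-visitor variant above).

    from math import sqrt
    from statistics import NormalDist

    def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
        """Two-sided p-value for the difference between two conversion rates.

        A simplified sketch of a two-proportion z-test; it assumes both samples
        are reasonably large.
        """
        p_a, p_b = conv_a / n_a, conv_b / n_b
        pooled = (conv_a + conv_b) / (n_a + n_b)
        se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
        z = (p_a - p_b) / se
        return 2 * (1 - NormalDist().cdf(abs(z)))

    # Two variants with comparable traffic: 250/1000 (25%) vs 290/1000 (29%)
    print(two_proportion_z_test(250, 1000, 290, 1000))  # ~0.04, borderline significant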

Once you’re satisfied with your sample size and conversion rates, and have enough results to draw meaningful insights from, go ahead and use our A/B test calculator to check whether your winner is statistically significant.

🧬 Apply or iterate

Hurrah, your test was statistically significant and you are ready to roll out your winning variation to 100% of your visitors.

Let’s talk about A/B tests that are easy for you to implement from a marketing perspective but significantly more complicated to build out in full (for example, changes to your checkout flow).

If you’ve ever asked a developer to “just push some buttons” to make what you consider to be relatively easy changes to your website, you’ve probably seen an answering look of disgust.

Some changes that appear deceptively simple can be complex engineering work that requires a lot of skill and experience to implement – and it’s certainly never as easy as clicking a few buttons. (Really, that’s not a good way to make friends with developers.)

Luckily, there is a bit of a hack you can use to give your website people time to develop a robust solution while also looking after your commercial interests: in your A/B testing tool, turn off your losing variation, and switch your winning variant to be distributed to 100% of your visitors.
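
Conceptually, that’s just shifting the traffic allocation so the winner gets everything. Here’s a toy Python sketch of weighted serving with the losing variation set to 0% – purely illustrative, since in practice you’d change these weights inside your testing tool rather than in code.

    import random

    # Toy sketch: losing variation off (0%), winner served to 100% of visitors
    # while your developers build the permanent version.
    allocation = {"control": 0, "winning-variant": 100}

    def serve_variant(allocation: dict[str, int]) -> str:
        variants, weights = zip(*allocation.items())
        return random.choices(variants, weights=weights, k=1)[0]

    print(serve_variant(allocation))  # always "winning-variant" with these weights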

Once your test is finalised, you might want to test another hypothesis, another channel, or another cohort of users.

A/B testing is the game that never ends.

Good luck!