If you're running A/B tests on social ads, emails, your website, or your app, it can be difficult to know whether your tests actually affect your bottom-line results. Enter each variant's sample size and number of conversions below to determine whether your split test was statistically significant.
Want to learn more about running A/B tests?
What is statistical significance?
One of the most important concepts in conversion rate optimisation is statistical significance – yet online resources on the topic are rife with examples that have little to do with real business goals. Here's how to understand statistical significance from a business perspective.
How to run A/B tests
A/B testing (also known as split testing) is the gift that keeps on giving. When you run A/B tests, you gain valuable insights into how your users respond to... well, anything really. The sky's the limit! Running tests allows you to optimise for continuous improvement – it’s good for you, and it’s good for your visitors. Here's how to do it.
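One practical detail when running a test: split traffic so each visitor is assigned to a variant at random but consistently, so the same person always sees the same version. Here's a minimal sketch of one common approach, deterministic hashing, in Python (the user ID format and test name are illustrative, not from this page):

```python
import hashlib

def assign_variant(user_id: str, test_name: str = "homepage-cta") -> str:
    """Deterministically bucket a user into variant 'A' or 'B'.

    Hashing user_id together with test_name means the split is
    effectively random across users, yet stable per user, and
    independent across different tests.
    """
    digest = hashlib.sha256(f"{test_name}:{user_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

print(assign_variant("user-42"))  # the same user always gets the same variant
```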
How to calculate statistical significance
Statistical significance is usually expressed as a p-value: the probability of seeing a difference at least as large as yours if there were actually no difference between the variants – in other words, if the result were down to random chance alone. The smaller that probability, the more confident you can be that your changes, not chance, caused the result. Here's how to calculate it.
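As a concrete sketch, here's one common way to compute it: a two-proportion z-test over each variant's visitors and conversions. This is an assumed method for illustration, not necessarily the exact formula this calculator uses, and the sample figures are made up:

```python
import math

def ab_test_significance(visitors_a, conversions_a, visitors_b, conversions_b):
    """Two-proportion z-test for an A/B test (one common method;
    this page's calculator may use a different one)."""
    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    # Pooled conversion rate under the null hypothesis (no real difference)
    p_pool = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF, via math.erf
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p_a, p_b, z, p_value

p_a, p_b, z, p = ab_test_significance(1000, 100, 1000, 130)
print(f"A: {p_a:.1%}  B: {p_b:.1%}  z = {z:.2f}  p = {p:.4f}")
print("Significant at 95% confidence" if p < 0.05 else "Not significant")
```

With those example numbers, B's 13% conversion rate beats A's 10% with p ≈ 0.035, i.e. roughly 96% confidence, which clears the common 95% significance bar.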