
Limitations of traditional A/B testing

March 30th 2022

By Lucy Russell

Head of Product Management

A/B testing, also known as split testing, is a method of testing that compares two versions of a variable (for example a web banner or whole web page), showing each version to equal numbers of users at random to determine which performs better against a business goal.
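As an illustration (a minimal sketch, not a description of any particular tool), a common way to achieve a stable, roughly equal random split is to hash each user's ID into a bucket, so visitors are assigned at random overall but each individual always sees the same version on repeat visits:

```python
import hashlib

def assign_variant(user_id: str, variants=("A", "B")) -> str:
    """Deterministically bucket a user into one of the variants.

    Hashing the user ID keeps the split roughly 50/50 across users
    while ensuring each individual always sees the same version.
    """
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

print(assign_variant("user-1234"))  # same user always gets the same bucket
```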

A/B testing can be useful when you want to test two specific variables against each other. However, traditional A/B testing will only take you so far. Keep reading to learn more about two key limitations of A/B testing and discover an alternative approach.

1) Time and resources

Running an A/B test can take longer than other methods of testing and can be a drain on time and resources – two things marketing teams often lack. The time it takes for a traditional A/B test to reach statistical significance (particularly on low-traffic sites), and then to review the results and deploy the winner, can result in lost revenue. This delay may even mean that a significant winner cannot be found before the campaign has finished. The problem is worse if the changes you’re testing are quite minor, because smaller effects need more data to detect.
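To make the ‘time to significance’ point concrete, here’s a back-of-the-envelope sample-size calculation using the standard normal approximation for comparing two conversion rates (the exact numbers depend on your baseline rate and the lift you want to detect):

```python
from math import ceil

def sample_size_per_arm(p_base: float, p_variant: float,
                        z_alpha: float = 1.96, z_beta: float = 0.84) -> int:
    """Approximate visitors needed per arm at 95% confidence, 80% power.

    Standard normal approximation for a two-proportion comparison.
    """
    variance = p_base * (1 - p_base) + p_variant * (1 - p_variant)
    delta = p_variant - p_base
    return ceil((z_alpha + z_beta) ** 2 * variance / delta ** 2)

# Detecting a 10% relative lift on a 3% baseline conversion rate:
print(sample_size_per_arm(0.03, 0.033))  # roughly 53,000 visitors per arm
```

At, say, 1,000 visitors a day split across both arms, that’s over three months before a verdict – which is why low-traffic sites feel this limitation most.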

Traditional A/B tests also often require manual interpretation and deployment of the ‘winner’, which in turn demands a level of expertise and experience that might not be available in your team.

2) Fluctuating winner

Another limitation of A/B testing is that it asks the tester to project a one-off result indefinitely into the future. Traditional A/B tests assume an unchanging world: they don’t take into account shifts in trends and consumer behavior, or the impact of seasonal events, for example.

In reality, the winner may change over time depending on influencing factors such as the ones mentioned above.

The alternative

Finding the winner quickly is key for maximum ROI. And continuing to check that the winner is still performing better than the other options ensures that performance doesn’t drop off over time.

That’s exactly what our automatic optimization tool does: it quickly finds the best-performing items and automatically serves them to the majority of traffic. The tool then continues to allocate a small amount of traffic to the underperforming items and to monitor their performance. If it detects a noticeable change in an item’s performance, for example due to a change in influencing factors, it automatically shifts traffic so that the now best-performing items are shown the most.
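Our tool’s exact algorithm isn’t detailed here, but the behavior described above is characteristic of a multi-armed bandit approach. As a purely illustrative sketch (an epsilon-greedy strategy with a rolling window – hypothetical example code, not our implementation):

```python
import random
from collections import deque

class AdaptiveAllocator:
    """Illustrative epsilon-greedy allocator with a rolling window.

    Most traffic goes to the current best item; a small share (epsilon)
    keeps probing the others. Scoring on a recent window of outcomes
    lets the winner change if performance shifts over time.
    """
    def __init__(self, items, epsilon=0.1, window=1000):
        self.epsilon = epsilon
        self.history = {item: deque(maxlen=window) for item in items}

    def _recent_rate(self, item):
        outcomes = self.history[item]
        # Optimistic default so brand-new items get tried at least once.
        return sum(outcomes) / len(outcomes) if outcomes else 1.0

    def choose(self):
        if random.random() < self.epsilon:               # small exploratory share
            return random.choice(list(self.history))
        return max(self.history, key=self._recent_rate)  # current best item

    def record(self, item, converted: bool):
        self.history[item].append(1 if converted else 0)

allocator = AdaptiveAllocator(["banner_a", "banner_b"])
item = allocator.choose()              # pick a variant for this visitor
allocator.record(item, converted=True) # feed the outcome back in
```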

Book a demo to see our automatic optimization tool in action and learn more about the other features in our Advanced Testing & Optimization module.

Lucy Russell

Head of Product Management

As Head of Product Management at Fresh Relevance, Lucy works closely with our customers and the wider team to shape the product roadmap and oversees the roll-out of all new features.