The 13 Most Common A/B Testing Mistakes (And How to Avoid Them)

On average, only 12.5% of A/B tests produce significant results. More worryingly, over half of first-time A/B test users say they are dissatisfied with the results of their tests. Finding the right hypotheses to test and getting reliable results is not easy. So, what are the most common A/B testing mistakes, and how do you avoid them?

In recent years, A/B testing has become a standard practice within digital marketing. Despite this, over half of A/B testers are not satisfied with their results. A successful optimization strategy is clearly not as simple as buying a subscription to one of the major A/B testing tools. Some of the biggest causes of dissatisfaction are common and easily avoidable A/B testing mistakes.

Here is a common scenario illustrating the pitfalls of A/B testing:

  • You buy a subscription for an A/B testing tool. It’s well reviewed, so it must work, right?
  • The initial package is not expensive, and it allows you to test thousands of visitors…

3 hours later …

  • Your “Add to Cart” button is a slightly darker green
  • Your credit of 5000 visitors is exhausted
  • The tool says your test is not reliable

So, what are the most common A/B testing mistakes and how can you avoid them?

A/B Testing Mistakes and Common Errors

Changing the size of a button or the colour of some text is often used as an example of an A/B test. Bloggers and some software developers present this as an effortless way to attract more conversions. However, there is a saying about when things seem too good to be true…

Unless your site was badly designed to begin with, such superficial changes are unlikely to increase conversions. The only winner with this type of test is the software provider, as you consume visitor credits and spend more on their software.

Top Tip: Start without assumptions and set reasonable goals. You will achieve more in the long run. 

A/B testing is extremely powerful, but it is also complicated. Without a conversion optimization methodology, including structure, preparation and analysis, no amount of testing will provide reliable results. Investing in an A/B testing tool without establishing a methodology is a waste of time and money.

Like any other form of marketing, a strong A/B testing strategy requires you to think carefully about your customers, your product and your market. 

Top Tip: Establish your goals, metrics, and timetable before considering which A/B testing solution is right for you.

A/B testing presents us with a paradox: on the one hand, you can test anything that occurs to you; on the other, it is hard to know where to start.

A good A/B testing strategy should help you prioritise your A/B tests. Starting with the wrong variables will cost you time and money. 

So, how do you decide what to test?

A/B Testing and Conversion Rate Optimization agencies offer different methods for prioritising your A/B tests. Usually, they involve the following criteria: 

  • Potential: What is the potential improvement from a successful test?
  • Importance: What are the quality and volume of traffic on the tested page?
  • Facility: What resources are needed to (1) run the test and (2) implement the winning variation?

We like to consider an additional factor:

  • “Time to market”: how long it will take between launching the test and implementing the winning variation.

Top Tip: Make sure your CRO or A/B testing team develop a clear set of priorities, based on Potential, Importance, Facility, Time.
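To make this concrete, here is a minimal sketch of how such a scoring model could look in code. The 1-to-5 scale, the equal weighting and the example test ideas are assumptions for illustration, not a prescribed framework.

```python
from dataclasses import dataclass

@dataclass
class TestIdea:
    """A candidate A/B test, scored 1 (low) to 5 (high) on each criterion."""
    name: str
    potential: int       # expected improvement if the test wins
    importance: int      # quality and volume of traffic on the tested page
    facility: int        # ease of building and implementing the variation
    time_to_market: int  # how quickly the winner could go live

    def priority(self) -> float:
        # Simple average of the four criteria; the weights could be
        # adjusted to reflect your own strategy.
        return (self.potential + self.importance
                + self.facility + self.time_to_market) / 4

ideas = [
    TestIdea("Simplify checkout form", potential=5, importance=4, facility=2, time_to_market=3),
    TestIdea("Rewrite product page copy", potential=3, importance=5, facility=4, time_to_market=4),
    TestIdea("Darker green add-to-cart button", potential=1, importance=3, facility=5, time_to_market=5),
]

# Highest-priority ideas first.
for idea in sorted(ideas, key=lambda i: i.priority(), reverse=True):
    print(f"{idea.priority():.2f}  {idea.name}")
```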


Many content management systems offer built-in A/B testing functions. However, to run A/B tests rigorously, you need a purpose-built testing tool.

The number of A/B testing tools on the market is significant, and the list seems to grow every year. Fortunately, we have put together a list of 10 Questions to help you choose A/B testing and split testing software.

Top Tip: Forget add-ons or plugins, you need a purpose-built tool.

Buying a software package that covers only a few thousand visitors is of little use. With that volume, you would only reach significant results under unusually favourable conditions, such as a baseline conversion rate of 10% and an uplift of +15%. To put this in context, it is extremely rare to have a conversion rate over 5%, and improving any conversion rate by +10% would be very impressive.

The point is that with A/B testing, a large volume of traffic is a necessity, not a luxury. Software packages that do not reflect this should be avoided.

Top Tip: Before making this classic A/B testing mistake, consult our guide on test sample sizes.
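If you want to sanity-check the numbers yourself, the standard two-proportion sample-size formula gives a rough estimate of the traffic required. The sketch below uses SciPy and assumes a two-sided test at 95% confidence with 80% power; the baseline and uplift figures are purely illustrative.

```python
from math import ceil, sqrt
from scipy.stats import norm

def sample_size_per_variant(baseline: float, relative_uplift: float,
                            alpha: float = 0.05, power: float = 0.80) -> int:
    """Visitors needed per variant to detect the given relative uplift over
    the baseline conversion rate (two-sided z-test, normal approximation)."""
    p1 = baseline
    p2 = baseline * (1 + relative_uplift)
    p_bar = (p1 + p2) / 2
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p1 - p2) ** 2)

# A realistic e-commerce scenario: 2% baseline, hoping for a 10% relative uplift.
print(sample_size_per_variant(0.02, 0.10))  # tens of thousands of visitors per variant
# The optimistic scenario mentioned above: 10% baseline, 15% relative uplift.
print(sample_size_per_variant(0.10, 0.15))  # still several thousand per variant
```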

The conversion rates used to analyse A/B tests are defined according to a particular goal. This is usually a macro-conversion (a sale or a lead), but it can also be a micro-conversion (adding a product to the basket, for example).

Choosing the right objectives, and therefore the right KPIs, is essential to the integrity of your results. Focusing on irrelevant KPIs is a common A/B testing error.

Top Tip: Decide what your KPIs should be before designing your test.

As with an AdWords or native ads campaign, it is important to establish basic segmentation within an A/B test. One of the most important distinctions is between mobile and desktop traffic.

If you consult Google Analytics, you will find that your mobile conversion rate differs from your desktop conversion rate. Users behave differently on mobile and desktop, so calibrate your A/B tests to take this into account.
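As a quick first step, you can measure that gap yourself from a raw analytics export. Here is a minimal sketch with pandas, using a hypothetical per-session dataset with `device` and `converted` columns.

```python
import pandas as pd

# Hypothetical export: one row per session, with the device category
# and whether the session converted.
sessions = pd.DataFrame({
    "device": ["mobile", "mobile", "desktop", "desktop", "mobile", "desktop"],
    "converted": [0, 1, 1, 0, 0, 1],
})

# Conversion rate and session count per device segment.
by_device = (sessions.groupby("device")["converted"]
             .agg(["mean", "count"])
             .rename(columns={"mean": "conversion_rate", "count": "sessions"}))
print(by_device)
```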

Top Tip: Mobile browsing now accounts for the majority of global web traffic. It is essential to understand how mobile visitors experience your site.

What will your existing customers think if they discover a new layout each time they visit your site? Some may love the new design, but visitors expect a website to be recognisable each time they return. Disorientating regular customers is one of the most harmful A/B testing mistakes that first-time testers make.

The best way to avoid disorientating people is to segment your test, showing variations to new visitors and original pages to returning customers.

Top Tip: Segment your tests to reduce the impact on user experience.

By nature, an A/B test involves comparing A with B. In other words, the process involves testing one thing at a time. 

This does not mean only ever testing one variable; it is worth testing any reasonable hypothesis. However, running a number of tests without structuring the process will extend the time each test takes to produce reliable results. Again, this comes down to the question of sample size.

To test several variations at the same time, you need enough traffic. Otherwise, test one high-priority variable at a time. Attempting to evaluate every possible variation and the relationships between them often leads to no results at all, a common A/B testing error.

Top Tip: Most e-commerce sites have enough traffic to test one or two variations at a time, at most.

Each A/B test exposes part of your traffic to a variation. If the same visitor is successively exposed to several overlapping variations, how do you analyse the results? Running parallel tests without meaning to is one of the most common A/B testing mistakes.

Successfully testing parallel variables is possible, and is known as multivariate testing. However, it requires a large volume of traffic and a testing platform capable of handling it. It is a complicated process, and not one for the faint-hearted.

Top Tip: Focus on strong hypotheses, rather than the number of tests. 

A/B testing is essential for verifying hypotheses. Even so, you should always check the validity of your tests; otherwise, you risk waking up every morning thinking you have doubled your sales when this is not the case!

Calculating statistical significance can be a complicated business, and there is more than one way to conceptualise a test’s p-value. We use a hybrid approach to statistical significance, which provides faster results whilst minimising the possibility of Type I errors.
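The hybrid method itself is described in the guide below. Purely for illustration, this sketch shows the two textbook approaches it draws on: a frequentist two-proportion z-test and a Bayesian estimate of the probability that the variation beats the control. The visitor and conversion counts are invented.

```python
import numpy as np
from scipy.stats import norm

# Illustrative results: visitors and conversions for each variation.
n_a, conv_a = 12_000, 264   # control: 2.2% conversion
n_b, conv_b = 12_000, 312   # variation: 2.6% conversion

# Frequentist view: two-sided two-proportion z-test.
p_a, p_b = conv_a / n_a, conv_b / n_b
p_pool = (conv_a + conv_b) / (n_a + n_b)
se = np.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
z = (p_b - p_a) / se
p_value = 2 * (1 - norm.cdf(abs(z)))
print(f"z = {z:.2f}, p-value = {p_value:.4f}")

# Bayesian view: probability that B's true rate exceeds A's,
# using Beta(1, 1) priors and Monte Carlo samples from the posteriors.
rng = np.random.default_rng(42)
samples_a = rng.beta(1 + conv_a, 1 + n_a - conv_a, 100_000)
samples_b = rng.beta(1 + conv_b, 1 + n_b - conv_b, 100_000)
print(f"P(B > A) = {(samples_b > samples_a).mean():.3f}")
```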

Top Tip: Download the FREE guide to the Hybrid Statistical method, and make sure you understand Bayesian and Frequentist statistics.

To be clear, “too early” simply means stopping your test before the results are reliable. This is one of the most familiar A/B testing mistakes, and possibly the most important to avoid. We have a strict “no peeking” rule when running a test, so that we are not tempted to end it before statistical significance is reached.
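To see why peeking is so dangerous, you can simulate A/A tests, where there is no real difference between the variations, and check the p-value repeatedly as traffic accumulates. The simulation below is a sketch under assumed parameters (a 3% conversion rate, 20 peeks, a 5% significance threshold); the exact inflation of false positives depends on how often you peek.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided two-proportion z-test."""
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = np.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    if se == 0:
        return 1.0
    z = (conv_b / n_b - conv_a / n_a) / se
    return 2 * (1 - norm.cdf(abs(z)))

TRUE_RATE = 0.03           # both variations convert at 3%: any "winner" is noise
VISITORS_PER_ARM = 10_000
PEEKS = 20                 # check the test 20 times as traffic accumulates
false_wins_peeking, false_wins_waiting = 0, 0

for _ in range(2_000):
    a = rng.random(VISITORS_PER_ARM) < TRUE_RATE
    b = rng.random(VISITORS_PER_ARM) < TRUE_RATE
    checkpoints = np.linspace(VISITORS_PER_ARM // PEEKS, VISITORS_PER_ARM, PEEKS, dtype=int)
    # Declaring a winner the first time p < 0.05 at any checkpoint...
    if any(p_value(a[:n].sum(), n, b[:n].sum(), n) < 0.05 for n in checkpoints):
        false_wins_peeking += 1
    # ...versus testing once, at the planned sample size.
    if p_value(a.sum(), VISITORS_PER_ARM, b.sum(), VISITORS_PER_ARM) < 0.05:
        false_wins_waiting += 1

print(f"False positives when peeking: {false_wins_peeking / 2_000:.1%}")
print(f"False positives when waiting: {false_wins_waiting / 2_000:.1%}")
```

With repeated peeking, the false-positive rate climbs well above the nominal 5% you would see by waiting until the planned end of the test.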

Top Tip: Once you start a test, let your platform decide when to end it. 

A/B testing is a fantastic tool, providing you have two essential resources:

  • Time
  • Traffic

If you do not have these resources, it will be impossible to achieve reliable test results.


However, there are other ways to optimize your website. These methods are far better suited to the time-poor and traffic-light…

For example, small and medium-sized businesses can achieve real uplift through customer experience research, copy optimization and customer journey analysis.

There is a wide range of free tools available to help you optimise your site, from heatmaps to customer surveys. These can increase conversions without using as many of your resources.

Top Tip: Starting with customer experience is the easiest way to avoid most A/B testing mistakes.

Conclusion

A/B testing mistakes are easily made. In fact, almost every testing expert has had to learn the hard way! However, by sticking to a few basic principles and browsing our “Top Tips”, you can avoid the worst of them. 

If you are considering A/B testing for your eCommerce site, explore our A/B Testing Guide for 2019.

by Jochen Grünbeck

Jochen is co-author of "Smart Persuasion - How Elite Marketers Influence Consumers (and Persuade Them to Take Action)". After an MBA at INSEAD, he began his career at Airbus. He then moved into management consulting, focusing on purchasing and negotiation strategy as well as cost optimisation projects for blue-chip and midsize companies in France and Germany. His experiences led him to specialise in persuasion psychology, behavioural economics and conversion rate optimisation.