Every designer has, at least once, faced a situation where they had several
solutions to a problem and weren’t sure which one would work best
for users. Imagine a situation where you’re working on a landing page for a
product and have a few different layouts to choose from. All of them look great
and it’s hard to choose one that you will use in your final product. Thankfully,
there is a simple solution for this problem – A/B testing.
In this article, you
will read about the technique of A/B testing and how it is applicable to the
product design process.
What is A/B testing?
A/B testing (also
known as split testing or bucket testing) is the act of running a simultaneous
experiment between two or more pages or screens to see which performs the best.
And by ‘performs’ we usually mean converts.
A/B testing can improve your bottom line
Proper A/B testing
gathers empirical data that helps your team figure out exactly which
design decisions or marketing strategies work best for your product. A/B
testing is extremely valuable for product design teams because it helps them
learn why certain elements of their experiences impact user behavior. This
knowledge helps teams make data-informed design decisions and be
more specific in conversations with stakeholders (they can say “we
know” instead of “we think”).
Where you can use A/B testing
A/B testing is
applicable to almost any design decision. Headlines, calls to action, images,
search ads – you can test everything that you can change. Of course, the fact
that you can test everything doesn’t mean that you should test everything. It’s
vital to focus on the design decisions that provide the maximum value for you
and your users.
How to run A/B testing
A/B testing is a
relatively simple procedure. All you need to do is prepare two (or more)
versions of a test page or screen and split users between them. Usually,
user traffic is randomly assigned to each variant based on a
predetermined weighting.
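The weighted random assignment described above can be sketched in a few lines of Python. This is a minimal illustration, not a production implementation; the 50/50 weights and the simulated traffic volume are assumptions, not values from the article:

```python
import random

# Hypothetical weights: an even split between control (A) and variant (B).
# For an unequal split, e.g. {"A": 0.6, "B": 0.4}, only this dict changes.
WEIGHTS = {"A": 0.5, "B": 0.5}

def assign_variant(weights=WEIGHTS):
    """Randomly assign an incoming user to a variant based on the weights."""
    variants = list(weights)
    return random.choices(variants, weights=[weights[v] for v in variants])[0]

# Simulate 10,000 incoming users and count how the traffic splits.
counts = {"A": 0, "B": 0}
for _ in range(10_000):
    counts[assign_variant()] += 1
print(counts)  # roughly even split between A and B
```

In practice the testing tool performs this assignment for you; the point is that each user is bucketed at random according to the predetermined weights.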
Below is a simple
6-step framework for A/B testing:
1. Use analytics data to identify areas for optimization
Your analytics can
provide valuable insight into where to start optimizing. If you want to
improve the conversion rate of your app or website, it’s best to start
with high-traffic areas, because they will help you gather valuable data
faster.
2. Define conversion goals
A goal is an
action that you count as a conversion. For example, in the context of a landing
page, the goal might be signing up for product updates.
3. Generate hypotheses on how to improve the conversion
Prepare a list of
ideas for improving the current conversion rate.
Once the list is
ready, review each idea and evaluate it with your team,
considering both the expected impact and the difficulty of implementation. In the
end, you will have a prioritized list of ideas to use in your
design.
4. Create design variations
Start with the top
priority ideas and make the desired changes to an element of your app or website.
5. Run the experiment
Allow real-world users to interact with your design variations and track their progress. A few tools where you can experiment with A/B testing are Adobe Target, Optimizely, and Crazy Egg.
6. Analyze results
Once your experiment
is complete, analyze the results. The tool that you will use for A/B testing
should help you determine whether changing the experience had a positive,
negative, or no effect on visitor behavior.
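Most A/B testing tools handle the statistics for you, but the underlying check is commonly a two-proportion z-test: did the variant’s conversion rate differ from the control’s by more than chance would explain? A minimal sketch, with hypothetical visitor and conversion counts:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: compares variant B's conversion rate
    against control A's and returns the z statistic."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical results: 5,000 visitors per variant.
z = two_proportion_z(conv_a=150, n_a=5000, conv_b=195, n_b=5000)
print(f"z = {z:.2f}")  # |z| > 1.96 means significant at the 95% level
```

With these made-up numbers the lift (3.0% vs 3.9%) clears the 1.96 threshold, so the tool would report a statistically significant positive effect.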
A/B testing checklist
Despite the
procedure of A/B testing being relatively simple, it’s vital to remember a few
important rules:
Decide what exactly you want to test
The first thing to
do when planning an A/B test is to figure out what you want to test. A/B
testing works best for one-variable design decisions when you need to test one
thing at a time. You need to create two different versions of one piece of
content, with changes to a single variable. For example, in the context of a
button, a single variable can be a button’s color, shape, label, etc.
Testing more than
one thing at a time (e.g. a headline and a call-to-action button) is a
multivariate test, which is more complicated to run.
Define metrics you want to collect
Before you start
testing you should have a clear idea of the results you’re looking for. It’s
vital to know your baseline result (i.e. the current conversion rate).
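The baseline is straightforward to compute: conversions divided by visitors over the same period. The numbers below are hypothetical, just to make the calculation concrete:

```python
# Hypothetical figures for the current (control) page.
visitors = 12_000     # unique visitors in the measurement period
conversions = 420     # sign-ups counted as conversions

baseline_rate = conversions / visitors
print(f"Baseline conversion rate: {baseline_rate:.2%}")  # 3.50%
```

This baseline is the benchmark every variant must beat for the test to show an improvement.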
Run test simultaneously
With A/B testing, you can’t test one variation today and another one tomorrow. Why? Because some variables can change drastically over time. For instance, the number of visitors and their interests can vary drastically depending on the day of testing. If you’re testing your landing page on day one of a promo campaign, the results can differ significantly from day two. That’s why it’s important to run all variations simultaneously, so that differences in timing don’t skew the results.
Give a test sufficient time
Not giving each test sufficient time to run is a typical
problem that many product teams face. When a team cuts the
testing period short, it ends up with a limited
number of test participants and non-representative results at the end of
the testing period. Considering the importance of A/B testing and the audience
you have, it’s worth dedicating a few days or even weeks to conducting
tests properly.
Generally,
the time for testing can be calculated based on two variables:
- Average daily visitors: the average number of daily unique visitors the tested page receives (e.g. 10,000)
- Number of variations: the total number of screen or page variations, including the control version (e.g. 3 variants)
It’s a good idea to use these parameters as input for an A/B test duration calculator; Abtasty offers an excellent one. The calculator will tell you how many days you need to run the test.
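If you want to estimate the duration yourself, a common rule of thumb (roughly 80% power at 5% significance) puts the required sample size per variant at about 16·p(1−p)/d², where p is the baseline conversion rate and d is the absolute lift you want to detect. The sketch below uses the article’s example inputs plus an assumed baseline rate and minimum detectable effect; real calculators may use a more precise formula:

```python
import math

def estimated_days(daily_visitors, variations, baseline_rate, min_effect):
    """Rough test-duration estimate using the 16*p*(1-p)/d^2 rule of thumb
    (approx. 80% power, 5% significance, two-sided)."""
    p, d = baseline_rate, min_effect
    n_per_variant = 16 * p * (1 - p) / d ** 2   # visitors needed per variant
    total_needed = n_per_variant * variations    # across all variants
    return math.ceil(total_needed / daily_visitors)

# 10,000 daily visitors and 3 variants (from the article); a 3% baseline
# rate and a 0.5-percentage-point detectable lift are assumed values.
print(estimated_days(10_000, 3, 0.03, 0.005))  # 6 days
```

Note how sensitive the result is to the detectable effect: halving d quadruples the required sample, which is why small improvements need long tests.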
Use cookies to maintain the integrity of the test
Visitors who
participate in A/B testing should always see the same version of the page.
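A common way to achieve this stickiness is to store the assigned variant in a cookie on first visit. Deterministically hashing a stable user ID, as sketched below, gives the same guarantee without server-side state; the variant names and user IDs are illustrative:

```python
import hashlib

VARIANTS = ["A", "B"]

def sticky_variant(user_id: str) -> str:
    """Deterministically map a stable user ID to a variant, so the same
    user always sees the same version of the page (the same effect a
    cookie-stored assignment provides)."""
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    return VARIANTS[int(digest, 16) % len(VARIANTS)]

# The same user always lands in the same bucket across visits.
print(sticky_variant("user-42") == sticky_variant("user-42"))  # True
```

Without this consistency, a user who converts after seeing both versions would contaminate the results of both buckets.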
Consider the state of the product
The procedure of A/B
testing varies depending on the state of a product. There are two typical
scenarios:
- When you
don’t have an established design and have several ideas about which direction
to take.
- When you
have an established design but want to try some new ideas out.
In the first case,
you want to treat all ideas equally, so you will most likely assign equal weight
(traffic) to each solution. In the second case, you might want to give your new
page variants a smaller share of traffic than the existing solution (e.g.
60% of traffic goes to the original design, while the remaining 40% is split among the variants),
because you want to mitigate the risk inherent in introducing new ideas.
Conduct A/B testing on a regular basis
The effectiveness of
anything can change over time, and the results of A/B testing are no
exception. Depending on the nature of your product, you might want to run tests
regularly, anywhere from a few days to a couple of weeks at a time.