How to use A/B Testing to optimise your site for conversions
Take a look at the reasons why you should consider A/B Testing and how to use it to optimise your site for conversions.
Successful digital experiences are fine-tuned over time, much like your favourite family recipe that’s been passed through the generations. To this end, a popular and effective tactic to get closer to achieving the goals of your business and your users is baking A/B testing into your Conversion Rate Optimisation (CRO) strategy.
At Yoyo, we’re in the business of designing and developing impactful digital experiences, but we’re no strangers to the fact that polished, aesthetically-pleasing designs are only half the story nowadays. Whilst our User Experience (UX) Designers are user-obsessed and our Designers are experts in aesthetics, the reality is that exceptional experiences are built iteratively, with data providing the rationale for design decisions. When we say designing and developing a website is only half the story, what we mean is that launching a website provides the perfect opportunity to gather a wealth of qualitative and quantitative data which can be used to optimise the experience. Indeed, data can be leveraged to benchmark, make recommendations and test fresh designs; whether that be alleviating specific pain points, improving the bounce rate of a particular page, or increasing conversions.
There are a variety of methods that can be used to generate meaningful data to analyse and base recommendations on, but our poison of choice is A/B testing. The following will delve into what exactly A/B Testing is, why you should do it, and how it’s done, with some top tips along the way.
Hit the big red or blue button on A/B testing
As suggested above, there are a host of methods available to test and analyse changes to your website: Split URL testing, Multivariate Testing (MVT), or Multi-page testing. Whilst all similar and great methods in their own right, we’re going to focus on A/B testing.
To keep it really simple, A/B Testing is a comparison between two variations of the same thing to determine which performs best. Variant A is your control (original) whilst Variant B refers to the instance you’ve made changes to (new). Cool, job done? Not quite. The key difference from the aforementioned alternatives is that A/B Testing focuses on comparing one individual element at a time. From call-to-action button colours to small changes in copy, A/B Testing allows you to accurately determine which element of your page contributed to the differences in your data. Here are a few examples of the types of elements you might test:
- Button colours
- Call-to-action copy
- Body copy
- Media, e.g. images
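Under the hood, an A/B test simply splits your traffic between the two variants. As a minimal sketch of how that split is commonly done (the function, experiment name and 50/50 threshold here are all illustrative, not a specific tool’s API), many setups hash a user ID rather than choosing randomly on each visit, so a returning visitor always sees the same variant:

```python
import hashlib

def assign_variant(user_id: str, experiment: str = "cta-colour") -> str:
    """Deterministically bucket a user into variant A or B.

    Hashing the user ID (rather than picking randomly per page view)
    keeps the assignment stable for returning visitors, which keeps the
    experiment data clean. The experiment name salts the hash so
    different tests bucket users independently.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    # Treat the hash as a number in [0, 1): below 0.5 -> A, otherwise B.
    bucket = int(digest, 16) / 16 ** len(digest)
    return "A" if bucket < 0.5 else "B"

# The same user always lands in the same bucket for a given experiment.
assert assign_variant("user-42") == assign_variant("user-42")
```

Because SHA-256 output is effectively uniform, roughly half of your users land in each bucket over time, without needing to store each visitor’s assignment.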
Now, whilst you can indeed test all of the above, it’s important that you consider aspects of your site that are related to both your business’ and users’ goals - do you want to see an uplift in conversions, increase users’ scroll depth, or improve bounce rate? Defining your goals enables you to track performance against relevant metrics and focus on the specific elements you want to make incremental, data-driven changes to.
What’s more, when A/B Testing is employed as part of a wider CRO strategy, one which might take into account your internal priorities and qualitative user feedback, for example, there is greater potential for success over an extended period of time. After all, your audience’s needs and trends change day-to-day, so an ‘always on’, test-and-learn approach to A/B Testing allows you to evolve with the times and continuously optimise your site.
You've made it this far, now you want the 'how'
Here’s the tasty bit - how to conduct an A/B test. We’ve broken the process down into a few steps which outline how to create a comprehensive plan for A/B Testing.
Identify problem areas
You’ve likely already got access to a host of data, or more than you might think - use it to inform where to focus your tests. Dig into this goldmine and take a look at your site to identify opportunities for optimisation. Keep your business and user needs in mind in order to align your efforts with your overall goals and objectives. Begin by asking which pages have the most views and are crucial to the success of your website, or where bounce rates indicate gaps in the user experience.
If you don’t have this data to hand, a Usability Review or User Testing are great methods to elicit insights into your current experience and direct your A/B Testing efforts.
Form your hypotheses
Ok, so you’ve made a data-driven, goal-oriented decision about where to focus. Now you need hypotheses to inform which elements you want to test, and the metrics you will use to measure performance and make design recommendations off the back of your experiments.
A hypothesis is a prediction you make before running a test; it helps you identify exactly what you’re looking to learn from it. Your hypotheses should consist of a variable, a result and a rationale - they might look like the following:
- If we change the colour of the newsletter call-to-action button from blue to red we will see an uplift in conversions because it will be more prominent to users
- If we move our showreel video on the homepage above the fold we will see an increase in session duration because interactive media engages users for longer
- If we include more emotive copy in the introductory paragraph we will increase the average scroll depth because users will develop a connection with the content and want to read more
Choose your variations
The A, the B, the tomayto, the tomahto; variations refer to the changes you make to test against your hypotheses. Make sure to limit your changes to one per test so that you can identify exactly what is contributing to the difference in performance between variants A and B. This makes it easier to draw conclusions and make recommendations later on. Take a look at the ‘variables’ within the hypotheses above to get an idea of the type of variations you might test, and don’t forget to keep key Usability Principles in mind when choosing your variants.
Run your test
You’re getting ready to hit start, but you’re questioning how long to run your test for. Well, meet our new catchphrase: ‘It depends!’ Every website is different, especially in terms of the amount of traffic it receives, so duration matters much less than statistical significance. If your site sees a substantial amount of traffic within a short period of time, you might quickly collect enough data to establish statistical significance - in other words, enough data to trust that your results are representative of your audience in general. But remember it works the other way around too: a lower-traffic site may need to run the same test for much longer.
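Statistical significance for a conversion test is commonly checked with a two-proportion z-test: it asks how likely the observed difference between A and B would be if the two variants actually converted at the same rate. Here’s a minimal sketch using only the Python standard library; the visit and conversion counts are made-up illustrative figures, not real results:

```python
from math import erf, sqrt

def conversion_significance(conv_a: int, visits_a: int,
                            conv_b: int, visits_b: int) -> float:
    """Two-proportion z-test: p-value for 'A and B convert equally'."""
    p_a, p_b = conv_a / visits_a, conv_b / visits_b
    # Pooled conversion rate under the assumption of no real difference.
    pooled = (conv_a + conv_b) / (visits_a + visits_b)
    se = sqrt(pooled * (1 - pooled) * (1 / visits_a + 1 / visits_b))
    z = (p_b - p_a) / se
    # Two-tailed p-value from the standard normal CDF.
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Hypothetical numbers: 200/5000 conversions on A vs 260/5000 on B.
p = conversion_significance(200, 5000, 260, 5000)
print(f"p-value: {p:.4f}")  # below the common 0.05 threshold here
```

With these illustrative numbers the uplift is significant at the conventional 0.05 level; with a fraction of the traffic, the very same conversion rates might not be, which is why sample size rather than calendar time should end your test.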
Analyse your results
The results are in, and last but by no means least it’s time for some analysis. Which variant performed best? Luckily, we can simply return to our initial hypothesis to find out whether it has been proved or disproved. Whichever way the results fall, there is an opportunity to learn and iterate on past tests. Whether you identify the variant you want to implement going forward or you need to rethink your hypothesis, this is the learning part of a ‘test and learn’ strategy and it’s all part of the fun. Look back at the decisions you made whilst planning your A/B test and determine what you can glean from the results to inform future iterations, continuing to optimise your site time after time.
We know that A/B Testing is not a solve-all for every digital experience out there, but it sure is a simple, cost-efficient and evidence-based method to gain a quick understanding of a challenge or question you have. It enables you to test your assumptions and make impactful decisions which ultimately improve the user experience, positively impact your bottom line and elevate your business above the competition.
Drop us a line at email@example.com and speak to a member of the team to find out how we can help you optimise your site for conversions.