A/B testing techniques from the experts

Have you ever wondered how changing one tiny element on your website could possibly give you more conversions? Would changing the signup button’s colour from blue to green get you more signups? Or could putting a “Filter” label on your e-Commerce site’s filter box prevent drop-offs (people leaving the page because they can’t find what they’re looking for)?

This is what A/B testing is all about.

What is A/B testing?

Also known as split testing, A/B testing is defined as: [1]

… a process of showing two variants of the same web page to different segments of website visitors at the same time and comparing which variant drives more conversions.

Here’s how it works:

Every one of your website visitors represents an opportunity: to turn them into a paying customer, an email subscriber, or a form-filler.

But whether that happens depends largely on how user-friendly your website is. In a growth marketer’s world, the more optimised your website is, the higher the conversion rate.

To find out whether your website is optimised and how to make it better, you’ll need a systematic, data-based approach to pinpoint the elements that are likely to give you the most bang for your buck.

An A/B test involves comparing an existing element on your website with a “challenger.” Whichever yields the better result is considered the “winner.”

You pick an element on your website to tweak (read: experiment on), run an A/B test, and determine the winner.

That’s the simple way of explaining a split test.
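To make that concrete, here’s a minimal sketch (in Python, with made-up numbers) of how you might compare the two variants’ conversion rates and check whether the difference looks real rather than random noise, using a standard two-proportion z-test. None of these figures come from the article.

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(conv_a, visitors_a, conv_b, visitors_b):
    """Compare two conversion rates; return the z statistic and two-sided p-value."""
    p_a, p_b = conv_a / visitors_a, conv_b / visitors_b
    pooled = (conv_a + conv_b) / (visitors_a + visitors_b)
    se = sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothetical numbers: A is the existing page, B is the challenger.
z, p = two_proportion_z_test(conv_a=200, visitors_a=10_000,
                             conv_b=260, visitors_b=10_000)
print(f"z = {z:.2f}, p = {p:.3f}")  # a p-value below 0.05 suggests B really is better
```

Dedicated A/B testing tools do this (and much more) for you; the point is simply that “better” is judged statistically, not by eyeballing two numbers.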

But it’s not as easy as randomly picking an experiment. Read on to learn how the experts do it. 👇

Why you should A/B test

A/B tests are incredibly important if you’re not seeing as many leads as you’d like. If you run an e-Commerce website, you’ll want to perform a split test if you’re seeing high cart abandonment (people leaving their cart without completing payment). If you run a content website or a blog, you’ll want to know why you have such low engagement or short time on page.

Split tests can help you find out why. In an optimizer’s world, these points of low traction are called “leaks in the conversion funnel,” as if money is leaking out somewhere you can’t see.

Other benefits of an A/B test include the following:

  • Get to know your website visitors better by learning their pain points… and actually being able to solve them. When you compare an existing element with the “challenger,” knowing the “winner” helps not just you but also your visitors find what they’re looking for on your website. A split test can show you, for instance, that the “add to cart” button is not very visible on the page. You’d want to make it more obvious, and the test can tell you how. 
  • You will get higher ROI from your existing audience. Leverage return visitors and make sure you give them the best website experience. In turn, you will see better ROI from this group than from newly acquired visitors. Studies show that: [2]
    • Acquiring a new customer can cost five times more than retaining an existing customer.
    • Increasing customer retention by 5% can increase profits by 25–95%.
    • The success rate of selling to a customer you already have is 60-70%, while the success rate of selling to a new customer is 5-20%.
  • Change things up for the better without a massive effort. You don’t need to hire a web designer to conduct an A/B test. Maybe you just need to tweak how your product description is written, change the colour of a button, move it somewhere else on the page, or try removing the video from your home page. Small changes like these can produce significant improvements that impact your revenue.

Here’s how experts conduct an A/B test

If you’re still here, then you’re likely interested in how experts perform A/B tests.

Okay – I am by NO MEANS an expert. But I’ve learned heaps from an expert optimizer and would love to share my key learnings.

The following frameworks and methodologies are from Ton Wesseling’s A/B Testing Mastery course at CXL.

The course lays out a nice map of how to go about an A/B test, from planning through execution to analysing the results.

In this article, I’ll cover what’s in PLANNING.

Do you have enough data? The ROAR rule of thumb model

ROAR stands for the four Optimization phases: Risk, Optimization, Automation and Re-think.

The ROAR model is a framework for how many conversions per month you need, and what % impact you should aim for, depending on which optimization phase you’re in.

Put simply, it tells you how many experiments you can run, and when.

If you have a low-traffic website with fewer than 1,000 conversions per month, you may not be ready to run a full-on A/B test just yet.

Note: I did not say you *can’t* run an A/B test. You can – but it will be hard to find a winner because the data (the sample size) is still too small. Maybe that’s why this phase is called RISK – you can run your experiment based on what you think is the problem, but you can’t really be sure, based on the results, that it is indeed the case.

By “full-on A/B test,” I mean that having more than 1,000 but fewer than 10,000 conversions a month allows you to go through the process discussed a little further below.

The line between RISK and OPTIMIZATION tells us that you can run an A/B test once you have over 1,000 conversions. What do you need to look out for? The challenger has to show at least a 15% improvement – or uplift – over the status quo.

For instance, if your status quo sign-up button (say, a gray one) is giving you 100 sign-ups, your challenger sign-up button (maybe an orange one this time) should bring you at least 115 sign-ups.
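To put that in code, here’s a tiny sketch of the uplift calculation, using the hypothetical sign-up numbers above (and assuming both buttons were shown to roughly the same number of visitors):

```python
def relative_uplift(control_conversions, challenger_conversions):
    """Relative improvement of the challenger over the status quo."""
    return (challenger_conversions - control_conversions) / control_conversions

control = 100     # sign-ups from the gray (status quo) button
challenger = 115  # sign-ups from the orange (challenger) button

uplift = relative_uplift(control, challenger)
print(f"Uplift: {uplift:.0%}")               # Uplift: 15%
print("Meets the 15% bar:", uplift >= 0.15)  # Meets the 15% bar: True
```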

You can perform more and more tests as you near 10,000 conversions.

When you reach the red line – 10,000 conversions – you can start running 4 A/B tests a week, or around 200 a year. You may need to hire a full team of A/B testers or optimizers at this point, because a single person won’t be able to keep the wheel turning alone. Most importantly, you’ll want to automate your testing even with a bigger team. Hence the AUTOMATION phase.

At 10,000 conversions or more, you should see an uplift of at least 5%.

You can keep running tests over a matter of weeks (to work out how many, you may use a calculator like this). As the number of conversions plateaus, step back and RE-THINK: review your tests and see whether you can already declare a winner.
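If you’re curious what such a calculator does under the hood, here’s a rough sketch using the classic two-proportion sample-size formula. The baseline conversion rate, target uplift, and weekly traffic are made-up figures, and real calculators may make slightly different assumptions:

```python
from math import ceil
from statistics import NormalDist

def visitors_per_variant(baseline_rate, uplift, alpha=0.05, power=0.80):
    """Visitors needed per variant to detect a relative uplift over the baseline,
    using the normal-approximation formula for two proportions."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + uplift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance level
    z_beta = NormalDist().inv_cdf(power)           # statistical power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2)

# Made-up example: 2% baseline conversion rate, hoping to detect a 15% uplift.
n = visitors_per_variant(baseline_rate=0.02, uplift=0.15)
weekly_visitors = 5_000  # made-up traffic per variant per week
print(f"~{n:,} visitors per variant, roughly {ceil(n / weekly_visitors)} weeks at this traffic")
```

The smaller the uplift you want to detect, the more visitors (and weeks) you need – which is exactly why low-traffic sites in the RISK phase struggle to find clear winners.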

Know your KPI

Or key performance indicator. So when conducting A/B tests, what should you measure?

Many marketers chase in-website clicks. But in the grand scheme of optimization, clicks mean nothing on their own. They don’t bring in the moolah $$$. The real bucket of gold is Potential Lifetime Value.

“Life Time Value or LTV is an estimate of the average revenue that a customer will generate throughout their lifespan as a customer. This ‘worth’ of a customer can help determine many economic decisions for a company including marketing budget, resources, profitability and forecasting.” [3]

Potential Lifetime Value should be the aspiration, the target goal for any split test.

Depending on your website’s traffic, Potential Lifetime Value may or may not be achievable as a metric. Hence, it’s okay to settle for revenue per user, transactions, or behaviour for now.

The gist: clicks alone are not enough to measure the results of your A/B test.
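For concreteness, here’s a small illustrative sketch of the kind of metrics you’d compare instead of raw clicks: revenue per user, plus a common back-of-the-envelope lifetime-value estimate. The formula and all figures are illustrative, not taken from the article or the cited course:

```python
def revenue_per_user(total_revenue, users):
    """Average revenue generated per user over a given period."""
    return total_revenue / users

def simple_ltv(avg_order_value, orders_per_year, years_as_customer):
    """Back-of-the-envelope LTV: average order value x purchase frequency x lifespan."""
    return avg_order_value * orders_per_year * years_as_customer

# Made-up numbers for a control (A) and a challenger (B) over the same period.
variant_a = revenue_per_user(total_revenue=12_500, users=4_000)
variant_b = revenue_per_user(total_revenue=13_900, users=4_000)

print(f"Revenue per user  A: ${variant_a:.2f}  B: ${variant_b:.2f}")
print(f"Estimated LTV: ${simple_ltv(avg_order_value=45.0, orders_per_year=4, years_as_customer=3):,.2f}")
```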

Research and obtain insights: The 6V model

Before you set up your A/B test, you need to get more insights about your visitors, competitors and your website.

The 6V model is a great guide for navigating the whole research and insights-gathering process. The 6 V’s refer to the following:

  • Value. Know the company. What are its mission, vision, and values? Its strategies? Short- and long-term goals? Product and business KPIs? What is the product or the business all about?
  • Versus. Did you know you can actually listen in on what your competitors are doing in terms of A/B tests? Obviously, you can look at their websites. But you can also find out where your audiences overlap, which A/B tests they are currently running, which tools they are using, and track the changes on their websites. (If you want to know how, send me a DM.)
  • View. What do web analytics say? What are the behaviours around the landing pages? Where does your traffic come from? What are your visitors’ journeys through your website? What are the behavioural and demographic segments? What do the heat maps and scroll maps say?
  • Voice. What do your customer support team, chat logs, and history of questions say? What are the comments on social media? Conduct user research through customer interviews, focus group discussions, and usability studies.
  • Verified. What do scientific literature and psychology studies say? These normally feed into decision-making.
  • Validated. What have you tested previously? What are the insights from past tests? Any validated tests?

Hope you enjoyed this article!

This piece only scratches the surface of what A/B testing is all about. There are still other aspects to cover, like the Execution and Results phases of the process. Watch this space for articles on those soon!

In the meantime, if you have any questions or would like to discuss this further, feel free to leave a comment below. You can also email me@tinasendin.com or connect on LinkedIn.


Read more growth marketing articles here

About the Author

Tina Sendin is a full-stack marketer with over 10 years of marketing and business development savvy driving results for startups, SMEs and multinationals. This is her space for sharing trends, insights, hacks, and updates on growth marketing and conversion optimization.
