Optimization is about finding problems and running experiments to solve them. It’s a discovery of what matters: which pain points we need to solve in the context of the user’s experience with a website. Conversion optimization means finding problems on the website and resolving them through tests, and it involves two critical things: research and testing.
Conversion optimization is where marketing meets science and statistics. It’s so much more powerful than traditional marketing because concepts and strategies that used to rest on creativity and design alone are now grounded in research and data.
And this is the role research and testing play in conversion optimization. They tell us what the problem is (or whether there is a problem at all), feed us data, and guide our next move in optimizing the website for conversion. In layman’s terms, research and testing help us make the website a smoother experience for visitors so that they take the actions we want them to take, e.g., purchasing or subscribing to emails.
Conversion optimization is also grounded in problem-solving. We test for the sake of solving a problem, not because we think something would be a “nice idea” or a “good feature to have” on our website.
Research and testing
Solving the problem requires two things – research (to identify the problem) and testing (to find out how to solve the problem):
Identifying the problem through the ResearchXL Framework
In the context of website conversions, there are four different kinds of issues:
- Instrumentation – when something isn’t being measured, or is being measured incorrectly
- Test/hypothesis – when you know the problem but don’t know the solution
- Investigate – when you’re not sure whether something is actually an issue
- Just do it – when the fix is so obvious and easy that you don’t need to test it at all
In identifying the problem, Peep describes the ResearchXL Framework, which is composed of 6 steps:
- Heuristic analysis – understanding users’ experience on your website: whether your key messages are clear, what’s holding users back from taking action, and whether any elements on each page induce anxiety or distraction.
- Technical analysis – making sure that the website works on various devices and browsers. Involves quality assurance tests like cross-browser testing, cross-device testing, and speed analysis.
- Digital analytics – identifying drop-off points and analyzing what leads users to complete your desired actions.
- Qualitative research – rolling out surveys, whether on-site polls to find out where the big drop-off points are and why, or follow-up surveys for those who have taken action on your site, e.g. made a purchase or subscribed.
- User testing – assigning users a specific task and finding out how they go through it on your website, observing their behavior and interaction within and across pages.
- Mouse-tracking analysis – similar to user testing, this involves watching where users go or click and how far down they scroll, using click maps, heat maps, and scroll maps.
(Want to know the specifics of this framework, including tools to use and how to go about each step? Send me an email or leave a comment below.)
Finding solutions through testing
In conversion optimization, you need data. Set aside gut feelings, opinions, and biases. The only thing that matters is what your tests are telling you.
Guessing the solution doesn’t solve the problem. Peep offers a telling analogy: a patient consults a doctor about a health problem, and the doctor recommends surgery before knowing what the actual issue is. That doesn’t make sense at all, right?
But unfortunately, many “optimizers” do exactly this. They look at a website and tend to jump to conclusions based on their experience, best practices, trends, industry standards, and perhaps personal preference.
But identifying solutions requires data, devoid of any subjectivity or bias. Hence tests are necessary to find the appropriate actions. (I’ll cover the various types of tests in my next article.)
The PXL Prioritization Framework
To find potential solutions for the problems identified through the ResearchXL Framework, we need to test. But research may surface hundreds of problems, so prioritization is key.
Of the hundreds of problems we can potentially find from our research, how do we know which ones to solve first?
This is where the PXL Prioritization Framework enters the picture. For each issue, the framework maps out the issue itself, which of the four types above it falls under, background and context, the action required, and a rating from 1 to 5, with 1 being a minor usability issue and 5 a severe one.
By using this framework and mapping out these elements, we can eliminate subjectivity and rely on a more quantitative approach to deciding which problems to tackle first.
We can’t test several problems simultaneously unless our company has thousands of people to do it. The PXL Prioritization Framework helps us pick the ones that would deliver the most business impact.
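To make this concrete, here’s a minimal sketch in Python of how a prioritization table might be ranked. The issues, severity ratings, and ease-of-implementation scores below are hypothetical placeholders for illustration, not the full PXL spreadsheet:

```python
# A minimal sketch of ranking issues the way a prioritization table would.
# The issues, severity ratings (1-5), and ease-of-implementation scores
# below are hypothetical examples, not real research findings.
issues = [
    {"issue": "Unclear value proposition on homepage", "type": "test/hypothesis", "severity": 5, "ease": 2},
    {"issue": "Checkout button has low contrast",      "type": "just do it",      "severity": 3, "ease": 5},
    {"issue": "Cart events not tracked",               "type": "instrumentation", "severity": 4, "ease": 4},
]

# Rank by severity first, then by ease of the fix, so the highest-impact,
# lowest-effort items float to the top of the testing queue.
for item in sorted(issues, key=lambda i: (-i["severity"], -i["ease"])):
    print(f'severity {item["severity"]}, ease {item["ease"]}  [{item["type"]}]  {item["issue"]}')
```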
(Want to know what the PXL Prioritization Framework table looks like? Send me an email or leave a comment below.)
Once you know which issues to test first, the key is then to:
- Have enough sample size – use a sample size calculator to determine this; it usually requires a baseline conversion rate and a minimum detectable effect, or uplift % (see the sketch after this list)
- Implement across full business cycles – run the test for at least two weeks and at most 28 days, always completing full weeks and never stopping in between
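If you’d like to see what a sample size calculator is doing under the hood, here’s a minimal sketch of the standard two-proportion sample size formula in Python. It assumes a two-sided test at 95% confidence and 80% power, which are common defaults; the example numbers are hypothetical:

```python
import math
from statistics import NormalDist

def sample_size_per_variant(baseline_rate, relative_mde, alpha=0.05, power=0.80):
    """Standard two-proportion sample size estimate for an A/B test.

    baseline_rate: current conversion rate, e.g. 0.05 for 5%
    relative_mde:  minimum detectable effect as a relative uplift, e.g. 0.10 for +10%
    """
    p1 = baseline_rate
    p2 = baseline_rate * (1 + relative_mde)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for 95% confidence
    z_beta = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    pooled = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * pooled * (1 - pooled))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p2 - p1) ** 2)

# Example: 5% baseline conversion, hoping to detect a 10% relative uplift.
print(sample_size_per_variant(0.05, 0.10))  # roughly 31,000 visitors per variant
```

Notice how quickly the required sample grows as the detectable effect shrinks; this is why low-traffic sites struggle to run meaningful tests.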
Once you satisfy these two conditions and reach statistical significance, you know you can stop the test and glean insights from the data.
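For a quick sanity check on significance once the test has run its course, a two-proportion z-test is the textbook approach. This is a minimal sketch with hypothetical numbers, not a replacement for your testing tool’s statistics engine:

```python
import math
from statistics import NormalDist

def two_sided_p_value(conversions_a, visitors_a, conversions_b, visitors_b):
    """Two-proportion z-test: is variant B's conversion rate different from A's?"""
    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Hypothetical results: a p-value below 0.05 is the conventional significance bar.
print(two_sided_p_value(500, 10_000, 575, 10_000))  # ~0.019, significant
```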
There are many ways to approach testing, and I will tackle this in further detail over the next few articles. But here are some ways NOT to go about your testing.
12 common testing mistakes
- Running tests that aren’t necessary.
- Making too many assumptions. Some experienced marketers think they know what will work, even in the absence of data to support those assumptions.
- Copying best practices and other people’s tests. Different websites have different problems, and thus different solutions.
- Testing with too small a sample size. 100 is far too low. If you don’t have enough traffic yet, keep identifying problems and making changes until you have enough to finally roll out a test.
- Running tests on pages with very little traffic.
- Not running tests long enough. You can only shorten a test’s duration on a high-volume site (say, 10,000 transactions monthly).
- Not testing for a full week at a time.
- Not getting a second opinion on what the tools are telling you, and not sending test data to third-party analytics. Tools can give you the data you need, but it’s up to you to interpret it and decide whether it’s valid. Effective optimizers are skeptical of good results or wins; they make sure to double-check with a third-party tool.
- Giving up after the first test of a hypothesis fails. Tests fail more often than they win; the key to conversion optimization is continual testing.
- Not being aware of validity threats, i.e., false positives and negatives.
- Ignoring small gains – even a small percentage improvement each month compounds over the year (see the quick calculation after this list).
- Not running tests all the time. Conversion optimization and growth marketing are all about continuous improvement, nonstop problem-solving, and testing.
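To see why small gains matter, here’s the quick compounding calculation referenced above, sketched with a hypothetical 2% monthly uplift:

```python
# A hypothetical 2% improvement in conversion rate every month for a year.
monthly_uplift = 0.02
annual_multiplier = (1 + monthly_uplift) ** 12
print(f"{annual_multiplier - 1:.1%}")  # about 26.8% cumulative improvement
```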
Did you enjoy reading this article, or do you have questions on any element of conversion optimization? I’d be more than happy to answer your questions and walk you through the specifics, including the tools, frameworks, and step-by-step process.
Email me@tinasendin.com or leave a comment below. Would also love for us to connect on LinkedIn!
Credits to CXL Institute’s Growth Marketing Minidegree, the source of the golden takeaways in this piece.