After working in the CRO field for the last five years, I’ve had the chance to work with some great teams. Often, though, I find one vital piece missing from their CRO programs. Maybe they’re doing amazing research but have no QA process. Or they have great testing hypotheses but no way to know which to test first! Because of this, I’ve put together a quick checklist to make sure you aren’t forgetting any crucial elements of your CRO program.
I’ve provided a brief summary of the steps below. You can find the full, expanded descriptions in our Beginner’s Guide to CRO A/B Testing white paper.
1. Research:
The most important part of running an A/B test or CRO program is making sure you’ve done your research. Without it, you’re just guessing at what the conversion barriers are.
2. Experiment Goals:
You need to identify your testing goals. These are the key performance indicators (KPIs) by which you will measure the success of your testing program.
3. Hypothesis:
Ensure you have a fully developed testing hypothesis. Use your research and goals to determine which changes are most likely to move your desired metrics.
4. Test Duration and Validation:
The next step is making sure your test can validate in a reasonable amount of time. Two factors contribute to a test reaching validity:
- Valid sample size (enough traffic)
- High conversion impact (large detectable effect)
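To make the traffic question concrete, here’s a rough back-of-the-envelope estimate using the standard two-proportion sample-size formula (a sketch only; the 3% baseline and 0.6-point lift are made-up example numbers, and your testing tool’s own calculator should be the final word):

```python
from statistics import NormalDist

def sample_size_per_variation(baseline, mde, alpha=0.05, power=0.8):
    """Approximate visitors needed per variation for a two-proportion test.

    baseline: current conversion rate (e.g. 0.03 for 3%)
    mde: minimum detectable effect, absolute (e.g. 0.006 = a 20% relative lift)
    """
    p1, p2 = baseline, baseline + mde
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance
    z_beta = NormalDist().inv_cdf(power)           # desired statistical power
    pooled = (p1 + p2) / 2
    numerator = (z_alpha * (2 * pooled * (1 - pooled)) ** 0.5
                 + z_beta * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return int(numerator / (p2 - p1) ** 2) + 1

# Example: 3% baseline, hoping to detect a lift to 3.6%
print(sample_size_per_variation(0.03, 0.006))
```

Notice how the two factors above trade off: more traffic lets you detect smaller effects, while a bigger expected impact means you need far fewer visitors to validate.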
5. Prioritization:
Once you’re confident you can run a valid test, you need to prioritize your hypotheses. A simple way to prioritize is by level of effort and impact: estimate how much effort it will take to build the test and what kind of impact it can have on your KPIs.
(If you need help with your program, we use the 3Q Prioritization Framework, a proprietary process developed to rank tests more precisely than level of effort and impact alone. Contact us to learn more!)
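As a simple illustration of effort-vs-impact ranking (a hypothetical backlog with made-up test names and scores, not the 3Q Framework), you can sort by the impact-to-effort ratio so that high-impact, low-effort tests rise to the top:

```python
# Hypothetical backlog: each idea scored 1-5 for expected impact and effort
hypotheses = [
    {"name": "Simplify checkout form", "impact": 5, "effort": 3},
    {"name": "New hero headline",      "impact": 2, "effort": 1},
    {"name": "Rebuild product page",   "impact": 4, "effort": 5},
]

# Rank by impact-to-effort ratio: quick wins and big swings float up
ranked = sorted(hypotheses, key=lambda h: h["impact"] / h["effort"], reverse=True)
for h in ranked:
    print(f'{h["name"]}: {h["impact"] / h["effort"]:.2f}')
```

Even a crude score like this keeps the roadmap honest: it forces you to write down why one test should run before another.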
6. Test Setup and QA:
Next, you’ll start building your tests. Depending on which testing tool you use, you can usually find online resources for building your test. Once the test has been built, you want to make sure that you and your team QA the test before it is launched live to your site visitors.
7. Analyze Results:
There are three ways this can go. You will either end up with a winning test, a losing test, or inconclusive results:
- Winning test: If you had a solid hypothesis and your goals validated a lift, you’re set to move forward with your variation as the new control. Use this win to guide your roadmap of follow-up testing opportunities.
- Losing Test: If the tested variation validated a decrease in your KPI, don’t just chalk it up to being a bad idea and move on to the next one. Dig into the results to see what insights can be gleaned from the test. Losing tests can guide your customer theory for what does and doesn’t work for your visitors.
- Inconclusive Results: Inconclusive outcomes often come down to an insufficient sample size. Sometimes, though, there simply isn’t enough difference between the control and variation to reach any level of confidence in performance. Do your best NOT to make business decisions based on these results. Unless you have an analytics team at your disposal to dig into the statistics, avoid claiming “directional” improvement. Update your roadmap to include follow-up tests on higher-traffic pages, or with variations that have the potential for greater impact.
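If you want a quick sanity check on whether a difference is likely just noise, the classic two-proportion z-test can be sketched in a few lines of Python (a simplified illustration with made-up visitor counts; your testing tool’s statistics engine is the real authority here):

```python
from statistics import NormalDist

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference in conversion rates.

    conv_a / n_a: conversions and visitors in the control
    conv_b / n_b: conversions and visitors in the variation
    Returns the p-value; a small value (e.g. < 0.05) suggests the
    difference is unlikely to be random noise.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = (pooled * (1 - pooled) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_b - p_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Example: control converts 300/10,000; variation converts 360/10,000
print(f"p-value: {two_proportion_z_test(300, 10000, 360, 10000):.3f}")
```

A high p-value is exactly the “inconclusive” case above: resist the urge to call a winner, and plan a follow-up test with more traffic or a bolder variation instead.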
Now that you have the basics of A/B testing in mind, you’re ready to take your marketing to the next level with a successful CRO program. If you have questions contact us anytime, or check out our “6 Foundational Principles of Optimization and Growth” white paper for a deep dive on how to get started. Good luck!