How do you measure the impact of your Conversion Rate Optimization (CRO) program? It falls cleanly into two phases: measuring a test, and measuring and monitoring post-test results.
Let’s dive into what should happen in each phase.
Measuring a test
The most obvious way to measure the impact of your CRO program is to have a lot of winning tests!
In order to get those winning results, you need to make sure you are measuring your tests the right way. A winning test is determined by a statistically significant lift in your primary KPI (Key Performance Indicator, also known as a goal or metric).
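To make "a lift in your primary KPI" concrete, here is a minimal sketch of how a winning test is typically judged: relative lift plus a two-proportion z-test for significance. The function name, the example numbers, and the 0.05 significance threshold are illustrative assumptions, not a prescription from any particular testing tool.

```python
from math import sqrt, erf

def evaluate_test(visitors_a, conversions_a, visitors_b, conversions_b, alpha=0.05):
    """Compare a variation (B) against control (A) on a single KPI.

    Hypothetical helper: returns relative lift, a two-sided p-value,
    and whether the lift is significant at the given alpha.
    """
    cr_a = conversions_a / visitors_a
    cr_b = conversions_b / visitors_b
    lift = (cr_b - cr_a) / cr_a  # relative lift on the primary KPI

    # Two-proportion z-test using the pooled conversion rate
    pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    z = (cr_b - cr_a) / se

    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return lift, p_value, p_value < alpha

# Example (made-up numbers): 10,000 visitors per arm, 500 vs 580 conversions
lift, p, significant = evaluate_test(10_000, 500, 10_000, 580)
```

With these made-up numbers the variation shows a 16% relative lift, and the p-value tells you whether that lift is likely real or just noise; most commercial testing platforms run an equivalent calculation under the hood.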
Macro-conversions vs Micro-conversions: You can think of these as big conversions vs. small conversions. Macro-conversions typically represent the bottom of the funnel, getting the sale or the lead, whereas micro-conversions are the steps that lead up to the macro-conversion, such as clickthroughs or newsletter sign-ups.
Business KPIs vs Testing KPIs: When we determine what KPIs to measure a test against, we have to pick from those macro- and micro-conversions. Ideally, we would always validate a test against our business KPIs, or macro-conversions, such as the sale of our product. However, sometimes what we are testing is too far removed from that business KPI, or our hypothesis doesn't align with it. In those cases, validate the test on your specific testing KPI as your main metric, but keep an eye on your business KPIs as secondary metrics.
A previous client of mine was an eCommerce company that also offered a service, which meant many visitors landed on the homepage rather than on product or landing pages. Not everyone who visited the homepage needed to buy something, and the funnel was LONG. The goal was to optimize the homepage, but how could we know whether we were succeeding when none of our tests could be validated on the business KPI, or macro-conversion: the product sale? The answer was to establish a homepage testing KPI, which happened to be a click-through micro-conversion. As a result, we could see that users got one step further in the funnel and that traffic to those pages increased, which let us test moving visitors even further through the funnel despite being unable to validate on sales.
Post-test measurement and monitoring
Don’t let it stop there; make sure your team engages in post-test measurement to capture the true impact of your results. Post-test measurement and monitoring may be the most important piece of a CRO program, yet it’s often overlooked.
Don’t just roll out your winners and move on to the next test or page. Continue monitoring or testing your pages to make sure that your CRO changes are holding up and not affected by some unknown validity threat. Many times, companies will forecast off of test results, which is a reasonable short-term activity. However, it’s crucial that you continue to monitor the results of your tests, even as variations are pushed live, in order to truly measure and forecast on impact.
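The monitoring described above can be sketched as a simple check: compare the live conversion rate of the rolled-out variation against the rate observed during the test, and raise an alert when it drops further than sampling noise would explain. The function name, the z-score threshold of 2, and the example numbers are all illustrative assumptions.

```python
from math import sqrt

def monitor_rollout(baseline_rate, visitors, conversions, z_threshold=2.0):
    """Hypothetical post-launch check: flag when the live conversion rate
    falls well below the rate observed during the test, which may signal
    a validity threat or a market change (like a competitor's price cut).
    """
    observed = conversions / visitors
    # Standard error of the observed rate if the baseline rate still held
    se = sqrt(baseline_rate * (1 - baseline_rate) / visitors)
    z = (observed - baseline_rate) / se
    return {"observed_rate": observed, "z_score": z, "alert": z < -z_threshold}

# Example (made-up numbers): the test measured a 5.8% conversion rate;
# this week 8,000 visitors produced 392 conversions (4.9%)
status = monitor_rollout(0.058, 8_000, 392)
```

A drop like the one in this example would trip the alert and prompt the kind of digging described in the story that follows; an ordinary dip within sampling noise would not.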
One client I’ve worked with got significant lifts with a new variation of their homepage. The new variation went to 100% of the traffic. However, shortly after the new variation launched to all traffic, we saw conversion rates drop dramatically. Oh no! Did we screw up royally with our testing? Were our results valid?
After some significant digging, it turned out that a well-known competitor had recently started offering the same product for a dollar less, with free shipping.
We updated our variation accordingly and saw our conversion rates spike right back up to testing levels.
This is a prime example of why you should monitor your results even after the test is over, and an even better reason to ABT (always be testing!). In order to do this, you must make sure your team is following a methodical process and developing roadmaps for testing.
Unfortunately, not all tests can be winners, so it’s important for a CRO program to be able to gain insights from every test, losing and winning alike.
Remember, when you’re looking to measure the impact of your CRO program, make sure to pay attention to the following:
- Are we measuring our individual tests correctly?
- Are we continually testing and monitoring our results even after the test is over?
- Do we have a program in place that allows us to test methodically and gain insights from all of our tests?