At 3Q Digital, we’re always pushing the innovation envelope; we believe this is an essential component of success in an industry that changes so frequently. This post is part of our inaugural Innovation Week, where we showcase all manner of innovations that have improved results for our clients and teammates.
There’s a difference between using ad copy tests to learn which messaging and imagery work best for future campaigns, and testing simply to drive the best possible performance and keep building on it. In an industry that seems to live by the motto “always be testing,” account managers often lose sight of why they’re testing. Let’s think outside the box and reconsider our ad copy testing strategy instead of following the herd.
Here are some questions you may have forgotten to ask yourself before starting your test – and some questions that you should ask to stay focused on growth.
What elements are required for a valid traditional A/B test to be useful?
One huge downside of testing in AdWords is that ad history affects ad serving and Quality Score. As a result, running an accurate ad copy test that gives new ads an equal chance requires duplicating legacy ads and relaunching them so every variant starts with a fresh history. In these instances, we’re knowingly hurting performance for the sake of the test. Have you considered that those adverse effects may, in the long run, outweigh the benefits gained by testing?
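Before accepting that trade-off, it’s worth knowing whether your test can even produce a statistically meaningful answer. As a rough illustration (not a tool the post describes), here’s a minimal sketch of a two-proportion z-test comparing the CTRs of two ad variants; the click and impression counts are hypothetical.

```python
import math

def ab_significance(clicks_a, imps_a, clicks_b, imps_b):
    """Two-proportion z-test on the CTRs of two ad variants.
    Returns (z statistic, two-sided p-value)."""
    p_a = clicks_a / imps_a
    p_b = clicks_b / imps_b
    # Pooled CTR under the null hypothesis that both ads perform the same
    p = (clicks_a + clicks_b) / (imps_a + imps_b)
    se = math.sqrt(p * (1 - p) * (1 / imps_a + 1 / imps_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the normal approximation
    p_value = math.erfc(abs(z) / math.sqrt(2))
    return z, p_value

# Hypothetical numbers: legacy ad vs. refreshed duplicate
z, p = ab_significance(520, 10000, 480, 10000)
print(z, p)
```

With these made-up numbers the p-value lands well above 0.05, meaning a 5.2% vs. 4.8% CTR gap over 10,000 impressions each is not conclusive — a reminder that a “valid” test also needs enough volume to detect the difference you care about.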
Will the test results be usable in future ads?
Performing strict copy testing during the holiday season may seem like a great way to squeeze the best performance out of a key period, but how will what you learn from that test be useful in January, or in the next holiday season? If you know the copy won’t be usable in the future, why force equal rotation of ads and hurt your performance in the short term? Instead, consider setting your ad rotation to optimize for conversions from the start. You can keep rotating in new ad copy, but let Google determine which ads are most effective for each individual user. Strict A/B split testing is only a good use of time if you can apply the findings later and build on those successes. If you can’t build on what you learn, you’re just wasting impressions on non-ideal ads that could be auto-optimized.
Are there other factors that will render the test meaningless?
When testing on Facebook, it’s never possible to isolate ad copy elements. Because each Facebook ad takes on a life of its own through its comments, likes, and shares, every Facebook ad test carries a built-in confounding factor. Whether you opt for a pre/post test or simultaneous testing, you’ll always have to contend with the effect this social proof has on your results. If we can’t control the elements of the ad, why are we trying to apply Facebook ad copy test results to future ad copy creation?
Because of Facebook’s unique platform, we’re never able to run a truly controlled test. Instead of trying to determine which specific messaging or imagery works on Facebook, I’d recommend focusing on how we can leverage what we know about Facebook’s serving and cost algorithms to game the system. Test refreshing ads every week to reset relevance. Test rotating in new variations every few days to see how your CPCs are affected. There are plenty of innovative ways to test that are more meaningful and far more likely to drive improved bottom-line results.
These are just a few questions to consider when developing an ad copy testing plan. Maybe after you weigh them, you’ll focus on learning how to improve performance rather than on which phrasing is statistically better than another.