Why we always test: a DSA story
Published: October 20, 2016
Author: Michael Shore
Testing is a staple of any well-managed PPC campaign. Sometimes you discover that something works that you never thought would. Case in point: we had been running a Dynamic Search Ads (DSA) campaign for a retail client for a couple of years, and testing made a big difference in our optimization strategy.
When we first created the campaign back in early 2014, we ran only one “catch-all” ad group targeting all pages on the website. Later on, we figured performance would improve if we segmented the campaign by narrowing our targeting criteria (e.g. “dresses”, “tops”, etc.) and assigning each category to its own ad group, while keeping the “catch-all” ad group as a safety net in case any queries fell through the cracks. This would allow for better data segmentation and, more importantly, let us tailor our creative to each target category. Initially, we saw great performance from the new segmentation, with overall conversions rising and ROAS improving.
A few months later, performance started to dip: traffic and conversion volume dropped off. At first we thought it might be a new competitor or even seasonality taking effect. When performance didn’t recover, we pulled some optimization levers to try to stabilize things, but nothing helped. We then discussed reverting to the old structure with the single “catch-all” ad group. Surely that couldn’t help matters, right? Sure, it worked a couple of years ago, but the hyper-segmentation let us maximize exposure and traffic for each of our categories. Not to mention, we still had the old “catch-all” as a safety net, so it’s not as if we were dark in any areas. How could going back to that archaic structure help? But we had exhausted our other options, so what did we have to lose?
We decided to run an experiment using AdWords’ recently released campaign drafts and experiments feature, pitting the single “catch-all” ad group against the segmented structure. The test ran for almost three months. The single-ad-group structure captured significantly more traffic at lower CPCs, which led to an overall improvement in conversions and revenue. We were finally seeing traffic and orders improve after months of consistent declines!
As a result, we paused the segmented ad groups and ran only the single ad group (the original one from two years ago!). The results spoke for themselves.
This goes to show that even if you’ve implemented successful changes from past tests, you may eventually end up re-testing and reverting to the original. Moral of the story: always be testing, and never be afraid to re-test something; it may improve performance after all!