AdWords Campaign Experiments (ACE) has been around for a couple of years and is, in my opinion, one of the platform’s most under-used features. It has finally been released out of beta, albeit with a 1,000-item cap (keywords/ad groups) on participation.
If you aren’t using them, you should be, specifically for ad text experiments.
Here’s how they’ll help:
1. You can control the impression split.
A 50/50 split is pretty reliable if you use AdWords’ rotate-evenly setting, but if you want any other split, experiments are the only way to get it. A 50/50 rotation also works fine when a group has only two ads; if you instead want to test, say, two variations against a control, or test one variation against the control while a third ad keeps running outside the test entirely, the experiments tool gives you that control.
2. They have an end date.
You can control when the experiment stops to keep it to clear test iteration intervals or to be responsive to other client needs (such as a limited-time promo). Also, by setting start dates and end dates (see image above), you don’t have messy “half days” to note or contend with when pulling your data; it is clean from start to finish. Additionally, until you’ve officially “ended” the experiment, Google keeps a record of those dates, so if you lose your notes, you still know exactly when you ran the test.
3. There’s a handy Control/Experiment column.
Sometimes an ad experiment pits one literal ad text against another, but often you are testing a small snippet that doesn’t map to any single data point you can pivot the ad text on. Say you want to test the same change across the whole account: one version is in title case and the other isn’t, with the ads themselves customized to their keywords. When you download your ads, the Control/Experiment column lets you pivot title case vs. non-title case across hundreds of ads and measure the performance deltas.
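That pivot is straightforward once the report is in a dataframe. A minimal sketch, assuming a downloaded ad report with a "Segment" column mirroring the Control/Experiment flag (the column names and numbers here are illustrative, not what AdWords exports verbatim):

```python
import pandas as pd

# Hypothetical ad report rows; "Segment" stands in for the
# Control/Experiment column in a downloaded ACE ad report.
ads = pd.DataFrame({
    "Segment":     ["Control", "Experiment", "Control", "Experiment"],
    "Impressions": [10_000, 9_800, 8_000, 8_200],
    "Clicks":      [320, 410, 250, 330],
})

# Aggregate across all ads in each arm, then compute CTR per arm.
summary = ads.groupby("Segment")[["Impressions", "Clicks"]].sum()
summary["CTR"] = summary["Clicks"] / summary["Impressions"]

# Performance delta between the experiment arm and the control arm.
delta = summary.loc["Experiment", "CTR"] - summary.loc["Control", "CTR"]
print(summary)
print(f"CTR delta: {delta:+.4f}")
```

With a real export you would swap in the actual column names and group by whatever dimension you are testing, but the shape of the analysis is the same.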
4. You can check segmenting of performance to date.
By choosing the experiment segment at the campaign level, you can get a quick snapshot of how the entire experiment is performing. You may only want to report on it at a deep level weekly, but this view lets you spot large performance deltas that indicate a test should be ended early or is negatively impacting overall performance.
5. You get statistical significance notifications.
Google actually tells you (by turning the segment arrows blue) when it determines that statistical significance has been reached on any given metric. If you typically run your tests through a t-test or other significance analysis, this can save you a step.
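If you still want to sanity-check Google’s blue arrows, a standard two-proportion z-test on CTR takes only a few lines. A sketch with made-up click and impression counts (Google does not publish the exact test it uses, so treat this as an independent check, not a replica):

```python
from math import erf, sqrt

def two_proportion_z(clicks_a, imps_a, clicks_b, imps_b):
    """Two-proportion z-test on CTRs; returns (z, two-sided p-value)."""
    p_a, p_b = clicks_a / imps_a, clicks_b / imps_b
    # Pooled CTR under the null hypothesis that both ads perform equally.
    pooled = (clicks_a + clicks_b) / (imps_a + imps_b)
    se = sqrt(pooled * (1 - pooled) * (1 / imps_a + 1 / imps_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Illustrative numbers: experiment ad vs. control ad.
z, p = two_proportion_z(410, 9_800, 320, 10_000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

A p-value below your chosen threshold (commonly 0.05) means the CTR gap is unlikely to be noise at the sample sizes involved.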
6. Take push-button action.
If a test succeeds, you can launch it as your new default at the campaign level with a single button. If it fails, you can end it just as quickly, saving the time you would otherwise spend manually pausing or deleting the participating ads.
How do you use the experiment tools?