We recently ran tCPA (target CPA) bidding, a new “AdWords Smart Bidding” option, for a retail client looking to increase conversions while maintaining ROAS (sounds familiar to most of you, I’m sure).

We tested tCPA in non-brand Alpha and Beta search campaigns to see what kind of effect bidding automation would have on sales and efficiency. We started with a 50/50 split of tCPA and manual bidding in each campaign experiment to determine whether automation was the right move for the account.

Results comparing tCPA to manual bidding were mixed: our Alpha campaigns doubled efficiency, while Beta efficiency was cut in half.

Below are the results of the U.S. non-brand Alpha campaign. As mentioned before, efficiency (as shown by conversion rate) doubled in the experiment, which started the week of 5/22:

This was an interesting test: impression volume fluctuated week to week during and after the initial experiment, which suggests the algorithm had to make a lot of adjustments during the weeks we ran it. After two weeks, we saw enough of a boost in CVR to apply the experiment to the campaign on 6/1. In this case, volume didn't increase; instead, efficiency doubled. The experiment delivered the same number of conversions on half the volume and half the cost.
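To make that math concrete, here's a minimal sketch with hypothetical numbers (not the client's actual data): if the experiment gets the same conversions from half the clicks and half the spend, CVR doubles and CPA is cut in half.

```python
# Hypothetical illustration: same conversions on half the volume and cost.

def cvr(conversions, clicks):
    """Conversion rate = conversions / clicks."""
    return conversions / clicks

def cpa(cost, conversions):
    """Cost per acquisition = cost / conversions."""
    return cost / conversions

# Control: manual bidding (made-up numbers)
control_clicks, control_cost, control_conversions = 10_000, 5_000.00, 200

# Experiment: tCPA delivers the same conversions on half the clicks and spend
test_clicks, test_cost, test_conversions = 5_000, 2_500.00, 200

print(f"Control CVR: {cvr(control_conversions, control_clicks):.1%}, "
      f"CPA: ${cpa(control_cost, control_conversions):.2f}")   # 2.0%, $25.00
print(f"Test CVR:    {cvr(test_conversions, test_clicks):.1%}, "
      f"CPA: ${cpa(test_cost, test_conversions):.2f}")          # 4.0%, $12.50
```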

This was a good result for the client at the time because our main KPI was ROAS.

The week of 6/12 saw a dip in CVR as we added new promotional copy to focus on an upcoming sale. The key callout is how the algorithm kept CVR trending slightly upward even through changes to the campaign.

Let's look at when it didn't work the way we wanted. Because of the positive results of the Alpha test, we decided to test tCPA in our Beta campaigns. The table below shows the results of the test, run from 6/5 to 6/15:

In this case, the algorithm took the campaign and ran with it. CPCs went up 80% over the control and never really backed down during the experiment. The result was CPA almost doubling, cutting ROAS in half. The experiment more than doubled sales, but at too great a cost. If the goal were to increase sales regardless of cost (wouldn't that be nice?), the experiment would have been a success. But since efficiency was cut in half, we had to abandon the experiment. If we were to test this again, we would start with a lower tCPA than Google recommends and with a smaller split.
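Here's a quick sketch of why doubled sales at roughly double the CPA still halves ROAS, again using hypothetical numbers rather than the client's actual data: cost grows faster than revenue.

```python
# Hypothetical illustration: conversions double, but CPA also doubles,
# so cost quadruples while revenue only doubles and ROAS is cut in half.

def roas(revenue, cost):
    """Return on ad spend = revenue / cost."""
    return revenue / cost

avg_order_value = 80.00  # assumed constant revenue per conversion

# Control: manual bidding (made-up numbers)
control_conversions, control_cpa = 100, 20.00
control_cost = control_conversions * control_cpa          # $2,000
control_revenue = control_conversions * avg_order_value   # $8,000

# Experiment: tCPA doubles conversions, but CPA also doubles
test_conversions, test_cpa = 200, 40.00
test_cost = test_conversions * test_cpa                   # $8,000
test_revenue = test_conversions * avg_order_value         # $16,000

print(f"Control ROAS: {roas(control_revenue, control_cost):.1f}")  # 4.0
print(f"Test ROAS:    {roas(test_revenue, test_cost):.1f}")        # 2.0 -- half the efficiency
```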

Conclusion

tCPA can work brilliantly, but it can also have a negative impact on your performance goals. Moral of the story, as usual: test tCPA before you adopt it! In the case of the Alpha experiment, it continues to work for us and is doing a great job of keeping the campaign close to performance targets.