For quite some time, the conventional wisdom for creative testing on Facebook has been “it can’t be done.” If you wanted even a rough sense of whether one ad was better than another, you’d set them up in the same ad set and let the algorithm pick a winner or two, which would usually happen within the first 24-48 hours. The same four ads could produce different winners depending on when you launched, but once a winner was found, those poor un-chosen ads would never see the light of day again. Facebook marketers accepted this as the law of the land.

But what if there’s another way?

The recent roll-out of Facebook’s Split Testing at the ad set level is a powerful tool, but it can only be used for testing bidding and audiences at this time (Patience, young Padawan. I can only imagine creative testing is number one on the list of requests Facebook receives from the likes of us). So today I’ll focus on something I recently tried: a one-ad-per-ad-set test within the same campaign (I can hear you gasping now) to “force” two ads to get an equal amount of spend over the same period of time.

The Test

We wanted to know whether two messaging variations (using the same creative and the same audience targeting) would yield different results within the same campaign. So we created our own creative test: two identical ad sets whose only difference was the ad copy.
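
If you’d rather build that structure programmatically than click it together in Ads Manager, here’s a minimal sketch using the facebook_business Python SDK. The access token, campaign ID, Custom Audience ID, creative IDs, geo, budget, bid, and optimization goal below are all placeholders and illustrative assumptions, not the settings we used.

```python
from facebook_business.api import FacebookAdsApi
from facebook_business.adobjects.adaccount import AdAccount
from facebook_business.adobjects.adset import AdSet
from facebook_business.adobjects.ad import Ad

FacebookAdsApi.init(access_token='YOUR_ACCESS_TOKEN')
account = AdAccount('act_YOUR_AD_ACCOUNT_ID')

# Two ads, one per ad set, inside the same campaign. Everything is identical
# except the creative that carries the copy variation.
variations = {
    'Message A (evergreen)': 'CREATIVE_ID_A',
    'Message B (urgency)': 'CREATIVE_ID_B',
}

for name, creative_id in variations.items():
    ad_set = account.create_ad_set(params={
        AdSet.Field.name: f'Copy test - {name}',
        AdSet.Field.campaign_id: 'YOUR_CAMPAIGN_ID',
        AdSet.Field.daily_budget: 2000,  # same budget per ad set, in minor currency units (e.g. cents)
        AdSet.Field.billing_event: AdSet.BillingEvent.impressions,
        AdSet.Field.optimization_goal: AdSet.OptimizationGoal.link_clicks,
        AdSet.Field.bid_amount: 150,
        AdSet.Field.targeting: {
            # Same retargeting Custom Audience (and geo) in both ad sets
            'geo_locations': {'countries': ['US']},
            'custom_audiences': [{'id': 'YOUR_CUSTOM_AUDIENCE_ID'}],
        },
        AdSet.Field.status: AdSet.Status.paused,
    })

    account.create_ad(params={
        Ad.Field.name: name,
        Ad.Field.adset_id: ad_set.get_id(),
        Ad.Field.creative: {'creative_id': creative_id},
        Ad.Field.status: Ad.Status.paused,
    })
```

Giving each ad set the same budget is what “forces” the even split of spend that a single ad set with multiple ads won’t give you.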

A few caveats:

  • The test only ran for 3 weeks due to unforeseen issues; we would like to test this again to gather more data.
  • Message A was a more evergreen version, while Message B communicated a sense of urgency.
  • Our audience was a retargeting email list; this could perform differently given a more acquisition-focused list.

The Verdict

This is by no means a statistically valid test. It’s a somewhat “hacky” way to get creative learnings, so test at your own risk.

Both ad sets received the same amount of delivery and very similar Reach (rather than one dropping off after 48 hours); however, ad set B generated almost double the clicks and had a lower overall CPA. We could then use this to make the case to the client that the more “urgent” copy was likely to outperform the evergreen version going forward.
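
If you want to pull that side-by-side comparison yourself, here’s a quick sketch of reading ad set-level results through the Insights edge of the same SDK. The campaign ID and date range are placeholders; the fields are just the metrics discussed above.

```python
from facebook_business.api import FacebookAdsApi
from facebook_business.adobjects.campaign import Campaign

FacebookAdsApi.init(access_token='YOUR_ACCESS_TOKEN')

# Ad set-level delivery, reach, clicks, and cost for the test campaign
insights = Campaign('YOUR_CAMPAIGN_ID').get_insights(
    fields=['adset_name', 'impressions', 'reach', 'clicks', 'spend', 'cost_per_action_type'],
    params={'level': 'adset', 'date_preset': 'last_30d'},
)

for row in insights:
    print(row['adset_name'], row['impressions'], row['reach'], row['clicks'], row['spend'])
```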

Again, I’d like to keep testing this to determine validity, but this initial run shows that it is not the kiss of death to run two ad sets with the same audience within a campaign.

If you’ve tested a different version of this, let us know in the comments!


Lindsey Wallem
Having worked in the social media and digital space since 2007, Lindsey enjoys providing clients with the in-depth reporting and measurable results that paid social media provides. A graduate of DePaul University, she loves the city of Chicago, including the oh-so-mediocre Chicago Bears. When not analyzing paid Facebook campaigns, she does yoga, collects vinyl records, and continues her search for the perfect falafel sandwich.