When it comes to effectively managing Facebook ad campaigns, you must run A/B tests.

An A/B test allows you to run two (or more) ads simultaneously to the same audience in real time to determine which one produces the best results. You can elect to test two very different ads against each other, or two ads that have just one variable changed. I usually recommend that Facebook ad managers test just one variable so that you know exactly what caused an increase in engagement and conversion.

I know all of this can be a little bit confusing, so let’s look at a real-world example.

In this Facebook ad campaign, I am A/B testing the following ads against each other.

Ad A:

Ad B:


The only variable I have changed between these two ads is the main image. The copy, headline, call-to-action, button, audience targeting, and everything else have remained exactly the same.

I’ve been running this campaign for about a week now. According to my ad dashboard, Facebook Ad B would seem like the winner. It has received 88 website clicks at $3.52 per click, whereas Ad A has only received 42 website clicks at $3.30 per click.

Based on this information alone, a Facebook ad manager might think Ad B is the winner and that I should turn off Ad A, since Ad A is receiving half the number of clicks at roughly the same cost per click. That isn't an unreasonable conclusion; however, I really like to dig deeper and look not just at the click-through metrics but at the conversion metrics as well.

I linked both of these ads to the same landing page; however, I added unique tracking IDs at the end of each URL so that I could see in Google Analytics which clicks came from Ad A vs. Ad B.

You can also set up a conversion tracking pixel in Facebook, which is a good idea because it allows you to optimize Facebook ad spend based on conversions rather than clicks. However, I still recommend placing tracking parameters for Google Analytics because GA can give you deeper insight into on-site stats like Time On Site, Bounce Rate, Pages Visited, and so on. While overall conversion rate is most important, being able to review these other stats can be helpful.

I use Google’s URL Builder to create my trackable links. Here’s how I set up my URLs (you can use whatever nomenclature works best for you):

Source = FB

Medium = Ad

Campaign = FDICRestaurantsA, FDICRestaurantsB

So here is what one of my final URLs would look like (with a placeholder domain standing in for my actual landing page):

http://www.example.com/landing-page?utm_source=FB&utm_medium=Ad&utm_campaign=FDICRestaurantsA

Naturally, I use the URL with campaign name "FDICRestaurantsA" in the ad that I call FDIC Restaurants A. Naming the ads and the URL campaigns identically helps me stay organized and quickly see which results match up to which ad when I'm reviewing data in Google Analytics.
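Google's URL Builder simply assembles standard UTM query parameters, so you can generate the same links in a short script if you're tagging many ads. Here is a minimal sketch in Python; the landing-page domain is a placeholder, and the parameter names (utm_source, utm_medium, utm_campaign) are the standard ones Google Analytics reads.

```python
from urllib.parse import urlencode

def build_tracked_url(base_url, source, medium, campaign):
    """Append standard Google Analytics UTM parameters to a landing page URL."""
    params = urlencode({
        "utm_source": source,
        "utm_medium": medium,
        "utm_campaign": campaign,
    })
    return f"{base_url}?{params}"

# One tagged URL per ad variation; the domain is a placeholder.
url_a = build_tracked_url("http://www.example.com/landing-page", "FB", "Ad", "FDICRestaurantsA")
url_b = build_tracked_url("http://www.example.com/landing-page", "FB", "Ad", "FDICRestaurantsB")
```

Each ad then gets its own link, and Google Analytics will break out the traffic by campaign name automatically.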

Now, because I created these trackable URLs, I don’t need to rely solely on the information Facebook’s dashboard gives me to determine which ad performed best in my A/B test. Instead I can dive into the Campaigns section of Google Analytics (located under Acquisition) for more info.


Inside Google Analytics, I can see that FDICRestaurantsA has a conversion rate of 12.20%, whereas FDICRestaurantsB has a conversion rate of 9.89%.

So, based on the click-through data, Ad B looks like the winner, but based on the conversion data right now, Ad A looks like the winner.
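One way to reconcile the two views is to combine the dashboard numbers: cost per conversion is just cost per click divided by conversion rate. A quick back-of-the-envelope calculation, using the figures above, shows why the conversion data matters:

```python
def cost_per_conversion(cost_per_click, conversion_rate):
    """Cost per click divided by conversion rate gives cost per conversion."""
    return cost_per_click / conversion_rate

# Figures from the campaign above.
ad_a = cost_per_conversion(3.30, 0.1220)  # roughly $27.05 per conversion
ad_b = cost_per_conversion(3.52, 0.0989)  # roughly $35.59 per conversion
```

Even though Ad B costs only slightly more per click, each conversion from Ad B costs noticeably more, which is why the conversion data tilts me toward Ad A.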

What action can you take when you have conflicting data like this? Run the A/B test longer!

This campaign really hasn’t received enough clicks or conversions one way or another for me to know definitively which ad is the winner. However, by A/B testing my Facebook ad campaign, I am able to learn a lot more about what motivates my audience to click and convert than if I just ran one ad.
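How do you know when the test has run long enough? One common approach (not something the Facebook dashboard does for you) is a two-proportion z-test on the conversion counts. This sketch uses approximate conversion counts derived from the clicks and conversion rates above; with samples this small, the difference is nowhere near statistically significant, which is exactly why I keep the test running.

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-statistic for comparing two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Approximate conversion counts from the rates above (42 and 88 clicks).
z = two_proportion_z(5, 42, 9, 88)
# |z| < 1.96 means the difference isn't significant at the 95% level,
# so the sensible move is to keep the test running.
```

Once |z| clears 1.96 (or whatever threshold you choose), you can call a winner with more confidence than eyeballing the dashboard allows.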

If the data continues to conflict as the test runs its course, 99% of the time I am going to continue running the ad that converts best and stop the other ad. More conversions are almost always better than more clicks! However, the money on the underperforming ad was well spent because I now know something new about my audience: what they respond to best.

When an A/B test is completed, it’s not over; it’s simply time to start a new A/B test to further optimize your campaign.

Take what you learned from the first test, and apply it to your new ad. For example, if you saw that the image on the losing ad actually led to more clicks (just fewer conversions), you might try using that image again in your next test but with different copy, a different headline, or a different call-to-action. It could be that in your next test this image with new copy outperforms your current winner!

Remember, in the world of Facebook advertising, you are never done optimizing! There is always something new you can be testing or trying to better engage your audience and get them to convert.