UPDATE: To learn more about exclusionary best practices, check out our 2017 Complete Guide to Facebook Advertising.

When you are running many different audiences at once, using the right exclusions to avoid audience overlap is a high priority. At 3Q, we follow an exclusionary pyramid, which is essentially a set of nested lookalikes.

How does this work?

  • Custom audiences are always excluded from all other audiences, since these are your smallest audiences.
  • Smaller-percentage lookalikes are always excluded from larger-percentage lookalike audiences.
  • Lookalike audiences are usually excluded from your behavior/interest-based audiences, since those are generally your largest.
  • If you are running two different 1% lookalikes, base the exclusions on performance or size. Say you have a high-LTV LAL 1% and a Facebook purchasers LAL 1%, and the high-LTV LAL 1% consistently shows the better CPA; you would then exclude the high-LTV audience from the Facebook purchasers LAL 1%. (See the sketch after this list.)
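
To make the pyramid concrete, here's a minimal Python sketch of how the nested exclusions stack up. The audience names and IDs are hypothetical placeholders, and the list-of-{"id": ...} shape simply mirrors the excluded_custom_audiences field in Facebook's Marketing API targeting spec; treat it as an illustration, not a drop-in implementation.

```python
# A minimal sketch of the exclusionary pyramid, smallest audience first.
# Audience names and IDs are hypothetical placeholders.
pyramid = [
    ("custom_purchasers", "111111"),  # seed custom audience (smallest)
    ("lal_1pct",          "222222"),  # 1% lookalike
    ("lal_5pct",          "333333"),  # 5% lookalike
    ("interest_based",    "444444"),  # behavior/interest audience (largest)
]

def build_exclusions(pyramid):
    """Each tier excludes every smaller tier above it in the pyramid,
    so any given user falls into exactly one ad set."""
    exclusions, smaller = {}, []
    for name, audience_id in pyramid:
        # Mirrors the excluded_custom_audiences list in Facebook's
        # targeting spec: [{"id": ...}, ...]
        exclusions[name] = [{"id": i} for i in smaller]
        smaller.append(audience_id)
    return exclusions

for name, excluded in build_exclusions(pyramid).items():
    print(f"{name}: exclude {excluded or 'nothing'}")
```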

Let’s say you are audience testing. For a true A/B test, you generally want to use Facebook’s split testing so that each audience gets a fair chance to perform well. If you are running campaigns in the United States, this works fine. If you are running campaigns in Canada, however, you don’t have the luxury of audiences large enough to split while still giving oCPM bidding enough volume to optimize properly.

What a test looks like

We were testing a new lookalike feature (weighted lookalikes) for a financial technology client in Canada. Since this was a new Facebook feature in beta, we set it up as a clean A/B test, with Facebook’s help splitting the audiences on the back end. We ran into several issues:

  1. Because this was Canada, we had to start with a 5% lookalike just to get an audience large enough for oCPM bidding
  2. Even at a 5% LAL, audiences were under 1MM in size, and splitting them made each half even smaller
  3. Both audiences struggled to spend, with the new audience dropping to $0 for a few days and CPAs for both audiences running more than double what we usually see

After spending a good amount of money on the test, we ended it, removing the split and returning both audiences to their usual exclusions. We still wanted to get this new test audience to work, so we kept running it, excluding our control from it so that as much traffic as possible went to the control audience, which we knew was our top performer.

Performance started to pick up this way for both audiences; however, the still-smaller test audience struggled to spend and build volume, so we decided to run another test. (3Q just loves running tests!)

Our New Test: Switch the exclusions of the test and control audiences. Instead of excluding the test from the control, we excluded the control from the test. This increased the size of the test audience, giving oCPM bidding more room to optimize and allowing for increased volume.
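
As a rough sketch of what that swap looks like in targeting terms (the IDs are hypothetical, and the dict shape again just mirrors the custom_audiences / excluded_custom_audiences fields of Facebook's targeting spec):

```python
# Hypothetical sketch of the exclusion swap. "Before" was the standard
# setup: the proven control keeps all of its audience, and the test
# excludes the control. "After" flips the direction to feed the test.

control_id = "555555"  # top-performing control lookalike (placeholder ID)
test_id    = "666666"  # weighted-lookalike test audience (placeholder ID)

before = {
    "control": {"custom_audiences": [{"id": control_id}]},
    "test":    {"custom_audiences": [{"id": test_id}],
                "excluded_custom_audiences": [{"id": control_id}]},
}

# After the swap, the overlap now belongs to the test audience, giving
# oCPM more users (and more conversion signal) to optimize against.
after = {
    "control": {"custom_audiences": [{"id": control_id}],
                "excluded_custom_audiences": [{"id": test_id}]},
    "test":    {"custom_audiences": [{"id": test_id}]},
}
```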

Our Prediction: We expected that the test audience would now be able to spend more, increasing conversion volume and decreasing CPA, while the control audience would struggle to spend and see an increased CPA.

Our Next Steps: The outcome of this test was what we predicted, so our next step is to compare LTV between the two audiences to determine which is higher and set up the exclusions accordingly.
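
The decision rule we plan to apply is the same one we use for competing 1% lookalikes; here is a tiny sketch of it, with made-up LTV figures purely for illustration:

```python
# Hypothetical follow-up logic: the higher-LTV audience keeps the overlap
# and is excluded from the other audience's targeting. LTV figures are
# illustrative, not real client data.
ltv = {"control_lal": 240.0, "weighted_test_lal": 210.0}

keeper = max(ltv, key=ltv.get)  # higher-LTV audience keeps the overlap
other  = min(ltv, key=ltv.get)

print(f"Exclude {keeper} from {other}: overlapping users should be "
      f"served by the higher-LTV audience.")
```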

Moral of the story? You should always be testing on Facebook – and if you run into an issue with the regular, recommended setup for a test, try improvising and see if it helps!

 


Molly Parker
After graduating with a business degree from Skidmore College in 2012, Molly gained marketing experience working for a high-tech startup in Tel Aviv, Israel. When she's not working, you can find her at the barn, horseback riding.