
Creative A/B testing is a method of comparing two versions of a creative and measuring the performance of each, all other things being constant. Think of it as an experiment: you’re letting the data guide you to the creative that performs better. From there you can build on the ‘winning’ creative and run another A/B test against a new challenger, and so on in perpetuity. A/B testing is essential because it lets you test copy, images, calls to action, or almost any element to determine which delivers the best results, ultimately setting your campaign up for success. Keep in mind that this should not be a one-and-done test; it should be ongoing.

Why should you A/B test?

Creative testing is important because it allows you to continually improve your marketing message. It helps you determine the creative and/or message that best drives performance against your KPIs. It also reduces ad fatigue, so users do not see the same ad over and over until it goes stale.

How should you set up your A/B test?

Before you begin your creative A/B test, make sure your creative team is on board and has the time and resources to run one. You will be working closely with them for the duration of the test.

Next, establish beforehand what exactly you are measuring. Is it click-through rate (CTR), conversion rate, or cost per acquisition (CPA)? After you have established your KPI, identify your control creative, A, and build a second creative, B, with only one variable changed from the control. This could be different copy, a different call to action, a different background color, a different image, and so on; what you can test is nearly endless. Keep in mind that for an A/B test to work properly, it is critical that only one feature differ from the control – this also applies to the purchased inventory. That way, when metrics start streaming in, it will be easier to attribute why a creative did or did not perform. If a creative differs in multiple ways, it becomes extremely difficult to isolate why performance was better or worse than the control.

Another important step is to run your creatives on an even rotation within the same placements. For the test to be valid, everything should be held as constant as possible, including ad inventory, with the only difference being the one variable in Creative B. This applies to the landing page as well; it needs to be identical in order to accurately measure post-click performance.

Finally, statistical significance: you must achieve it for a test to be thorough and complete. I have had a creative test run for more than two months because there was no clear winner. If that happens, you may have to look at trends instead and use them to pick a winner. For instance, if Creative A led for a few consecutive weeks, you could reasonably call it the winning creative.
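If CTR is your KPI, the significance check described above can be sketched as a standard two-proportion z-test on click-through rates. This is a minimal illustration, not a full analysis tool, and the impression and click counts below are hypothetical:

```python
from math import sqrt, erf

def ctr_z_test(clicks_a, imps_a, clicks_b, imps_b):
    """Two-proportion z-test comparing the CTRs of creatives A and B.

    Returns (z, p_value) where p_value is two-sided.
    """
    p_a = clicks_a / imps_a
    p_b = clicks_b / imps_b
    # Pooled CTR under the null hypothesis that A and B perform the same
    pooled = (clicks_a + clicks_b) / (imps_a + imps_b)
    se = sqrt(pooled * (1 - pooled) * (1 / imps_a + 1 / imps_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF (via erf)
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical counts: Creative A vs. Creative B over the same placements
z, p = ctr_z_test(clicks_a=540, imps_a=60000, clicks_b=620, imps_b=60000)
significant = p < 0.05  # a common 95% confidence threshold
```

If `significant` is true, you can declare a winner; if not, keep the test running (or, as noted above, fall back to judging sustained trends).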

Thanks for reading, and happy testing!