How “test and optimization” tools became “test or optimization” tools

March 26, 2015


I have been talking to a lot of marketers who use A/B testing to improve their online results, and it always strikes me how obsessed they seem to be with the testing itself. So much so that they almost seem to forget the original objective of their test: to drive better outcomes.

Marketers tend to go to great lengths to quickly achieve their goal of declaring a “winner,” and they do so at the cost of many conversion opportunities during the test phase. Typically they are aware that they need to watch for statistical significance, but they don’t want to get into the details; as long as they have an “official winner,” they believe they will get the best results from that point onward.

This would be a fine strategy if you were running your test in a lab. Unfortunately, that is not the case with real-time online A/B testing. When you run a test on a live audience of tens of thousands of potential customers, you will undoubtedly sacrifice many of them to reach the holy grail of a statistically significant winner.

What’s more, many marketers are unaware that to execute an A/B test the way the textbook describes it, the golden rule is to define a sample size in advance and stick to it. Because you have to guess the expected conversion rates to set the right sample size, you should err on the safe side: if the true lift turns out smaller than you hoped, you will need more visitors, not fewer.
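As a rough illustration, here is a minimal sketch of the standard sample-size calculation for a two-proportion z-test, in plain Python. The baseline rate, hoped-for rate, and function name are assumptions for the example, not values from any particular tool:

```python
import math
from statistics import NormalDist

def sample_size_per_variant(p_baseline, p_expected, alpha=0.05, power=0.8):
    """Approximate visitors needed per variant for a two-proportion z-test."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance level
    z_power = NormalDist().inv_cdf(power)          # desired statistical power
    variance = p_baseline * (1 - p_baseline) + p_expected * (1 - p_expected)
    effect = abs(p_expected - p_baseline)
    return math.ceil((z_alpha + z_power) ** 2 * variance / effect ** 2)

# Hoping to lift a 4% conversion rate to 5%:
print(sample_size_per_variant(0.04, 0.05))  # roughly 6,700 visitors per variant
```

Note how sensitive the answer is to the expected lift: if the true improvement is only half a percentage point instead of a full one, the required sample roughly quadruples, which is exactly why you should be conservative here.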

But the trickiest part is this: you should resist the urge to draw any conclusions before your test is completely finished. If you don’t, you stand a good chance of drawing the wrong ones. Evan Miller wrote an excellent blog post, “How Not to Run an A/B Test,” in which he explains that repeatedly checking for significance before the test is finished actually lowers the chance that your final result is right: every extra peek is another opportunity for random noise to cross the significance threshold, so the real false-positive rate ends up far higher than the 5% you think you are getting.
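You can see the effect in a small simulation. The sketch below runs a hypothetical A/A test in which both variants have an identical 5% conversion rate, so any declared “winner” is false by construction. It compares how often a winner is declared when you peek after every batch of visitors versus only once at the predetermined end; the batch size, rate, and trial count are arbitrary choices for illustration:

```python
import math
import random
from statistics import NormalDist

def significant(conv_a, conv_b, n, alpha=0.05):
    """Two-proportion z-test on equal-sized arms; True if p-value < alpha."""
    p_pool = (conv_a + conv_b) / (2 * n)
    se = math.sqrt(2 * p_pool * (1 - p_pool) / n)
    if se == 0:
        return False
    z = abs(conv_a / n - conv_b / n) / se
    return 2 * (1 - NormalDist().cdf(z)) < alpha

def run_aa_test(p=0.05, batch=500, batches=20):
    """One A/A test: both arms convert at rate p, so any 'winner' is spurious."""
    conv_a = conv_b = n = 0
    peeked_winner = False
    for _ in range(batches):
        conv_a += sum(random.random() < p for _ in range(batch))
        conv_b += sum(random.random() < p for _ in range(batch))
        n += batch
        if significant(conv_a, conv_b, n):
            peeked_winner = True   # we would have stopped the test here
    return peeked_winner, significant(conv_a, conv_b, n)

random.seed(1)
trials = 2000
results = [run_aa_test() for _ in range(trials)]
print(f"False winners when peeking every batch: {sum(r[0] for r in results) / trials:.1%}")
print(f"False winners at the fixed sample size: {sum(r[1] for r in results) / trials:.1%}")
```

With twenty peeks per test, the chance of calling a spurious winner climbs well above the nominal 5%, even though the two variants are identical.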

That many marketers do not understand or appreciate the importance of statistical significance, I can still understand; they are trained as marketers, not mathematicians. What really surprises me, however, is that some of the most widely used test-and-optimization tools do exactly what Evan Miller warns against: they present an ongoing “significance” check to the marketer, encouraging them not to think about sample sizes in advance, and in the end they deliver completely unreliable results, reducing themselves to test-OR-optimization tools.

So what’s a marketer to do? Go back to your initial goal: you want to maximize your outcomes, right from the start. This is not a new mathematical challenge; algorithms for it have existed for decades. Whether it is multi-armed bandit algorithms, Bayesian methods, or combinations of the two, these approaches are designed to keep learning while steering more and more traffic toward the better-performing variant, so far fewer conversions are sacrificed along the way. Luckily, there are also marketing tools out there that offer this instead of classic testing.
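To make the bandit idea concrete, here is a minimal sketch of Thompson sampling, one common multi-armed bandit strategy, for two page variants. The conversion rates and visitor count are made up for the example, and real tools implement this with many refinements:

```python
import random

def thompson_pick(successes, failures):
    """Sample each arm's Beta posterior; serve the arm with the highest draw."""
    draws = [random.betavariate(s + 1, f + 1)
             for s, f in zip(successes, failures)]
    return draws.index(max(draws))

# Hypothetical true conversion rates for variants A and B.
true_rates = [0.04, 0.05]
successes, failures = [0, 0], [0, 0]

random.seed(7)
for _ in range(10_000):                # each iteration is one visitor
    arm = thompson_pick(successes, failures)
    if random.random() < true_rates[arm]:
        successes[arm] += 1
    else:
        failures[arm] += 1

print("Visitors per variant:", [s + f for s, f in zip(successes, failures)])
print("Conversions per variant:", successes)
```

Instead of splitting traffic 50/50 until some end date, the algorithm shifts most visitors to the stronger variant as evidence accumulates, which is exactly the “optimize while you learn” behavior a fixed-sample A/B test gives up.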

Want to improve your online marketing outcomes? Then forget about testing tools. Head for optimization tools!
