
Never. Stop. Testing.
2015.03.18

I raised an eyebrow when I saw the title of a FierceCMO opinion piece: “3 reasons to stop A/B and multivariate testing.”


It’s not that I hadn’t heard objections to testing before. Before coming to us, many of our clients found testing difficult to do correctly and prohibitively expensive. But no one we’ve run into has thought testing was a bad idea per se.


In this data-driven world, I would have thought everyone had run across a counterintuitive test result, or at least a mild surprise. Who is so bold as to claim they know which approach will outperform all others, in all cases?


I’d propose that you stop testing only if you are content with your campaigns’ results and your own career trajectory. And if the rationale given in that piece makes you want to stop, you might be doing it wrong.


Let’s take the objections point by point.


“‘A/B and multivariate testing involve sending traffic to poorly performing pages as a part of the testing process, in order to determine which pages are the best-performing,’ Dan Gilmartin, CMO at BlueConic, pointed out.”


In a well-designed A/B test, you may be testing a high-performing page against an iteration of it in order to understand which elements influence your audience. If your original experience is a good one (and it should be), there will not be a “poorly” performing page in the test.


“By definition, A/B testing means showing certain sections of your audience lesser content.”


Not lesser content, just different. For example, if we find that using the word “FREE” in a headline outperforms the control (an identical headline without “FREE”), we stand to gain more conversions. Testing imagery can also impact conversions. These small changes can have a significant impact, and it’s a stretch to call either version “lesser” content.

“If you're consistently putting out excellent product, maybe the difference between your best and bestest content is negligible.”


"Negligible" can be a dangerous word when discussing increased conversion rates. “Negligible” is related to the overall traffic number. A 1% lift in conversion for a site that gets 10 million hits is significant, especially in RPL models.


“It doesn't put your best foot forward.”


If you have a best foot, by all means put it forward. The fact is that most brands don’t always know what will and won’t work. Even when they think they have a “winner,” they fail to imagine the intricacies of the consumer experience.


“Instead [of testing], marketers should consider real-time optimization.”


A/B testing and real-time optimization are not mutually exclusive. Real-time optimization is great — but it’s not a multi-channel solution.


What’s more, after a test, you can regret that you have sacrificed some views and opens — or you can rejoice that you now know what does and doesn’t work. You learn for next time. You are in a process of continuous improvement.


The risk isn’t nearly as high as the author suggests. We typically recommend clients put 80% of their investment into their tried-and-true approach and 20% into R&D and testing. That balance lets you keep hitting your proven sweet spot while still improving. We’d argue that the real risk is in not testing.

And we’d hardly settle for just two tests, A and B. We recommend multiple simultaneous tests within the 20%. The more tests, the more you learn. And once you have a winner, you don’t stop testing. You continue to invest 20% (not 50%) in further testing.

“It wastes time.”

It’s not difficult to monitor the lift in conversion against test duration and make the call to switch to the higher-performing creative once the lift is significant. Real-time A/B platforms will do this automatically. The “extended time to conversion” would be made up for by the higher-performing iteration. The A/B test after that (you’d never stop testing) could yield further improvements.
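
For illustration, here is a minimal sketch of the kind of significance check behind that call: a standard two-proportion z-test on per-variant counts. The numbers and the plain-Python implementation are hypothetical, not taken from any particular platform.

```python
import math

def ab_significance(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: is variant B's conversion rate
    significantly different from variant A's?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)                 # pooled conversion rate
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))               # two-sided p-value
    return p_a, p_b, z, p_value

# Hypothetical counts: 10,000 recipients per variant
p_a, p_b, z, p = ab_significance(conv_a=200, n_a=10_000, conv_b=260, n_b=10_000)
print(f"A: {p_a:.2%}  B: {p_b:.2%}  z = {z:.2f}  p = {p:.4f}")
if p < 0.05:
    print("Lift is significant; route the remaining traffic to the better-performing creative.")
```

Once the p-value drops below your threshold, you shift traffic to the winner, and the “extended time to conversion” starts paying for itself.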


If your testing strategy takes too much time, I think that is an indictment of your agency, not of testing. We can get results very quickly and change things on the fly, if not always in real time.


“It's limited.”

The author argues that testing strategies “test the average online visitor, not any targeted individuals or specific segments.” I’m not sure why he believes this. With proper segmentation and media planning, you can test all performance variables — creative, segments and offer.


In the end, we can agree that “Marketers need to aim instead for defining audiences and segmenting them into several complex categories that can yield a much deeper understanding of who the customer is and what they need, making it possible to better target messages, offers and content.” 


You can see that we’re passionate about testing here at HackerAgency! We’ve been doing multivariate testing since our beginnings — long before digital — and consider all the learning from it essential to our success. We’d truly hate to see anyone dismiss testing as dangerous or wasteful when the data says exactly the opposite.