In a recent blog post, GrooveHQ founder Alex Turnbull lamented that many of his team’s A/B tests generate inconclusive results. Their experience is a painful one that is increasingly shared by businesses of all shapes and sizes as A/B testing solidifies its place in the web marketing tool suite.
A/B testing is surging in popularity. Over the past five years, interest as measured by the proportion of Google searches for the term “A/B testing” has increased by 500%.
The majority of this growth has come from two trends: 1) the popularization of ‘lean’ methodologies in the marketing departments of both SMBs and large enterprises; and 2) the rise of affordable, easy-to-use cloud-based software. Directionally, these are promising trends in that they represent an emphasis on using data to inform business decisions. The net benefit has been complicated, however, by the experience of GrooveHQ and so many others.
We’ve all read at least one blog post claiming to have improved click-through or conversion rates by some outlandish percentage through just ‘one simple A/B test’ – usually involving a change to copy or the color of a button. It is precisely this hype that has pushed A/B testing into the spotlight as an underleveraged marketing tactic, but it has also encouraged a style of experimentation that more often than not ends in disappointment.
Soft Eyes: Context and Empathy in Experimentation
The underlying problem with these tests is that they don’t consider the human element as much as they should. Too many people running A/B tests today lack what my favorite character from The Wire, Detective Bunk Moreland, might call soft eyes.
A successful approach to A/B testing requires deep consideration of context with regard to your product, your market and, most of all, your users. Small cosmetic changes to the UI or copy often produce equally small changes to click-through or conversion rates, so these A/B tests require relatively greater statistical power to achieve a comfortable level of significance. As a result, these ‘tweaks’ are better suited to mega-traffic companies like Google or Amazon, which can justify the cost of testing such changes and afford to optimize the width in pixels of a border, because a small lift in performance has a substantial impact on their bottom line when rolled out at scale.
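To make the power problem concrete, here is a rough back-of-the-envelope sketch in Python (the function name, baseline rates, and lifts are illustrative, not from the post) using the standard two-proportion z-test sample size formula. It shows why a tiny cosmetic lift demands mega-traffic while a larger product-level change does not:

```python
import math
from statistics import NormalDist

def sample_size_per_arm(p_base, p_variant, alpha=0.05, power=0.80):
    """Approximate visitors needed per arm to detect a shift in
    conversion rate from p_base to p_variant (two-sided z-test)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    variance = p_base * (1 - p_base) + p_variant * (1 - p_variant)
    effect = abs(p_variant - p_base)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / effect ** 2)

# A 'shallow' tweak: 5.0% -> 5.5% conversion (a 10% relative lift)
print(sample_size_per_arm(0.05, 0.055))  # roughly 31,000 visitors per arm

# A deeper product change: 5.0% -> 7.5% (a 50% relative lift)
print(sample_size_per_arm(0.05, 0.075))  # roughly 1,500 visitors per arm
```

The twenty-fold gap in required traffic is the whole story: a site with a few thousand visitors a month simply cannot power a button-color test, but it can power a test of a meaningful product change.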
But for everyone else, ‘shallow’ A/B tests of colors, fonts and the like will often yield inconclusive results. At worst, this approach to testing can offer a skewed representation of what’s happening and actually misinform decisions. In most cases, these tests will be a net negative, as the costs of setup, management, sub-optimal performance and opportunity outweigh what is, at best, a marginal improvement. And in the end, a proliferation of meaningless tests strengthens the case against data and experimentation, both within individual organizations and in the community of digital marketers, product managers, and data scientists at large.
Empathic A/B testing, by contrast, prompts us to ask bigger, hairier questions about people – questions that often require more thought and involve different functions of a company. Here are 9 questions that everyone should ask when designing their next A/B test with empathy:
More often than not, if you start asking these critical questions you’ll find that the answers are not ‘a different button color’ or ‘different headline copy’. With genuine thoughtfulness, these questions will lead you to design meaningful tests that address the features and purpose of your product. And for the vast majority of the world’s growing ranks of ‘A/B testers’, these are the types of changes that will really move the needle.