The term “lazy assignment” is usually associated with computer programming, but as applied to A/B and multivariate testing experiments, it’s become a useful term to describe a method of test segmentation.
Evan Miller describes it beautifully on his blog, and we’d like to expand on that a bit in terms of how beneficial it can be when designing tests for apps and games.
Leveraging data to make better and more efficient decisions is the core of our philosophy. It’s why we built Splitforce. Any tool or technique you can use to squeeze even a tiny measure of extra efficiency out of your experiments can pay off big in the long run. With that in mind, it’s worth knowing that in many cases you can dramatically increase the efficiency of your tests by addressing one simple consideration in the planning phase: assignment.
Why Does Assignment Matter?
Users are a precious and finite resource. When you’re running tests to optimize your app, each visitor presents a learning opportunity. Input volume requirements for certain tests can be a significant challenge, especially for operations with less traffic to work with. By reducing the amount of noise in your tests, you can decrease the user resources that it requires to reach statistical significance.
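To put a rough number on how much noise costs you, here is a minimal sketch using the standard two-proportion sample-size approximation (95% confidence, 80% power). The conversion rates, the 20% relative lift, and the 1-in-5 cart-reach rate are all hypothetical figures for illustration, not Splitforce data.

```python
# Illustrative only: how diluting the base conversion rate inflates
# the number of users each variant needs. All rates are hypothetical.

def sample_size_per_variant(p1, p2, z_alpha=1.96, z_beta=0.84):
    """Approximate users per variant for a two-proportion z-test
    at 95% confidence and 80% power."""
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return int(round((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2))

# Only cart visitors are assigned: 10% convert, and we want to
# detect a lift to 12%.
focused = sample_size_per_variant(0.10, 0.12)   # 3834 per variant

# Every visitor is assigned, but only 1 in 5 ever reaches the cart:
# both rates (and their absolute gap) shrink by 5x.
diluted = sample_size_per_variant(0.02, 0.024)  # 21082 per variant

print(focused, diluted)
```

Under these assumptions, assigning everyone means each variant needs roughly five times as many users to reach the same statistical confidence.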
By lowering the number of subjects your test requires, it can be completed more quickly, and you can move on to developing and testing your app’s next improvement. The more validated improvements you make to your app, the better it will perform for both you and your users.
Leveraging Lazy Assignment
It’s a fairly simple principle in practice: don’t include visitors in the test unless you expect them to interact with the variation you’re testing, because they’ll lower your base conversion rate unnecessarily.
Miller’s example describes a hypothetical design team testing a new button on a website’s shopping cart page. If they assigned every website visitor to the test, visitors who never even reached the test page would still be counted, and an unnecessarily large number of “fails” would drag the base conversion rate down considerably, adding a lot of noise to the results. Instead of assigning every website visitor to the test, the team could dial in their results more accurately by including only those visitors who actually make it to the shopping cart page. That way, only visitors with a meaningful chance of converting are tested.
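In code, the difference comes down to where the assignment call lives. The sketch below is a hypothetical illustration of Miller’s shopping-cart example, not a real Splitforce API: the `get_variant` helper and the session dictionary are made up for the example. The key point is that a variant is assigned only when the cart page is actually rendered.

```python
import random

# Hypothetical sketch of lazy assignment; `get_variant`, `render_cart_page`,
# and the session dict are illustrative names, not a real API.

VARIANTS = ["control", "new_button"]

def get_variant(session):
    """Assign a variant the first time one is requested, then stick to it.
    Sessions that never call this are never counted in the experiment."""
    if "variant" not in session:
        session["variant"] = random.choice(VARIANTS)
    return session["variant"]

def render_cart_page(session):
    # Assignment happens here, lazily: only visitors who actually
    # reach the cart page enter the experiment.
    variant = get_variant(session)
    return f"cart page with {variant}"

# A visitor who browses the homepage but never opens the cart
# carries no variant and contributes no "fail" to the results.
homepage_session = {}
assert "variant" not in homepage_session

cart_session = {}
render_cart_page(cart_session)
assert cart_session["variant"] in VARIANTS
```

Eager assignment would instead call `get_variant` the moment any session starts, which is exactly what fills the results with visitors who never had a chance to convert.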
If you’re curious about exactly how the nuts and bolts of lazy assignment come together, Miller provides a detailed explanation in his Mathematical Appendix.
Think about your funnel: what is the ultimate element or goal you’re testing? Are you including user segments that aren’t relevant to the test at hand? Make sure you’re including only users you expect to see your variations, and you’ll narrow the field, reducing the number of participants you’ll need for your experiment.