Mobile App Analytics & A/B Testing

6 Metrics Mobile Apps Should Be Optimizing

Making a living from mobile apps is tough, and getting tougher. Last year, Gartner Research forecasted that 94.5% of all mobile apps will be free to download by 2017.

[Chart: Gartner forecast of the share of mobile apps that are free to download]

But Internet-connected mobile apps have created new opportunities to tap into data and build apps that are more delightful for users and more profitable for businesses. Thus far, the majority of marketers’ energy and budget has gone toward boosting downloads. In a world where free apps increasingly reign supreme, however, those resources will need to be redirected toward post-download analytics.

Predictably, a host of tools have emerged to track usage and post-download behavior in native mobile apps. ‘Vanilla analytics’ platforms like Flurry or Localytics do a great job of answering questions like ‘Who are my users?’ and ‘What are they doing?’ But they fall short of answering the central question of actionable analytics: ‘What works better?’

The power of data lies in experimentation and observation: to find the magic mix that turns free downloads into profitable mobile apps, app makers will have to question their assumptions and adopt a hypothesis-driven approach to both marketing and product development.

Here are six mobile metrics that will help both gaming and mobile commerce apps guide their optimization strategy:

1 – In-App Purchase Conversion Rate. Clearly, keeping tabs on the rate at which free users convert to paying users should be the primary goal for any app that monetizes through in-app purchases.

2 – Average Purchase Amount. Increasing the amount of revenue earned per in-app purchase can be a key revenue driver that deserves regular monitoring.

3 – Average Revenue Per User (ARPU). Understanding how much revenue your app earns from one user on average will help you gauge the health of your app’s monetization strategy relative to costs. ARPU is also a building block in the calculation of customer lifetime value (LTV) – a key metric for any business.

4 – Average Session Time. This refers to the average amount of time users spend in your app. In the context of optimization, how you treat this metric will depend on your app. Longer session times may be a good optimization target for a social game that is looking to increase user engagement, whereas shorter session times may be better for a productivity app that helps users get something done faster – like sending email or scheduling a calendar event.

5 – Retention Rate. If you’ve paid to acquire new installs, you’ll know it’s important to optimize the rate at which you keep those users active and engaged. Look at both short-term and long-term retention by examining the number of returning users on Day 1, Day 3, Day 7, Day 30, and beyond.

6 – Lifetime Value (LTV). At the end of the day, lifetime value is all that matters. In order to be successful in any business, you’ll need to understand how much value to expect from a newly acquired user on average – and then leverage that information to set your marketing budgets.
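
To make these six metrics concrete, here is a minimal Python sketch that computes each one from a toy set of per-user records. The record structure, the field names, and the 12-month lifetime used for LTV are illustrative assumptions, not the schema of any particular analytics platform:

    from datetime import date

    # Toy per-user records; field names are illustrative, not from any SDK.
    users = [
        {"purchases": [4.99, 0.99], "sessions": [300, 180],
         "install": date(2024, 1, 1), "last_seen": date(2024, 1, 8)},
        {"purchases": [], "sessions": [120],
         "install": date(2024, 1, 1), "last_seen": date(2024, 1, 1)},
        {"purchases": [9.99], "sessions": [600, 240],
         "install": date(2024, 1, 1), "last_seen": date(2024, 1, 30)},
    ]

    payers = [u for u in users if u["purchases"]]
    all_purchases = [amt for u in users for amt in u["purchases"]]
    total_revenue = sum(all_purchases)

    # 1. In-app purchase conversion rate: share of users who ever paid.
    conversion_rate = len(payers) / len(users)

    # 2. Average purchase amount: revenue per individual transaction.
    avg_purchase = total_revenue / len(all_purchases)

    # 3. ARPU: revenue averaged over ALL users, payers and non-payers alike.
    arpu = total_revenue / len(users)

    # 4. Average session time, in seconds.
    sessions = [s for u in users for s in u["sessions"]]
    avg_session = sum(sessions) / len(sessions)

    # 5. Day-7 retention: share of users still active 7+ days after install.
    day7_retention = sum(
        (u["last_seen"] - u["install"]).days >= 7 for u in users
    ) / len(users)

    # 6. Crude LTV estimate: treating the revenue above as one month,
    #    LTV ~= monthly ARPU * expected lifetime (12 months is a placeholder).
    ltv = arpu * 12

    print(f"Conversion: {conversion_rate:.1%}, ARPU: ${arpu:.2f}, LTV: ${ltv:.2f}")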

A/B testing is a proven and rigorous methodology for testing hypotheses about how changes to your mobile app will affect these key metrics. When A/B testing, you should be aware of a few additional statistics that will help you answer the question of what works better:

95% Confidence Interval is a range of values that describes where the true performance of a test variation is likely to lie, with 95% certainty. It is calculated as:

+/- 1.96 * SQRT(p * (1 - p) / n) for metrics measured as a proportion (0 < p < 1)

+/- 1.96 * Sample Standard Deviation / SQRT(n) for metrics measured as an average (u > 0)
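
As a sketch, both formulas translate directly into Python; the function names and the sample figures below are illustrative:

    import math

    def proportion_ci(successes, n, z=1.96):
        """95% confidence interval for a metric measured as a proportion."""
        p = successes / n
        margin = z * math.sqrt(p * (1 - p) / n)
        return p - margin, p + margin

    def mean_ci(values, z=1.96):
        """95% confidence interval for a metric measured as an average."""
        n = len(values)
        mean = sum(values) / n
        # Sample standard deviation (n - 1 in the denominator).
        sd = math.sqrt(sum((x - mean) ** 2 for x in values) / (n - 1))
        margin = z * sd / math.sqrt(n)
        return mean - margin, mean + margin

    # e.g. 120 purchasers out of 2,000 users in a test variation:
    print(proportion_ci(120, 2000))  # -> approximately (0.050, 0.070)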

Observed Improvement is the percent change in the metric you’re measuring. It can be calculated as:

(New Value – Old Value) / Old Value x 100
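
As a quick worked example (the figures are made up): if a variation lifts in-app purchase conversion from 5.0% to 6.0%, the observed improvement is (0.060 – 0.050) / 0.050 x 100 = 20% – a 20% relative lift, even though the absolute gain is only one percentage point.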

Chance to Beat Original expresses the statistical significance of the result. It takes into account the distribution of the difference in observed performance between the original and the test variation.
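
One common way to compute it for conversion-style metrics is a normal approximation to the difference of two proportions; the sketch below uses made-up counts, and real A/B testing tools may use more sophisticated methods:

    import math
    from statistics import NormalDist

    def chance_to_beat_original(conv_a, n_a, conv_b, n_b):
        """P(variation beats original), via a normal approximation to the
        difference of two proportions. conv_* are conversion counts,
        n_* are sample sizes."""
        p_a, p_b = conv_a / n_a, conv_b / n_b
        se = math.sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
        # Probability that the true difference (variation - original) exceeds zero.
        return NormalDist().cdf((p_b - p_a) / se)

    # Original: 100 of 2,000 users convert; variation: 130 of 2,000 convert.
    print(chance_to_beat_original(100, 2000, 130, 2000))  # -> roughly 0.98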