Why Is the Success of Mobile Apps So Difficult to Measure? Critical Issue in Audience Measurement Unraveled

As advertising executives convene in New Orleans for the 4A's Transformation Conference this week, a pressing issue facing the industry is advertising on mobile, and specifically the vexing problems of executing media planning and advertising campaigns in mobile apps. There is widespread consensus that apps are an attractive medium for reaching a large (more than 800 million users globally), diverse and engaged audience (70% of user time on smart devices is spent in apps, rivaling time spent on TV). Yet advertisers remain skeptical about the effectiveness of in-app campaigns. Indeed, in-app advertising remains mostly performance-based (largely apps advertising in other apps to drive downloads), with brand advertisers largely on the sidelines.

A primary reason cited for this is the lack of audience measurement data for apps, which prevents the media planning and campaign targeting commonly employed in display and TV advertising. At first glance, this is puzzling. Audience measurement is well understood and routinely performed by firms such as Nielsen (TV), Arbitron (radio) and Comscore (web). It would appear to be a no-brainer for these companies to jump on apps. But, clearly, that has not been the case. What gives?

Panels are to blame. It turns out that virtually all audience measurement relies on panel-based techniques, which work as follows.

1. Instrument the gateway systems of a selected panel of users, e.g., browsers (web) and cable boxes (TV).
2. Observe the behavior of the panel, and, finally,
3. Extrapolate the panel's behavior to the larger population (a minimal sketch of this step follows the list).
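
For concreteness, here is a minimal sketch of that extrapolation step in Python. All figures are hypothetical, and real measurement firms layer demographic weighting and other corrections on top of this basic projection.

```python
# Minimal sketch of panel-based extrapolation (step 3). The panel size,
# viewer count, and population figure are hypothetical.

def estimate_audience(panel_viewers: int, panel_size: int, population: int) -> float:
    """Project the share of a panel that consumed an item onto the full population."""
    reach_rate = panel_viewers / panel_size   # fraction of the panel observed watching
    return reach_rate * population            # projected audience size

# Example: 1,200 of a 25,000-person panel watched a show; project onto
# 120 million TV households.
projected = estimate_audience(panel_viewers=1_200, panel_size=25_000,
                              population=120_000_000)
print(f"Estimated audience: {projected:,.0f}")  # -> Estimated audience: 5,760,000
```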

Sounds great, except that panels don't work for apps. All panel-driven techniques rely on the idea of popularity persistence:

1. Given a large number of choices, humans will gravitate toward a small set of popular items, and
2. An item that is popular today is going to be popular 30, 60 and 90 days from today. In other words, popularity is persistent.

Both of these hold for traditional media like TV and the web. A popular web site (e.g., cnn.com) is very unlikely to vanish off the map in 60 days. The same goes for TV shows -- if "American Idol" is popular today, it will be popular 90 days hence. If one were to track the top 20 TV shows in the U.S., about 15% (three shows) would churn over in 30 days and approximately 35% (seven shows) in 90 days. In other words, on average, about 13 of the 20 shows would still be in the top 20 after three months. And since many of the shows that do drop off are one-time broadcasts (presidential debates, the Grammys, the Oscars, sporting events), regularly scheduled popular shows hardly churn at all.

In the app world, popularity models are radically different. At Mobilewalla, we have accumulated the largest volumetric database of app market data in the world, and have been employing innovative big-data techniques to glean insights from this data. One of the surprising discoveries we've made is that app popularities do not persist. Considering the top 100 apps in the iTunes store at a given time, about 45% churn over in 30 days and 85% in three months. Interestingly, many of the persistently popular apps turn out to be reference apps (such as Google Maps, Dictionary.com, or Yelp). In categories such as games and lifestyle, volatilities are stunning -- in some game categories, 50% of the top 100 apps churn over in seven days. By any measure, app popularities are highly transient.
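
The churn numbers above boil down to a simple comparison of ranked snapshots taken at two points in time. The sketch below shows one way such a churn rate can be computed; the titles and list sizes are invented for illustration and do not reflect Mobilewalla's actual data or methodology.

```python
# Illustrative churn computation over top-N snapshots. The entries below are
# invented examples, not real rankings.

def churn_rate(top_then: list[str], top_now: list[str]) -> float:
    """Fraction of the earlier top-N list that has dropped out of the later one."""
    dropped = set(top_then) - set(top_now)
    return len(dropped) / len(top_then)

# Hypothetical top-5 TV shows, 90 days apart: one show churns out.
shows_q1 = ["Show A", "Show B", "Show C", "Show D", "Show E"]
shows_q2 = ["Show A", "Show B", "Show C", "Show D", "Show F"]
print(churn_rate(shows_q1, shows_q2))  # 0.2 -> popularity persists

# Hypothetical top-5 mobile games, 30 days apart: four of five churn out.
games_m1 = ["Game A", "Game B", "Game C", "Game D", "Game E"]
games_m2 = ["Game A", "Game V", "Game W", "Game X", "Game Y"]
print(churn_rate(games_m1, games_m2))  # 0.8 -> popularity is transient
```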

An important consequence of this popularity transience is that panels cannot be used to measure app audiences. No selected group of users, however carefully empaneled, can keep up with the underlying volatility of apps and remain representative of the wider population. A possible fix might be to change the panel frequently, but aside from the substantial overhead involved, doing so would also violate the fundamental statistical properties on which panel-based inference rests.

Our data and behavioral scientists at Mobilewalla have been laser-focused on this problem -- inventing reliable and scientifically rigorous methods of audience estimation that employ non-panel-based techniques. We are demonstrating and launching this capability at the 4A's Transformation Conference this week. Stay tuned.
