One of the key issues that is discussed at length in Audience Evolution is the way in which both content providers and advertisers are exploring alternative success criteria — which translate into alternative currencies — in the marketplace for media audiences.
Using television as an example: it's very difficult to sell audiences that are, more often than not, too small for ratings services to even report. Consider that only about 80 of the well over 500 television networks operating in the United States have average audiences large enough for the Nielsen Company to report their ratings. Or consider the Web, where panel-based audience measurement systems such as Nielsen's can provide ratings estimates for up to 30,000 different Web sites. That sounds like a lot, but it is in fact only a fraction of the Web sites in operation. The remainder have audiences too small to be reliably measured and reported by panel-based systems. Server log analyses represent an alternative, it should be noted, but they have their own weaknesses (consequently, the focus today is on developing approaches that integrate panel and server log analyses).
Given this state of affairs, it isn't surprising that the marketplace is exploring alternative forms of audience value. Last month, for instance, the Sundance Channel announced that it had collaborated with Nielsen to develop a new metric that "goes beyond exposure and into engagement to help give advertisers a sense of the effectiveness of their branded entertainment campaigns." This is only the latest in a steady flow of such efforts taking place not only in the realm of TV, but also in radio, print, and online (see Chapter 3 of Audience Evolution).
If you're interested in an alternative set of criteria being offered by Nielsen and employed by networks such as NBC in their negotiations with advertisers, head over to www.rewardtv.com. There, you'll be given the opportunity to take quizzes about individual TV program episodes. The more questions participants get right about what happened in an episode, the more "engaged" that program's viewers are, according to Nielsen, and, presumably, the more programmers can charge for those viewers. Over the past couple of months, philly.com, the online home of the Philadelphia Inquirer and Daily News, has been analyzing its Web traffic using a seven-part equation for online engagement that accounts for things like the extent to which site visitors interact with and participate in the site. Twitter's much-anticipated business model relies on the notion of resonance, an indicator of the extent to which users act upon and interact with "sponsored tweets." All of these examples represent paths away from basic exposure.
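The actual formulas behind these metrics are proprietary, but the basic mechanics of a composite engagement score can be sketched as a weighted average of normalized sub-metrics. Everything below (the component names, the weights, and the values) is invented purely for illustration; it is not philly.com's seven-part equation or Nielsen's methodology.

```python
# A minimal sketch of a composite engagement score, assuming each
# sub-metric has already been normalized to the [0, 1] range.
# All names, weights, and numbers here are hypothetical.

def engagement_score(metrics, weights):
    """Weighted average of normalized sub-metrics."""
    assert set(metrics) == set(weights)
    total_weight = sum(weights.values())
    return sum(metrics[k] * weights[k] for k in metrics) / total_weight

# Invented sub-metrics for one program's audience:
metrics = {
    "recall_quiz_accuracy": 0.62,  # share of episode-quiz questions answered correctly
    "comment_rate": 0.10,          # share of visitors leaving comments
    "return_visit_rate": 0.45,     # share of visitors returning within a week
}
weights = {"recall_quiz_accuracy": 3, "comment_rate": 2, "return_visit_rate": 1}

score = engagement_score(metrics, weights)
print(round(score, 3))  # prints 0.418
```

The point of the weighting step is the editorial judgment it encodes: deciding that recall matters three times as much as a return visit is exactly the kind of choice the industry has yet to standardize.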
At this point, these new measures differ substantially in the specific criteria they employ to determine the value of audiences in ways that extend beyond size. History tells us that eventually some conventional standards will be embraced by the industry as a whole. But the odds are that we're never going back to the days when the number of 18-49s in an audience was the beginning and end of an audience's worth to advertisers. For some advertisers this will continue to be what matters most. But for others it will be something more like engagement (however the industry ultimately chooses to define that). For still others it will likely be some form of behavioral response.
This, to me, is one of the most dramatic (yet little discussed) developments taking place in the media sector today — the rise of alternative success criteria that can coexist alongside traditional success criteria such as exposure. What makes this possibility particularly interesting is the preliminary evidence that there isn't necessarily a strong correlation between different success criteria. That is, content that's a "hit" according to traditional exposure criteria doesn't necessarily perform well according to other criteria such as engagement. And content that performs poorly according to traditional exposure criteria often performs quite well according to alternative criteria (hence, for instance, the very niche-y Sundance Channel's interest in developing an engagement metric as a counterpoint to its low ratings).
I discuss a number of examples along these lines in Chapter 5 of Audience Evolution, and there are more recent examples that merit discussion. According to a recent report by the media agency Optimedia, a number of shows that generally rank in the 60s in terms of Nielsen ratings, such as The Office, Family Guy, Glee, and Heroes, rank in the top ten according to the agency's Content Power Ratings. These alternative ratings employ criteria that account not only for cross-platform viewership, but also for things like advocacy and involvement, the latter two of which are measured via online activities such as the number of Facebook fans and the quantity of online discussion. But then there are still programs that are hits no matter how you slice it. American Idol, for instance, tops the charts whether the criteria employed are traditional ratings or Content Power Ratings.
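One simple way to quantify how far an alternative ranking departs from an exposure ranking is a rank correlation such as Spearman's rho: a value near 1 means the two metrics largely agree, while a value near 0 means they are telling very different stories. The sketch below reuses the show names mentioned above, but the rank values are invented for illustration; they are not Optimedia's or Nielsen's actual figures.

```python
# Hypothetical comparison of an exposure-based ranking and an
# engagement-weighted ranking using Spearman's rank correlation.
# Rank values are invented for illustration only.

def spearman_rho(rank_a, rank_b):
    """Spearman's rho for two equal-length lists of tie-free ranks."""
    n = len(rank_a)
    d_sq = sum((a - b) ** 2 for a, b in zip(rank_a, rank_b))
    return 1 - 6 * d_sq / (n * (n ** 2 - 1))

shows = ["American Idol", "The Office", "Family Guy", "Glee", "Heroes"]
exposure_rank = [1, 4, 5, 3, 2]    # invented ratings-based ranks
alt_rank      = [1, 2, 3, 4, 5]    # invented engagement-weighted ranks

rho = spearman_rho(exposure_rank, alt_rank)
print(round(rho, 2))  # prints 0.1: the two rankings barely agree
```

A correlation this low (in the invented data) is exactly the pattern described above: the two success criteria capture largely independent dimensions of audience value, even while a show like American Idol can sit at the top of both lists.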
From the standpoint of an audience researcher, these are very interesting times in which to sort through these alternative success criteria: to examine how they relate to traditional success criteria, as well as how they relate to each other. It's essentially an opportunity to reboot the entire field of ratings analysis, which over the years developed a wide range of incredibly valuable insights and theories about the factors that help explain and predict the size and demographic composition of media audiences.
Now, there are whole new sets of criteria, ranging from engagement to recall to appreciation to involvement, all of which represent starting points for new programs of research that try to predict and explain performance according to these new metrics. And then, of course, there is the equally fascinating question of how these different performance criteria interact with one another. For instance, when (and why) do programs’ exposure and engagement performance correspond? And when (and why) are they sometimes vastly different? These questions represent fascinating next steps for those of us researching the evolving relationship between media organizations and their audiences.