The Latest Content Power Ratings

I’ve got a bit of a fascination with Content Power Ratings, a proprietary performance metric for television programs created by the media buying agency Optimedia.  Late last week, they issued their most recent annual Content Power Ratings Report.

The Content Power Ratings are probably at the forefront of drawing upon the wide range of data streams made available by the Web and social media to develop new, much more multifaceted portraits of media audiences. The CPRs focus on television programs and seek to measure them across three dimensions: audience delivery (across TV, web, and mobile), involvement (defined as awareness of and loyalty to a program), and advocacy (defined as overall levels of conversation, PR activity, and personal recommendations).

The range of data sources used to construct the Content Power Ratings includes:

Optimedia’s primary research

Nielsen Media Research’s NTI database

Nielsen Online’s VideoCensus

comScore’s Media, Video, and Mobile Metrix

e-Poll’s ProgramPulse

Dow Jones Factiva

Google Trends

Facebook

Nielsen’s BuzzMetrics

Twitalyzer

Klout

The exact recipe for how all of these different data sources are combined is, of course, proprietary, as is the exact nature of the Optimedia primary research that factors into the analysis. Here, you can find Optimedia’s presentation of their latest report, where they highlight not only the top CPR performers (shows such as American Idol, Glee, and Dancing with the Stars top the charts), but also illustrate some interesting instances of programs that perform much better in their Content Power Ratings than they do in their traditional Nielsen ratings. Take, for instance, Glee, which is ranked second according to the CPRs but is only the 55th-ranked show in terms of Nielsen ratings.

The presentation also makes clear that individual programs can be broken down according to the multiple criteria that go into the creation of the CPR. It notes, for instance, that Glee ranks second in search volume and first in press mentions, that South Park ranks first in terms of online video viewing, and that Saturday Night Live is first among late-night programs in online buzz. The point of these examples is that the range of criteria that can be employed to assess the performance of a program continues to expand. As an advertiser, perhaps the overall CPR isn’t as important to you as one or more of the many subcomponents that go into it, and different advertisers might value different subcomponents. My hope is that as a diversity of success criteria become integrated into the audience marketplace, this will help support a greater diversity of programming.
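To make that idea a bit more concrete, here is a minimal, purely hypothetical sketch of what advertiser-specific weighting of subcomponents might look like. The actual CPR formula is proprietary, so the subcomponent names, scores, and weights below are invented for illustration only.

```python
# Purely hypothetical illustration: the real Content Power Ratings formula is
# proprietary, so these subcomponents, scores, and weights are invented.

def composite_score(subscores, weights):
    """Weighted average of a program's subcomponent scores (each assumed 0-100)."""
    total = sum(weights.values())
    return sum(weights[k] * subscores.get(k, 0.0) for k in weights) / total

# Made-up subcomponent scores for a single program.
program = {
    "tv_audience": 55.0,         # traditional ratings delivery
    "online_video_views": 80.0,  # online/streaming viewing
    "search_volume": 95.0,       # search-trend interest
    "press_mentions": 98.0,      # PR activity
    "online_buzz": 85.0,         # social conversation
}

# Two advertisers weighting the same underlying data differently.
reach_focused = {"tv_audience": 0.7, "online_video_views": 0.3}
advocacy_focused = {"search_volume": 0.3, "press_mentions": 0.3, "online_buzz": 0.4}

print(round(composite_score(program, reach_focused), 1))     # 62.5
print(round(composite_score(program, advocacy_focused), 1))  # 91.9
```

The particular numbers don’t matter; the point is that the same underlying measurements can produce very different assessments of a program depending on which dimensions a given buyer decides to value.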

All this being said, though, it’s important to recognize a crucial underlying premise.  As Optimedia CEO Antony Young said in a recent Ad Age editorial, “While shows with big audiences still have big value, the average rating of a top 10 prime-time show is almost a third of what it was 25 years ago. And that has implications for the way advertisers should evaluate TV and how networks should develop their schedules. We want TV to deliver more than eyeballs. We want it to generate, among other things, greater involvement with consumers.” 

The point implicit in these statements is that now that most TV shows don’t deliver large, inherently valuable audiences, the time has come for all involved to care about things other than size. Of course, if engagement and involvement relate to advertising effectiveness (and thus have value) today, wouldn’t they have related to advertising effectiveness (and thus been valuable) in 1985, or 1995?

Certainly the Internet and social media weren’t then what they are now, so the nature of the available data has changed dramatically. But if the marketplace truly demanded it, couldn’t somebody have provided some kind of measurement system that tapped into these other valuable dimensions of the TV audience?

Somebody did. Take a look at this late-80s report from the Markle Foundation, describing an initiative they funded during that decade called Television Audience Assessment, which measured audience appreciation of television programs and even demonstrated a clear linkage between program appreciation and commercial effectiveness. 

There’s a quote in the report from a large national advertiser that could have been said last week about the Content Power Ratings: “It [TAA] could redirect us from simply large audiences to selected audiences that may be more receptive to our message.”

However, advertisers and programmers roundly rejected Television Audience Assessment, and the initiative became little more than a footnote in the history of audience measurement.

Do we conclude from the failure of TAA that the current interest in looking beyond exposure is merely an effort to manufacture value where little really exists? Or (and this seems more likely) do we conclude that, when a marketplace is operating to everyone’s satisfaction, even if it is not operating at maximum efficiency, genuine innovations and improvements are likely to be ignored? The status quo is a tough thing to disrupt. But now that technology has irrevocably disrupted the status quo, the bottom line is that everyone’s a lot more willing to reconsider what makes an audience valuable than they were in decades past.
