On the Persistence and Evolution of Program Ratings

For most folks, if you’ve heard of “Q Scores,” you probably think of those survey results telling you how popular and recognizable different celebrities are.  Now, though, Q Scores has partnered with another company, General Sentiment, to branch out into the evolving territory of audience measurement with a service called TV Audience Evaluation Reports (TVAER).  TVAER measures “audience involvement and commitment” for individual television programs.  This measure is derived from data gathered from various news and social media platforms, including Twitter, and the results can be broken out along demographic and geographic lines.

In recent media coverage of the latest report from this new measurement service, I was particularly struck by this quote from General Sentiment’s chief executive officer Greg Artzt: “Knowing how many people watched your show is important. Knowing how many people will watch your show next week is revolutionary.”

This is an interesting statement for one very important reason: it suggests that these emerging measures of audience involvement, commitment, engagement, or whatever you might want to call them, are going to be a very accurate predictor of future audience exposure patterns.  And, to be “revolutionary,” they would need to be significantly more accurate than existing tools for predicting audiences’ exposure patterns.

But what I haven’t seen yet, unfortunately, is research showing this to be the case.  It may be out there (perhaps proprietary) — but as a potential consumer of such data, I would certainly be interested in seeing such a compelling claim backed up with research. 

This point reflects my general feeling that the future of academic audience research/ratings analysis is one in which we begin exploring the interactions between the various conceptualizations of the media audience that are now being offered by different measurement services.  Lots of interesting potential here if we’re able to pry the necessary data free from these new providers.

It’s also important to note that the TVAER service has entered a fairly competitive field, as other measurement firms such as Nielsen and TNS are offering similar metrics, similarly derived from scraping online conversation and news coverage.  What I find particularly interesting about the rapid growth in this area is the way it seems to represent an effort to reclaim some value for individual programs — value that was in many ways lost in the transition to the C3 ratings currency.  Today, television advertisers, thanks to the C3, are able to pay only for audiences who are exposed to their ads.  And yet, the audience marketplace still seems to value measurement services that provide more information about the program audiences (rather than just the commercial audiences): how they behave before, during, and after watching the program.  Clearly, the underlying assumption is that these behavioral dimensions of the program audiences also confer some value on the advertising time within these programs — but here again, I’ve yet to locate the research that spells this out.  In any case, program ratings appear to be persisting as a relevant currency in the audience marketplace, though certainly evolving in new and interesting directions.

