Traditionally, when we’ve talked about audience ratings, we’ve talked about the kind of data on audience size and demographics reflected in the exposure-focused measurement services provided by firms such as Nielsen and Arbitron. As I’ve been discussing quite a bit in this blog, a growing number of alternative approaches to audience value are emerging that are much less concerned with audience size and demos. Here, for instance, is a recent and informative discussion with an analyst at General Sentiment about their two measures of audience value, the Involvement Index and the Emotional Bonding Q. Assuming that one or more of these emerging measures takes hold in the audience marketplace, the next step for those of us who do academic audience research is to begin building a body of predictive theory and research along the lines of what has developed in what we’ll call traditional ratings analysis.
Traditional ratings analysis approaches to audience research have yielded a wealth of valuable insights into the patterns of audiences’ media usage that have helped in the formulation of strategies and tactics for attracting, retaining, predicting, and valuing audiences. This research has taken into account a variety of factors in order to better understand audience behavior, ranging from content characteristics (program type, quality, etc.) to structural factors (scheduling, competition, range of choices, etc.) to audience characteristics (e.g., demos). (For vital overviews and assessments of this research, see two books by Jim Webster and his colleagues, The Mass Audience and Ratings Analysis.)
So, needless to say, those of us who do audience research need to start thinking about the structural, audience, and content factors that can help to explain and predict how individual programs will perform on contemporary criteria such as engagement, involvement, buzz, etc. I’m starting to formulate some thoughts along these lines and (most importantly) to obtain some of the relevant data from these measurement firms in order to initiate this line of research.
Below are some questions I’ve formulated thus far in trying to outline some initial directions for this research agenda.
What is the relationship between various program types and engagement, involvement, etc.?
There seems to be reason to believe at this point that reality programs tend to perform better than scripted programs; but are there more extensive, systematic program type effects? Sitcoms vs. dramas? Are there measures of quality that might be useful for understanding audience engagement/involvement? I’m thinking, for instance, of the program testing that networks have long conducted, which has proven to be a somewhat helpful predictor of a program’s traditional ratings performance. How do these pre-tests perform in terms of predicting the engagement and involvement indicators being provided by measurement firms?
What is the relationship between scheduling factors and engagement, involvement, etc.?
Certainly, in a DVR/On Demand era scheduling is less of a factor, but I bet scheduling still plays a role in the level of engagement/involvement demonstrated by audiences. To what extent is engagement/involvement a function of dayparts? Do we see lead-in or lead-out effects? That is, might one program’s engagement level be a function of the engagement level of the program that precedes or follows it? Might engagement/involvement levels be a function of the day of the week? (Maybe we’re more or less engaged with programs we watch during the week vs. the weekend.)
What is the relationship between consumption platform and engagement, involvement, etc.?
This is obviously a new level of analysis that really didn’t concern traditional audience researchers. And I don’t know yet the extent to which the emerging audience measurement services are able to parse out their data according to consumption platform. But one can certainly imagine that the nature of the platform used could affect the extent to which an audience member feels/demonstrates engagement, involvement, etc. Might engagement/involvement, for instance, be to some extent a function of whether the content was consumed via a live versus an on-demand platform?
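Once data from these measurement firms are actually in hand, the kinds of relationships posed in the questions above could be explored with fairly standard regression tools. Here’s a minimal sketch in Python of what that might look like: engagement is modeled as a function of dummy-coded program and scheduling factors. To be clear, the engagement scores and the codings (reality vs. scripted, weekend vs. weekday, on-demand vs. live) are all made-up, purely illustrative values, not real measurement data.

```python
import numpy as np

# Hypothetical data: each row is a program, coded as
# [is_reality, aired_weekend, viewed_on_demand], with a
# made-up engagement score as the outcome. Illustrative only.
X = np.array([
    [1, 0, 0],
    [1, 1, 0],
    [0, 0, 1],
    [0, 1, 1],
    [1, 0, 1],
    [0, 0, 0],
], dtype=float)
y = np.array([7.2, 7.9, 4.1, 4.8, 7.5, 3.6])

# Add an intercept column and fit ordinary least squares.
X1 = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(X1, y, rcond=None)

for name, b in zip(["intercept", "reality", "weekend", "on_demand"], coef):
    print(f"{name}: {b:+.2f}")
```

With real data, each coefficient would amount to a first-pass answer to one of the questions above (e.g., how much of an engagement premium, if any, attaches to reality programming once scheduling and platform are held constant), though the actual analysis would obviously require far richer codings and controls.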
We already have a fairly good idea that, in terms of the measures of audience engagement and involvement that are being introduced into the marketplace, program performance is very much a function of the demographics of the program’s audience. Older television viewers, for instance, are just not as likely to blog and tweet about their favorite programs. Recent research by Nielsen provides some demographic data on the types of people who are more likely to discuss television programs online. No real surprise — this activity tends to skew young and male.
However, learning more about these patterns and their relationship to individual programs — or program types or networks — could ultimately provide very valuable insights into how the representation (and under-representation) dynamics of these new measurement systems might mirror or deviate from the representation and under-representation dynamics present in traditional ratings panels. Such patterns obviously can have dramatic implications for the nature of the programming that is produced.