Broadcasters and operators naturally want to grow audiences, but failing that they at least want to be able to predict them. Accumulating data now suggests they can do both, by measuring the emotional engagement of their audiences.
It is more accurate to talk about engagement or immersion than raw emotion, because most of the time viewers are not cursing their team’s inept performance or losing sleep over a late horror movie. Many people will nevertheless find their mood has changed subtly through what they have watched, something TV advertisers have known for decades. The difference now is that social media, coupled with online services, provides a mechanism for assessing audience reaction and emotion in near real time, giving service providers as well as advertisers the chance to measure impact and, where relevant, respond accordingly.
For advertisers it can help match particular brands or products to the content likely to generate the best response, whether this is recollection, some call to action or an ultimate purchase. For some OTT or SVoD providers it may help determine which shows to acquire or remove and even how to select the content that will push up subscriptions the most and combat churn. For broadcasters and content originators it can help identify the right material for given target audiences at the outset and even determine which shows to produce, as Netflix has already succeeded in doing from customer feedback. Finally for the major studios there is scope for maximizing revenues across different markets and monetizing under-performing shows better.
But by drilling deeper and exploiting fine grained data close to the events, it is possible to go further and increase audiences on say an episode by episode basis, or for the next match in the case of a live sports service. This potential was identified by a recent survey conducted jointly by Canvs, which calls itself an emotion measurement specialist, and Georgetown University’s McDonough School of Business in the US. The study was valuable because it gave clear pointers to how an operator could develop a successful social TV strategy to maximize viewers, especially over the course of a series, as well as clues for further research that could yield even more granular predictive data.
The study was based on data collected from 55 scripted TV programs comprising 621 episodes that aired in the winter 2017 season in the US on the five broadcast networks CBS, ABC, NBC, FOX and The CW, the CBS/Warner Bros joint venture. Tuning data was combined with social TV activity from Canvs, whose platform has been used by Nielsen and Twitter TV ratings to measure social media activity for TV programs.
The hypothesis was that users’ subsequent activity is related to the degree of “narrative immersion” achieved by the content and that this can be measured roughly by analyzing the associated social media activity when it exists. Users were split into high, moderate and low levels of narrative immersion according to whether their tweets referred to characters and plots, to cast members, or just made general comments about the content. For example, the general tweet “@RansomCBS… great TV show” would indicate a low level of immersion, but still some, while the cast-related tweet “@CarloRota OMG you play the ‘bad guy’ just perfectly. Bravo! @RansomCBS” was deemed to show moderate immersion. Then a tweet relating to a character, such as “kate better not cheat on toby im gonna be so upset #ThisIsUs”, would indicate high immersion. Such indicators may not be entirely psychologically accurate, but they do seem to work for prediction.
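The tiering just described can be approximated with a simple rule-based classifier. The sketch below is purely illustrative: the name lists and matching rules are assumptions for demonstration, not the study's actual, proprietary method.

```python
# Illustrative sketch of the three-tier immersion classification described above.
# CHARACTERS and CAST are assumed example lists, not real production data.

CHARACTERS = {"kate", "toby"}          # in-show character names -> high immersion
CAST = {"@carlorota"}                  # real cast-member handles -> moderate immersion

def immersion_level(tweet: str) -> str:
    words = tweet.lower().split()
    if any(name in words for name in CHARACTERS):
        return "high"      # tweet engages with characters or plot
    if any(handle in words for handle in CAST):
        return "moderate"  # tweet addresses cast members
    return "low"           # general comment about the show

print(immersion_level("kate better not cheat on toby im gonna be so upset #ThisIsUs"))  # high
print(immersion_level("@CarloRota OMG you play the 'bad guy' just perfectly. Bravo!"))  # moderate
print(immersion_level("@RansomCBS... great TV show"))  # low
```

A production system would need entity resolution rather than exact word matching, but the tier ordering is the point here.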
The next task was to associate levels of immersion with viewing data on the basis of six independent variables: the relationship between successive episodes, program length, genre, day of week, start time of day and which of the five networks it aired on. There was a clear relationship between immersion and each of these variables, so that, for example, higher levels of total viewership, live viewing and same-day time-shifted viewing of an episode tend to be associated with higher total viewership of the next episode.
This particular finding was expected, since the buzz surrounding a program contributes to increased interest in the next episode. However, the authors claimed to be the first to analyze how the timing of time-shifted viewing affects program popularity. The indication here was that earlier viewing of an episode, whether live or time-shifted on the same day, led to higher total viewing of a program.
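The shape of the analysis implied above is a regression of next-episode viewership on current-episode immersion plus the control variables. A minimal sketch follows; the data is fabricated purely to show the structure, and only two of the six controls are included for brevity.

```python
# Sketch of regressing next-episode viewership on an immersion measure plus
# controls. All numbers here are simulated, not the study's data.
import numpy as np

rng = np.random.default_rng(0)
n = 200
immersion = rng.uniform(0, 1, n)      # e.g. share of high-immersion tweets
length = rng.choice([30, 60], n)      # program length in minutes
# ...the real study also controls for genre, weekday, start time and network.

# Simulated "ground truth": immersion lifts next-episode audience (millions).
next_viewers = 2.0 + 3.0 * immersion + 0.01 * length + rng.normal(0, 0.2, n)

# Ordinary least squares via numpy.
X = np.column_stack([np.ones(n), immersion, length])
coef, *_ = np.linalg.lstsq(X, next_viewers, rcond=None)
print(coef)  # [intercept, immersion effect (~3.0), length effect (~0.01)]
```

With real tuning data the interesting output would be the sign and size of the immersion coefficient after the controls are accounted for.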
On the emotional front, the study was building on earlier work by Canvs and others. Almost two years ago, in March 2016, Canvs collaborated with media agency Starcom to assess how emotional reactions to content, as measured by a similar analysis of tweets, affected viewers’ ability to recall TV commercials aired within the programs. This found that people registering a strong emotional response were 48% more likely to recall an embedded ad than those scoring significantly lower. This might sound nebulous, but a clearer finding was that viewers sending tweets about the show while it was airing, which could be taken as a surrogate for an emotional response, were 62% more likely to recall an ad than those who did not engage on Twitter.
Such studies have been criticized for lacking rigor and being based on non-standard methodologies, with disagreements over the definition and range of emotions. But Canvs has stuck to its game and enhanced its algorithms. It uses a proprietary text analysis algorithm to assess whether a tweet contains words, phrases or emojis associated with over 40 emotions. By this analysis, on average about one third of all tweets associated with a particular program are categorized as emotional. These in turn are broken down by the content of the post into the categories cast member, character, guest star and executive producer, as well as general.
The main finding is that the more tweets deemed emotional were posted around a given episode, the greater the gain in live viewing of the next episode, and this uptick was sustained throughout the series. There was a corresponding increase in total viewing of subsequent episodes including time-shifted, but the clearer impact was on live viewing, suggesting greater impatience to see the next episode. Emotional tweeting also boosted time-shifted viewing of the episode just aired for three days afterwards. Of course, this applies to series that go out at scheduled times and not to those made available all at once for binge viewing.
It is probably more helpful for operators and content producers to have a clear metric based on tweet types rather than attempting to translate these into emotions. Here the relevant finding is that tweets expressing views about talent, that is the people in the show, had double the impact of general tweets about the content.
The study raised various other questions that could be addressed by further surveys, such as a comparison between the impact of user-generated social media activity and, say, tweets planted by the operator or even by participants in a show. This could help calibrate the impact of an operator’s social media strategy.
The study was confined to aggregate data on live and time-shifted viewing, distinguishing only whether time-shifted viewing occurred within the first three days after airing or later, so it lacked temporal granularity. It also did not embrace device-level data, which would add multiple further dimensions. For example, as the authors admit, that would have enabled both device categories and individual households to be calibrated in various ways, including their relative levels of time-shifted and live viewing. Furthermore, as data is collected from targeted advertising trials and deployments, the success of particular messages or approaches could be broken down by device type, genre and demographic, or even finer-grained than that.
Analysis of audience responses to content at an emotional level has been conducted almost since the dawn of TV, but it is social media and the ability to respond interactively that have made it possible to get feedback almost instantly, while action can still be taken.
There are other firms in this emotional analytics game, such as New Zealand-based Parrot Analytics, which conducts what it calls mood analysis, again derived from textual analysis of social media activity. However, this data is trawled from forums, fan and critic ratings platforms and community-based sites, as well as Twitter. From this the firm compiles an opinion index, which can be positive or negative, claiming that the value for a given episode correlates with demand for the next one. It may not be quite clear how operators and content producers should exploit such data, but there must be potential for stimulating social media activity to boost viewership. Similarly, advertisers can tap in, given the evidence that social media activity correlates in various ways with recall, engagement and action.
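A net opinion index of the kind described can be pictured as positive minus negative mentions over total posts. The word lists and posts below are assumptions for illustration; Parrot Analytics's actual index construction is not public.

```python
# Toy net opinion index: (positive - negative) mentions over total posts,
# ranging from -1 to +1. Word lists are invented for demonstration.

POSITIVE = {"great", "love", "amazing"}
NEGATIVE = {"boring", "hate", "awful"}

def opinion_index(posts: list) -> float:
    pos = neg = 0
    for post in posts:
        words = set(post.lower().split())
        pos += bool(words & POSITIVE)   # post counts once if any positive word
        neg += bool(words & NEGATIVE)   # likewise for negative words
    return (pos - neg) / len(posts)

print(opinion_index(["great episode", "so boring", "love it", "fine"]))  # 0.25
```

Tracked episode by episode, a rising or falling index of this shape is the sort of signal the firm claims correlates with demand for the next installment.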