Exposure Or Engagement? What passes for engagement measurement is more often propaganda than research.

I was at my sister’s lake house this past week with my siblings and their children. Their six and my two made for a loud weekend. So I went fishing. When I returned I had apparently missed several calls on my cell phone — all but one from the same large advertiser.

“What do you think of engagement metrics?” she asked when I returned the call.

“It depends on how much reliance you place on them,” I replied. “Why?”

“We are negotiating a deal that includes program engagement metrics.”

Ah … engagement. No one can argue against engagement metrics, can they? In theory, engagement is a great concept. For program performance, research vendors typically try to answer a question like, “Are networks providing vehicles that keep viewers engaged throughout the program?” For advertisers, they try to answer, “Are advertisers creating messages that viewers easily recall?” But in practice, these questions are often answered with questionable research practices.

Commonly, research vendors recruit households to be part of a panel. Setting aside the inherent problem of sample size, panels can be cost-effective vehicles for a wide variety of research projects, but engagement or recall is not one of them. Subjecting a panel to routine questioning about television content changes viewing behavior: panel members get better at recalling information simply because they are repeatedly asked to. That learned behavior, brought about by the measurement process itself, is unique to the panel and is not only undesirable but can dramatically undermine the validity of the research in question.

With program engagement, the problem is even more pronounced. Panelists not only get better at recalling programming elements, they get better at anticipating the questions. Researchers often ask about plotlines, and plot summaries are available on numerous Web sites, so a panelist can “recall” a program without having watched it closely. No one likes to be asked questions to which they have no answer; it is human nature to do what is necessary to avoid appearing stupid.

So what did I advise the advertiser? Be smart, ask questions and leverage your position. I suggested she ask anyone quoting engagement metrics the following:

1. Qualify the research. How does the behavior of the audience in question compare to the baseline? How was the baseline calculated? Ask for the full distribution of engagement metrics. If 85% of the programs or commercials are above the baseline, bells should go off in your head (a quick sanity check along these lines is sketched after this list).

2. Ask for detail. What questions were asked of the panelists who watched the program or commercial? Were the questions leading or open-ended? How was the panel created? How big was the panel? How many panelists were asked about the program you are interested in? How was the panel compensated? Was there an incentive for providing more information or for watching more television?

3. Demand quantitative analysis. What percentage of the viewing audience actually watched the commercials? Ask the media seller to “detrend” commercial ratings when the program’s rating was higher at the end than at the beginning (one simple reading of detrending is sketched below). Ask for set-top-box data analysis of both commercial pods and programming where and when it is available.
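To make the first point concrete, here is a minimal sketch of that sanity check, using hypothetical engagement scores and a hypothetical vendor baseline; the program names and numbers are invented for illustration and do not come from any actual panel.

```python
# Sanity check on vendor engagement scores against their stated baseline.
# All names and numbers are hypothetical, invented for illustration only.

hypothetical_scores = {
    "Program A": 112, "Program B": 104, "Program C": 97,
    "Program D": 121, "Program E": 108, "Program F": 116,
}
vendor_baseline = 100  # the vendor's stated "average engagement" index

above = [name for name, score in hypothetical_scores.items()
         if score > vendor_baseline]
share_above = len(above) / len(hypothetical_scores)

print(f"{share_above:.0%} of programs score above the baseline: {above}")
# If the baseline were truly an average of comparable programs, roughly half
# should fall above it. A share like 85% suggests the baseline was set low
# or computed from a different population than the programs being sold.
```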
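The column does not define “detrending,” so the sketch below shows one simple reading of it, assuming made-up minute-by-minute program ratings: fit a linear trend to the program’s audience and remove it before crediting the commercial pod, so the pod is not rewarded for growth the program itself produced.

```python
# A minimal sketch of one way to "detrend" a commercial rating, assuming
# made-up minute-by-minute program ratings. This illustrates the idea only;
# it is not any vendor's actual methodology.

minute_ratings = [2.0, 2.1, 2.2, 2.4, 2.6, 2.8, 3.0, 3.1]  # audience rises over time
pod_minutes = [4, 5]  # minutes occupied by the commercial pod
pod_rating = sum(minute_ratings[m] for m in pod_minutes) / len(pod_minutes)

# Fit a simple linear trend to the program's minute ratings.
n = len(minute_ratings)
mean_t = (n - 1) / 2
mean_r = sum(minute_ratings) / n
slope = (sum((t - mean_t) * (r - mean_r) for t, r in enumerate(minute_ratings))
         / sum((t - mean_t) ** 2 for t in range(n)))

# Subtract the trend component at the pod's position, so the pod is judged
# against the program's average level rather than its rising tail.
pod_center = sum(pod_minutes) / len(pod_minutes)
detrended_pod = pod_rating - slope * (pod_center - mean_t)

print(f"Raw pod rating: {pod_rating:.2f}, detrended: {detrended_pod:.2f}")
```

In this invented example the raw pod rating looks better than the program average only because the audience was still growing; after removing the trend, the pod lands back near the program’s overall level.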

Remember to be skeptical when dealing with engagement metrics. The jury is still out on the processes and approaches vendors use. I would characterize much of what I have seen as propaganda, not research. If the guarantee represents a small percentage of the buy, then the exposure is limited. But if the guarantee is substantial, let the buyer beware.

by Frank S. Foster
Frank S. Foster ( fo**********@ev************.com) is a senior consultant at EVAD Consulting and is currently working with TNS in a strategic capacity related to its television audience measurement efforts. Prior to EVAD, Mr. Foster was the co-founder and president of erinMedia.
Courtesy of http://www.mediapost.com
