How to Build an Attribution Model for TV in 2020

The following is republished with the permission of the Association of National Advertisers. Find this and similar articles on ANA Newsstand.
 

By Matt Collins

A year ago, Simulmedia hosted an event on how to devise a winning creative strategy and the impact effective creative can have on campaign outcomes. (Spoiler alert: creative matters every bit as much as media strategy.)

Guests came away knowing how to prepare creative for A/B testing, how to think about managing creative costs, and how to balance brand-building and performance. The audience was engaged, the evening sizzled with insights, and attendees topped it off with enough charcuterie to stock a very robust cheese cave.

But beyond all of this, one audience question from a digital marketer really stood out: “How can one attribute the outcomes of a TV campaign?” After all, it’s not yet possible to tap TV screens to access a product web page like it is on a computer or mobile device. Linear TV ads don’t include the capability to track individual users from their family rooms to their favorite e-commerce sites and the shopping carts they fill.

Attributing the results of linear TV advertising is different from attributing digital advertising, but it’s very possible to measure the impact TV campaigns have on consumer behavior and to identify what can be done to improve a campaign’s effectiveness.

To help, here is a step-by-step guide for how to create one’s very own TV attribution model.

Digital vs. TV Tracking and Attribution

The task of attributing digital media to a particular outcome typically begins by deploying a pixel or cookie. Both live on a website or mobile app and give an advertiser or publisher the ability to know when users have engaged with their digital properties, in what ways, and, often, what other sites or apps they’ve used. This is how, for example, an advertiser can know if users who clicked on their Facebook ad, viewed their products, and added something to a shopping cart ultimately decided not to buy.

Attribution on TV is different.

There are two ways advertisers can measure TV’s impact. Media mix modeling reveals the relative contribution each marketing channel makes to sales. This model requires a lot of data and can take months to build and operate, which limits how quickly an advertiser can make changes based on the model’s findings. Advertisers who want a faster read on campaign attribution can use automated content recognition data to match ad spots with the programs in which they aired. They also can drop a pixel on their websites or work with a mobile measurement partner to see when certain users visit their sites or use their apps.

Armed with this data, an advertiser can create a graphical overlay that shows how site traffic or app usage changed in response to the ads that aired. They can then compare site traffic or app usage within a set window after an ad airs (e.g., the five minutes following a spot) to the traffic during comparable periods when no ad aired. This enables them to estimate whether an ad elicited significantly more traffic or usage than what’s normal for any moment in time. Advertisers often call this a spike analysis, since it typically involves looking at a spike in a graph of digital activity during a campaign.
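To make the comparison concrete, here is a minimal sketch of a spike analysis in Python (pandas), assuming minute-level site-visit data is already on hand. The variable names, the five-minute window, and the baseline logic (same clock minutes averaged over earlier days) are illustrative assumptions, not a prescribed method.

```python
import pandas as pd

def spike_lift(visits: pd.Series, air_time: pd.Timestamp, window: int = 5) -> float:
    """Estimate incremental visits in the `window` minutes after a spot,
    relative to the average for the same clock minutes on earlier days
    (assumed to be periods when no ad aired)."""
    end = air_time + pd.Timedelta(minutes=window)
    post_spot = visits.loc[air_time:end].sum()

    # Baseline: the same minutes of the day, averaged over earlier days in the series.
    mask = ((visits.index.time >= air_time.time())
            & (visits.index.time <= end.time())
            & (visits.index.normalize() < air_time.normalize()))
    baseline = visits[mask].groupby(visits.index[mask].time).mean().sum()

    return post_spot - baseline

# Example (hypothetical data): visits is a minute-indexed Series of site visits.
# lift = spike_lift(visits, pd.Timestamp("2020-03-05 21:14"), window=5)
```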

Using spike analysis, a TV advertiser won’t be able to know who’s watching, but they can measure the lift in overall digital usage attributable to when an ad aired on TV. This can be extremely valuable.

Going Under the Hood

The following six steps show advertisers how to attribute the results of their TV campaigns and get to the root of what makes those campaigns tick by building their own model.

These steps may reveal a few leaks in advertisers’ models. The good news is that routine maintenance can improve the way these models work, thereby helping advertisers generate growth and certainty about campaign performance.

1. Select the unique business outcome that matters most to campaign success.

While TV is useful for achieving many different results, marketers should be specific about how they’ll assess campaign performance before they begin. Driving site traffic or app downloads has become an increasingly popular objective.

“Regardless of the outcome a brand picks prior to building the model, they’ll want to ensure that their provider is flexible enough to track multiple outcomes, such as incremental reach, unique reach, changes in awareness, and increased digital activity, like web traffic or in-app purchases,” notes Alex Papiu, Simulmedia’s senior data scientist.

Some brands shift their focus from awareness to increasing site visits mid-campaign, so marketers should select a provider that can guarantee in-flight flexibility.

2. Ensure measurement tools are set up correctly to track the desired business outcome.

There are a few ways to securely integrate mobile analytics with TV. “I recommend looping in reputable mobile integration partners such as Appsflyer or Kochava for app insights, and web engineers for pixel placement and Google Analytics,” Papiu says. “These integrations are relatively quick and often free. Plus, brands can ensure the data is accurate by looking at Google Analytics numbers and numbers from the pixel to see if they match.”
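As an illustration of the sanity check Papiu describes, the sketch below compares daily visit counts from a Google Analytics export against counts logged by the site pixel. The file names, the shared "visits" column, and the 10 percent tolerance are assumptions made for the example.

```python
import pandas as pd

# Hypothetical exports: both files have a "date" column and a "visits" column.
ga = pd.read_csv("ga_daily_visits.csv", parse_dates=["date"], index_col="date")
pixel = pd.read_csv("pixel_daily_visits.csv", parse_dates=["date"], index_col="date")

merged = ga.join(pixel, lsuffix="_ga", rsuffix="_pixel", how="inner")

# Relative disagreement between the two sources, day by day.
merged["pct_diff"] = (merged["visits_ga"] - merged["visits_pixel"]).abs() / merged["visits_ga"]

# Flag days where the sources disagree by more than 10 percent (assumed tolerance).
print(merged[merged["pct_diff"] > 0.10])
```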
 
3. Create a dynamic baseline that defines what “normal” looks like for a particular outcome.

If a brand advertises on TV, chances are they also run ads on search and social.

“Running simultaneous ads on multiple channels can obscure attribution, often giving one channel or another more credit than it deserves,” Papiu warns. “Establishing a dynamic baseline is the most accurate way to separate the signal from the noise.”

Papiu recommends establishing a 90-minute baseline that shows what normal looks like at any given point in time for the particular outcome of choice. Using weighted linear regression, marketers can model visits in the 45 minutes before and the 45 minutes after spot air time, with minutes closer to the air time receiving higher weights.

“If a brand is younger and in a high growth stage, or if a brand is seasonal, a data team can help prepare baseline models to determine what normal traffic to a website or app looks like at any given point in time,” Papiu says.
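Here is one way such a baseline might be sketched in Python, assuming minute-level visit data. The weighting function and variable names are assumptions, and a real implementation would likely exclude or downweight the minutes immediately after the spot so the spike itself does not inflate the baseline.

```python
import numpy as np
import pandas as pd

def dynamic_baseline(visits: pd.Series, air_time: pd.Timestamp) -> float:
    """Predict 'normal' visits at a spot's air time from the surrounding 90 minutes."""
    window = visits.loc[air_time - pd.Timedelta(minutes=45):
                        air_time + pd.Timedelta(minutes=45)]

    # Minutes relative to air time, roughly -45 to +45.
    x = (window.index - air_time).total_seconds() / 60.0
    y = window.to_numpy(dtype=float)

    # Weight each minute by its closeness to the air time (assumed weighting scheme).
    weights = 1.0 / (1.0 + np.abs(x))

    # Weighted least-squares fit of a straight line; its value at x = 0
    # (the air time) is the expected "normal" traffic for that minute.
    slope, intercept = np.polyfit(x, y, deg=1, w=weights)
    return float(intercept)
```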

4. Identify upper and lower bounds using statistical significance.

Baselines should include a confidence interval. This will help brands understand how many events are likely caused by random variation as opposed to something as potentially powerful as a TV campaign or another major media investment.

For example, “By establishing a 68 percent confidence level, one can expect that 32 percent of events will fall outside the confidence range just by chance,” Papiu explains. “A spike that clears the upper bound is therefore more likely to have been caused by something else, such as the media a brand has run. In other words, if a spike is above the upper bound, one can be roughly 68 percent sure the spike is attributable to TV.”

Historically, the spike returns to the baseline in less than 15 minutes following a spot’s air time.
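Continuing the sketch from step 3 (all names are assumptions), one simple way to set the bounds is to estimate normal variation from the pre-spot minutes and count only the visits that exceed the upper bound during the 15 minutes after air time.

```python
import pandas as pd

def attribute_spike(visits: pd.Series, air_time: pd.Timestamp,
                    baseline: float, n_sigma: float = 1.0) -> float:
    """Count visits above the upper bound in the minutes after a spot airs.
    n_sigma = 1.0 corresponds roughly to the 68 percent level discussed above."""
    # Estimate normal minute-to-minute variation from the 45 minutes before the spot.
    pre_spot = visits.loc[air_time - pd.Timedelta(minutes=45):air_time]
    sigma = float(pre_spot.std())

    upper_bound = baseline + n_sigma * sigma

    # Look at the 15 minutes after air time, since spikes typically return
    # to the baseline within that window.
    post_spot = visits.loc[air_time:air_time + pd.Timedelta(minutes=15)]

    # Attribute only the visits in excess of the upper bound.
    excess = (post_spot - upper_bound).clip(lower=0)
    return float(excess.sum())

# Example usage with the dynamic_baseline sketch from step 3 (hypothetical data):
# base = dynamic_baseline(visits, air_time)
# lift = attribute_spike(visits, air_time, baseline=base)
```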

5. Set a spend level that will deliver enough signal to get a read on the campaign’s effectiveness.

“To gain significant signal on upper funnel metrics, such as site traffic and app downloads, brands should plan to invest at least $50,000 per week on TV advertising,” Papiu says.

To measure lower funnel metrics, such as in-store visits and in-app purchases, brands may need to plan to spend twice as much. That’s because there are fewer data points further down the funnel, so brands will need to spend more money to get the same amount of data as they would when measuring the upper funnel.
 
6. Fact-check using third-party vendors or designated in-house measurement teams.

Vendors such as iSpot and TVSquared can provide an additional source of validation of the numbers brands get.

“If a brand has a data science team, they should be able to develop a more sophisticated method of understanding the impact of TV advertising on sales and the other channels a brand operates,” Papiu says. They also can help brands go next-level by assessing the impact of individual creative assets, dayparts, networks, and even programs.
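As a rough illustration of that kind of next-level analysis, the sketch below rolls hypothetical per-spot attributed lift up by network, daypart, and creative and compares cost per attributed visit. The column names and figures are invented purely for the example.

```python
import pandas as pd

# Invented per-spot results purely for illustration.
spots = pd.DataFrame({
    "network":  ["ESPN", "ESPN", "HGTV", "HGTV"],
    "daypart":  ["prime", "late", "prime", "daytime"],
    "creative": ["A", "B", "A", "B"],
    "spend":    [12000, 4000, 6000, 2500],
    "attributed_visits": [480, 90, 310, 60],
})

summary = (spots
           .groupby(["network", "daypart", "creative"], as_index=False)
           .agg(spend=("spend", "sum"),
                attributed_visits=("attributed_visits", "sum")))

# Cost per attributed visit: one simple way to compare placements and creatives.
summary["cost_per_visit"] = summary["spend"] / summary["attributed_visits"]
print(summary.sort_values("cost_per_visit"))
```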

The key is to acquire data of sufficient volume and quality over time to build and test a model, and then to keep collecting enough data to pave the way to optimization.

Common Mistakes to Avoid

There are several assumptions brands often make about implementing the steps outlined above that can significantly impair the reliability of their attribution measurement.

There are no shortcuts to ensuring accuracy; brands will want to be mindful of some potential pitfalls:

• Paying too much for impressions. The best solution is to get impressions for the minute of each spot, as opposed to the program hour. Not only does the minute-level data provide more accurate pricing, it also will deliver a more accurate number of attributed visits or attributed purchases.
• Waiting too long to optimize. Post logs can take anywhere from a few weeks to a few months, but this doesn’t mean brands can’t take into consideration mid-campaign insights from providers with access to the Nielsen panel. With providers that have the infrastructure to process ad detection or Nielsen audience insights on the fly, brands can begin tracking the funnel and about 80 percent of spot attribution mid-campaign, and then act on what they learn.
• Placing a pixel on only one site page. Because viewers cannot tap the screen to visit a brand’s site or app, advertisers need to account for different search engines and user interests that may send viewers to any number of different web pages. For that reason, marketers should throw the net wide and pixel the entire site. This will give brands a more accurate understanding of the relationship between spot conversion and web traffic as well as which content visitors like most.
• Thinking it’s too late to start measuring campaign performance. Just because a campaign has already started doesn’t mean it can’t be tracked. It may be too late to build a model from scratch, but a handful of vendors have the ability to place a tracking pixel on a site and provide real-time reporting capabilities and access to Nielsen, even if they’re not running the media. This can even be done retroactively, since a few providers can run post-campaign analysis by collecting ISCI codes and post logs from a past campaign.
• Not knowing how a vendor’s attribution model works. If a brand wants a vendor to build a TV attribution model for them, there are several companies that can help. Brands should make sure they know how a third-party vendor’s attribution model actually works. This includes understanding how a vendor sets the baseline and upper bound. This will help ensure that brands can trust that the results they receive are, in fact, attributable to their TV campaigns.

Building a dependable attribution model is equally important for new-to-TV and experienced TV advertisers if they intend to learn and improve how effectively their campaigns attract and convert new customers. Investments in attribution also can help them determine if their results are being watered down by overly generous models.

At the end of the day, and at the end of a campaign, successful attribution on TV comes down to three things: methodology, access, and controls. If a provider, agency, or advertiser has the right statistical methodology in place, access to reliable measurement tools, and a constant but adaptable baseline, their attribution can be trusted, and they can let TV do what it does best: reach and create more customers.

Matt Collins is the SVP of marketing at Simulmedia, a partner in the ANA Thought Leadership Program.

 

 
