Measuring News Consumption in a Digital Era

By Michael Barthel, Amy Mitchell, Dorene Asare-Marfo, Courtney Kennedy and Kirsten Worden

The news media’s transition to digital has brought major upheaval to the industry – including a multitude of new providers and new ways to get news. And just as American news organizations have had to drastically reevaluate their business models, researchers trying to measure the U.S. public’s news consumption also need to reexamine the traditional ways they have done so.

In the mid-20th century, when media research came into its own, this task was more straightforward. There were only a few different ways to get news, and all were clearly distinct – print publications, radio or television. But over the past few decades, in addition to a plethora of new forms of news (from 24-hour news channels to news websites), many news outlets no longer stay confined to producing content on only one platform. For instance, to meet the growing digital audience, newspapers like The New York Times also produce audio podcasts, which can be heard on radio stations or through a smart speaker, and video series, which can be seen on a cable TV network or through a streaming device (such as a Roku or Fire Stick). And cable news outlets and other news providers have an active presence on Facebook, YouTube and other social media sites, further blurring the line between platforms. Finally, there is an industry-wide concern that news consumption habits are overestimated in surveys where respondents self-report their behavior.

Given the increasing complexity and interconnectedness of this news landscape and concerns around overreporting of news consumption, Pew Research Center wanted to explore how best to measure news consumption: Where do currently used survey practices still work and where might changes be in order?

This report is the culmination of this effort and is organized into three sections: Chapter 1 looks at the U.S. public’s familiarity with newer concepts related to news; Chapter 2 examines possible ways to improve survey-based measures of news consumption; and Chapter 3 compares survey results with passive data collected directly by tracking software that news consumers downloaded to their digital devices.

Americans are largely familiar with new technologies but often don’t think of them as news sources

Americans are familiar with new digital platforms, but few use them for news

In the survey of U.S. adults, there is mixed evidence about the public’s understanding of newer forms of media and news, which has an impact on the topics survey researchers can reasonably ask about. U.S. adults are broadly familiar with technologies like streaming devices or services, podcasts and news alerts. At the same time, though, many do not seem to use most of these for news consumption, and results from the cognitive interviews suggest that many do not even think of these new forms as ways to get news.

Additionally, as news consumers navigate an information environment that includes news aggregators and social media feeds, confusion abounds regarding the original source of reporting. Only 9% of U.S. adults are very confident that they can tell if a news organization does its own reporting, and, when asked to identify which of six sources do this (See Chapter 1), nearly a quarter (23%) could not identify any of them correctly.

Finally, in an era of rapidly changing business models for news organizations, this study finds a need for survey researchers to carefully specify what they mean by “paying for news.” When asked generally if they pay for news, many people do not seem to think of the specific ways that they do pay – not to mention the large share of Americans who pay for news indirectly, such as through a cable TV subscription.

Possible ways to improve survey questions about news consumption

The findings reveal that, while there is no “silver bullet” for perfect survey measures of news consumption, a series of refinements could drive marginal improvements – for example, in reducing overreporting.

The study tested a number of concepts, including adding a reference period – e.g., “In the past week, how many days did you get news from …” – or examples – e.g., “Daily newspapers (such as The New York Times, Wall Street Journal, or your local daily paper)” – to core survey questions about news consumption. The study found that these two changes largely do not affect estimates of news consumption among the U.S. public overall, although they may make important differences for specific platforms. For instance, a specific reference period appears to get more accurate measures of radio consumption, and examples may help respondents to better understand what is meant by national network TV outlets, which were often confused with cable TV news in cognitive interviews.

Moreover, the study finds that, when asking about how often people consume news, showing the response options in low-to-high order (i.e., starting with “never” and working up to “often,” rather than the reverse) produces no significant differences on individual items but does show a pattern of generally lower estimates of news consumption. And while there is a close correspondence between respondents saying they get news “often” or “rarely” and saying they do so a specific number of days per week, a response of “sometimes” is used to indicate a wide range of news consumption habits. In other words, to one respondent, “sometimes” can mean once a week, and to another, it could mean three times a week or more.
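
One way to make that spread concrete, assuming a dataset that contains both the categorical frequency item and a days-per-week item for the same platform, is to cross-tabulate the two. The Python sketch below illustrates the idea; the variable names and the tiny example dataset are hypothetical stand-ins, not the study’s actual data.

```python
# Hypothetical sketch: how do categorical frequency labels map onto a
# days-per-week measure for the same platform? The tiny dataset and the
# variable names are illustrative, not the study's actual data.
import pandas as pd

responses = pd.DataFrame({
    "freq_label": ["often", "sometimes", "sometimes", "rarely", "sometimes", "often"],
    "days_per_week": [7, 1, 4, 0, 3, 6],
})

# Rows: categorical label; columns: reported days per week.
print(pd.crosstab(responses["freq_label"], responses["days_per_week"]))

# Spread within each label; "sometimes" can cover anything from 1 to 4+ days.
print(responses.groupby("freq_label")["days_per_week"].agg(["min", "max", "mean"]))
```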

An exploration of the potential to use passive data, collected via software people download to record their activities online, as a direct measurement of the public’s digital news habits – free of the concerns with self-reporting inherent to surveys – shows some promise. Yet there are still too many pitfalls to rely on it for a complete portrait of Americans’ digital news consumption. Estimates derived from passive data are systematically lower than those from survey questions, with inadequate coverage of devices being one apparent culprit: Most of the respondents who agreed to have their news consumption tracked said that they had additional devices that were not being tracked, and so some of their news consumption was likely not captured.

That is not the only possible issue with passive data, which generally cannot track in-app news consumption (e.g., when someone taps on a link to a news story within a social media app). And a similar measurement from a commercial metrics provider comes in even higher than the estimates from the survey data. This points to one strength of the survey approach: its sources of error are consistent, well-studied, and widely understood, while the sources of error in passive data are, at present, unclear, dependent on the specifics of data collection, and difficult to adjust for.

Survey-based measurement of news consumption is not without its own problems – perhaps foremost among them is people’s tendency to exaggerate their news consumption, consciously or not. The study finds strong evidence of this: Many Americans say that following the news is “very important” to being a good citizen, and those who say this are more likely than others to overestimate their news consumption when their survey responses are compared with passive data tracked on their devices. This suggests that following the news is seen as a “socially desirable” behavior by many people, which may lead them to think aspirationally about their news consumption – i.e., how often they ideally intend to consume the news rather than how often they actually do – when answering survey questions about it.

When it comes to measuring news consumption, tracking respondents’ digital devices does not capture all of their online activity

Still, overall, this yearlong research effort reveals the continued value of survey research – both in and of itself and compared with other options – and indicates ways to further improve data quality. The strength of survey research stands out in particular for the purpose of providing comprehensive and comparable tracking of the public’s news consumption habits over time and capturing a representative slice of the full U.S. adult population as well as demographic subgroups. Further, surveys allow the measurement of multiple different forms of news consumption (not just digital) in the same way, at the same time – and across time. Passive data has useful applications in the consumer world and can be a tool for publishers and others who want a fine-grained picture of user behavior. But the data does not, at present, seem well suited for high-level estimates of news consumption.

It is worth noting that Pew Research Center’s own organizational expertise in survey work may incline its researchers toward a more enthusiastic endorsement of that methodology. But the Center has also long explored and produced news consumption research using other types of data collection, such as tracking the social media habits of a representative sample of U.S. adults, tracking activity in public social media spaces around certain topics, studying aggregated search behavior around news events and making use of commercial metrics. The Center, particularly in the area of news research, looks forward to continuing to explore new data opportunities and further developments of those that have already become available. As we put it on our website: “We continue to search for ways to expand and strengthen the traditional methodologies that underlie survey research and to explore the potential of alternate methods of conducting surveys and measuring public opinion.”

Data sources and methods

This study took a multimethod approach to investigating these questions, drawing on cognitive interviews, split-form survey experiments, comparisons between passive data and self-reported survey data, and a full, nationally representative survey. The details of each are provided briefly below.

After an initial round of brainstorming and testing, the formal process began with cognitive interviews conducted with 21 respondents through RTI International. The aim was to get qualitative feedback on the proposed survey questions and to gain some preliminary knowledge of the public’s understanding of emerging concepts around news consumption in a digital age. After RTI staff conducted an expert review of the questionnaire, respondents took a draft version of the full news consumption questionnaire and were asked to talk through their responses, with additional probes about their understanding of key concepts. These results are included throughout the report to provide additional context for some findings.

Survey experiments were then conducted on two separate Ipsos KnowledgePanel surveys in April 2020, with roughly 1,000 respondents per survey split randomly across two different forms. The aim was to test different approaches to measuring news consumption (e.g., half were asked how often they get news on television, and half were asked how often in a typical week they get news on television) and to determine which version of certain questions would best reduce the overall incidence of reported news consumption, in light of research that has identified potential overreporting of news consumption in surveys.1 These results can primarily be found in Chapter 2.
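
As a rough illustration of how such a split-form comparison can be evaluated, the sketch below compares the share of respondents who report getting TV news under two randomly assigned question wordings, using a simple two-proportion test. The counts, wordings and the unweighted test are hypothetical simplifications; an actual analysis would need to account for the panel’s survey weights and design.

```python
# Hypothetical sketch of a split-form comparison: does adding a reference
# period ("in a typical week") change the share who report getting TV news?
# The counts below are invented for illustration only.
from statsmodels.stats.proportion import proportions_ztest

form_a_yes, form_a_n = 620, 1000   # form A: no reference period
form_b_yes, form_b_n = 585, 1000   # form B: "in a typical week" wording

stat, pvalue = proportions_ztest(
    count=[form_a_yes, form_b_yes],
    nobs=[form_a_n, form_b_n],
)

print(f"Form A: {form_a_yes / form_a_n:.1%} report TV news")
print(f"Form B: {form_b_yes / form_b_n:.1%} report TV news")
print(f"two-proportion z = {stat:.2f}, p = {pvalue:.3f}")
```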

Finally, 3,715 members of Ipsos’ KnowledgePanel responded to a custom national survey fielded June 2-11, 2020. Approximately half (N=1,694) had previously consented to have their digital activity tracked on one or more devices. This passive data was compared with their self-reported data from the survey. For instance, they were asked if they used the website or app of The New York Times in the past week, and this was compared with the records of their digital activity. In addition, these passively monitored panelists were compared with the general population sample to help understand the potential for using passive data to measure news consumption. These results can be found in Chapter 3.
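
To make the logic of that survey-to-passive comparison concrete, here is a minimal sketch that joins a self-reported item (whether a panelist said they visited The New York Times website or app in the past week) to passively logged domains and flags disagreements in both directions. The data, domain-matching rule and column names are illustrative assumptions, not the study’s actual data or code; and, as noted above, untracked devices and in-app activity mean a missing logged visit is not proof of overreporting.

```python
# Hypothetical sketch: compare self-reported New York Times use in the past
# week with passively logged website visits for the same panelists.
# All data, IDs and column names below are illustrative placeholders.
import pandas as pd

survey = pd.DataFrame({
    "panelist_id": [1, 2, 3, 4],
    "said_visited_nyt": [True, True, False, False],
})

passive = pd.DataFrame({
    "panelist_id": [1, 1, 3, 4],
    "domain": ["nytimes.com", "weather.com", "nytimes.com", "espn.com"],
})

# Panelists with at least one logged visit to the outlet's domain.
nyt_visitors = set(passive.loc[passive["domain"] == "nytimes.com", "panelist_id"])
survey["logged_visited_nyt"] = survey["panelist_id"].isin(nyt_visitors)

# Overreport: said yes, but no visit was logged on any tracked device.
# Underreport: said no, but a visit was logged.
survey["overreport"] = survey["said_visited_nyt"] & ~survey["logged_visited_nyt"]
survey["underreport"] = ~survey["said_visited_nyt"] & survey["logged_visited_nyt"]
print(survey)
```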

The remaining 2,021 respondents were a nationally representative general population sample of U.S. adults; their data is used mainly for general point estimates and over-time trend comparisons. Their results can be found in Chapter 1. All respondents took the survey online; adults who did not already have home internet access were provided with it during panel recruitment.
