Dramatic Accuracy Differences Among Vendors of Online Research

A Stanford University study comparing findings from top Internet research providers to Census data and other benchmarks shows that, for many data points that are critical to marketers, Knowledge Networks produced results that are dramatically more valid and reliable. By contrast, other well-known online vendors produced data that could lead to mistaken conclusions about consumer choices and marketing ROI.

The Stanford project compared seven of the best-known Internet research panels – GoZing*, Greenfield Online, Harris Interactive, Knowledge Networks, SPSS, SSI, and SurveyDirect – to a high-quality telephone study by SRBI and benchmarks from the Census, Centers for Disease Control, U.S. Department of State, and elsewhere. Results were for identical or nearly identical questions.

Comparisons show that Knowledge Networks – which maintains the only online panel based on statistically valid sampling rather than volunteering – hewed much more closely to the benchmarks. KN’s overall error rate for key demographic, attitude and product usage points was 3.9%; in fact, it outperformed the telephone study, which had a 6.0% error rate. (See Chart 1.) Greenfield posted a 7.7% rate – nearly double the KN figure – and Harris scored 6.5%.

“Better business decisions are grounded in accurate measurement,” said John Lewis, President and CEO of Knowledge Networks. “Marketers have to know that the data they use to make decisions is reliable for a given business issue, and that it will hold up over time; this study clearly shows that reliability can vary dramatically from vendor to vendor. We are proud that the Knowledge Networks Panel is the only statistically valid online panel, and that our clients can use our information without hesitation to answer important business questions, such as what products consumers are choosing, or the ROI on a certain marketing decision.”

The study used a disparate group of variables – some chosen for their direct comparability to Census data and other benchmarks – as proxies to get at the issue of data accuracy on an apples-to-apples basis. The pattern of superior Knowledge Networks reliability, when compared to benchmarks, held true for many of these illustrative metrics, including:

Membership in a frequent flyer program: KN (22.6%) was by far the closest among online firms in replicating the benchmark of 17.8%; the other online vendors averaged 32.7% – nearly double the benchmark. (See Table 1.)

Smoke every day or occasionally: KN (24%) came closer to the Centers for Disease Control (CDC) benchmark of 21.6% than the SRBI telephone survey (25.7%). The average for other online vendors was 29.7%.

Have current driver’s license: KN was within a percentage point of the U.S. Census/Statistical Abstract benchmark (89.1% benchmark vs. 88.9% KN), while all other firms averaged 94.2%.
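The "error rate" comparisons above amount to averaging each panel's absolute deviation from the external benchmarks. A minimal sketch of that calculation, using only the three items listed above (the study's overall rates were computed across many more data points, so these averages will not match the 3.9% and 7.7% figures):

```python
# Average absolute deviation (in percentage points) between a panel's
# estimates and external benchmarks. Figures are the three items cited
# above; the study's full error rates used many more items.

def mean_absolute_error(estimates, benchmarks):
    """Average absolute deviation, in percentage points."""
    assert len(estimates) == len(benchmarks)
    return sum(abs(e - b) for e, b in zip(estimates, benchmarks)) / len(estimates)

# Benchmarks: frequent flyer (17.8), smoking (21.6), driver's license (89.1).
benchmarks = [17.8, 21.6, 89.1]
kn_panel   = [22.6, 24.0, 88.9]  # Knowledge Networks estimates
other_avg  = [32.7, 29.7, 94.2]  # average of the other online vendors

print(round(mean_absolute_error(kn_panel, benchmarks), 1))   # 2.5
print(round(mean_absolute_error(other_avg, benchmarks), 1))  # 9.4
```

Even on just these three items, the gap between the two columns of estimates is visible in a single averaged number.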

The study also looked at many data points for which no external benchmark exists; treating the high-quality telephone data as the benchmark in these cases provides additional evidence of the difference between Knowledge Networks and the online firms that rely on "volunteer" respondents. For example:

Volunteer samples overrepresent coupon use, in some cases by as much as 100%: while the KN (49.6%) and telephone (45.7%) data show that nearly half of respondents do not use coupons in a typical week, the volunteer firms produced a much lower average estimate of 26.7% non-users.

Volunteer samples overrepresent consumers who consider themselves very comfortable with computer technology: while the KN and telephone surveys arrived at statistically identical estimates of about 48%, the volunteer firms averaged 78.6% who consider themselves technologically adept.

In addition, the Stanford research shows how low in-survey cooperation rates dramatically reduce the effective panel size of all-volunteer panels. (See Chart 3.) The Knowledge Networks in-survey cooperation rate was 73%, compared to 2% for Greenfield – meaning that 98% of those who received the Greenfield survey did not complete it. The importance of panel size has at times been highly overstated; these results make clear that a large panel with a very low cooperation rate can introduce greater bias, rendering survey results less reliable.
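The effective-panel-size point is simple arithmetic: a panel's usable sample is its nominal size multiplied by its in-survey cooperation rate. A minimal sketch – the 73% and 2% cooperation rates come from the study, but the nominal panel sizes below are hypothetical, chosen only to illustrate the effect:

```python
# Effective panel size = nominal panel size x in-survey cooperation rate.
# Cooperation rates (73%, 2%) are from the study; the nominal panel
# sizes are hypothetical, for illustration only.

def effective_size(nominal_size, cooperation_rate):
    return int(nominal_size * cooperation_rate)

print(effective_size(40_000, 0.73))     # smaller panel, 73% cooperation -> 29200
print(effective_size(1_000_000, 0.02))  # much larger panel, 2% cooperation -> 20000
```

Under these illustrative numbers, a panel one twenty-fifth the nominal size still yields a larger effective sample once cooperation rates are taken into account.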
