Advocacy Drives Growth…Or Does It?

The following is reposted in its original form. The stimulus for doing so was provided by this article from Ken Roberts titled, “One Final Time – NPS is Not a Lead Indicator of Growth!” Roberts provides exactly the compelling proof that I was looking for when I wrote this post back in 2006. And let’s hope that the new proof will help reduce reliance on a misleading metric.

Original post

I don’t like it when people take things at face value. I am particularly annoyed when I find that I have done exactly that. Lulled into a false sense of security by big brand names like “Harvard” and “London School of Economics,” I bought into the idea that the Net Promoter Score might be a leading indicator of business performance. So, this is my chance to remind people of the dangers of taking things at face value.

Earlier this year Millward Brown published a Point of View on the topic of Word of Mouth, written by yours truly. In the POV I stated, “The London School of Economics Advocacy Growth Study 2005 confirmed previous work by Frederick Reichheld in the United States, finding that ‘Word of Mouth advocacy is linked to company growth in the UK; the more brand advocates you have, the higher your growth.’” At the time I should have listened to one of my colleagues who suggested that the validation of NPS against business performance was subject to debate. It was only later that I realized why.

The original work was published by Reichheld in the Harvard Business Review under the title “The One Number You Need to Grow.” The article reviews work done by Reichheld and colleagues at Bain to identify and validate a measure of consumer loyalty. Their Loyalty Acid Test matched the responses to 20 possible loyalty statements with the subsequent actual repeat purchasing and claimed referrals of the same respondents. With information on 4,000 customers, they built 14 case studies where there was a sufficient base size to measure the linkage between statements and behavior. The winning statement, “How likely are you to recommend (company X) to a friend or colleague?” ranked first or second in 11 of the 14 studies.

Subsequently, in Q1 2001 a company called Satmetrix started tracking the “would recommend” scores of thousands of people in more than a dozen industries. The brief survey asked people to rate companies with which they were familiar. The results from these questions were correlated with growth rates for the companies asked about. Reichheld states, “Research shows that, in most industries, there is a strong correlation between a company’s growth rate and the percentage of its customers who are ‘promoters’ – that is, those who say they are extremely likely to recommend the company to a friend or colleague.” (In fact, the score used was the now famous Net Promoter Score (NPS), in which the percentage of detractors is subtracted from the percentage of promoters.)
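To make that arithmetic concrete, here is a minimal sketch of the calculation, assuming the standard 0–10 likelihood-to-recommend scale in which 9–10 count as promoters, 7–8 as passives and 0–6 as detractors; the function name and the sample data are purely illustrative.

```python
def net_promoter_score(ratings):
    """Compute NPS from 0-10 likelihood-to-recommend ratings.

    Promoters (9-10) minus detractors (0-6), expressed as a percentage
    of all respondents; passives (7-8) only affect the denominator.
    """
    n = len(ratings)
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / n

# Illustrative sample: 50 promoters, 30 passives, 20 detractors
sample = [10] * 50 + [8] * 30 + [4] * 20
print(net_promoter_score(sample))  # 30.0
```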

This sounds very credible. A pilot test, lots of people interviewed, and results validated against independent measures of business performance. There is just one small fact that I had not previously realized. The dependent variable for the analysis was the company’s average growth rate over a three-year period, and that period not only overlapped the survey period but also extended back at least a year before it (1999 to 2002).

Let me ask you, does that not sound just a little odd? The article does not misstate the facts. The NPS score does correlate to the growth rates used in the analysis. But that analysis does not prove that NPS will predict future growth rates. In fact, a legitimate argument would be that the NPS score was the outcome of business performance which preceded the time of the surveys. Sound business practices and a great product or service have always been requirements for growth. Maybe both the growth rates and the willingness to recommend reflect that fact? We cannot tell from the article.
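A simple way to see why the timing matters: with brand-level data in hand, one could compare the correlation of NPS with growth in the period before the survey against its correlation with growth in the period after. Only the second comparison says anything about prediction. The sketch below uses entirely made-up brand figures; none of the numbers or column names come from the Bain, Satmetrix or LSE studies discussed here.

```python
import pandas as pd

# Hypothetical brand-level data for illustration only.
df = pd.DataFrame({
    "brand":        ["A", "B", "C", "D", "E"],
    "nps_2001":     [45, 10, 30, -5, 25],               # survey-year NPS
    "growth_prior": [0.12, 0.02, 0.08, -0.01, 0.06],    # growth before the survey
    "growth_next":  [0.05, 0.04, 0.03, 0.02, 0.07],     # growth after the survey
})

# A backward-looking validation correlates NPS with growth that has already
# happened; a forward-looking one correlates it with what happens next.
print("NPS vs prior growth:", round(df["nps_2001"].corr(df["growth_prior"]), 2))
print("NPS vs subsequent growth:", round(df["nps_2001"].corr(df["growth_next"]), 2))
```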

What I can say with absolute conviction is that the LSE Advocacy Growth Study does nothing to confirm that the NPS is predictive of business success, in spite of claiming, “For the brands in the survey, a one per cent increase in NPS equated to £8.82m additional sales, with a seven per cent increase yielding one per cent additional brand growth.” This sounds very conclusive until you realize that the NPS survey was conducted in 2005 but the growth rates were derived by comparing 2004 revenues with 2003. Again, we cannot tell from this data whether the NPS score follows business performance or leads it.

Over the years I have been involved in several exercises to validate Millward Brown’s research measures against business performance. These include validating Link pre-test results against short-term share change and the BrandDynamics Voltage score against annual market share change. In both cases, however, the validations have been forward-looking. For Link it is the share increase or decrease in the 8 weeks following the pre-tested ad going on air. For Voltage it is the annual brand share in the year following the survey. In our business there is no point in predicting history; it is future performance that matters.

Maybe the annoyance I feel at having taken the Harvard and LSE data at face value stems from the sense that different standards of proof apply to big academic brands and big market research brands. Both academic brands seem to find the correlation with previous business performance to be convincing proof of the value of NPS. I do not. Nor, I suspect, would many of our clients if they realized that the validation was backward-looking, not forward-looking.

In closing this post let me be clear. I am not saying that NPS is useless or invalid. Whether or not someone says they are willing to recommend your company or brand is a good measure of their satisfaction and predisposition. But is it really the one predictive number that you need? Sorry, from my point of view the jury is out until we see more compelling proof that it leads business performance rather than following it.

By Nigel Hollis
