5 things the Wall Street Journal got wrong about NPS®

Net Promoter® Score (NPS®) – an increasingly popular measure of customer satisfaction – originally gained traction in a December 2003 Harvard Business Review article, "The One Number You Need to Grow," written by Bain consultant Fred Reichheld. Since then, NPS has consistently gained in popularity alongside Customer Experience (CX) as more and more businesses adapt to competing in a world of service transparency, where product and service quality become the true differentiator in the minds of buyers. In fact, a quick look at Google Trends yields very clear evidence that both Net Promoter and Customer Experience have risen significantly as search topics over the past 15 years.

A recent Wall Street Journal article on the 'cultlike following' of Net Promoter Score (NPS) amongst corporate leaders, titled "The Dubious Management Fad Sweeping Corporate America," brings the corporate adoption of this metric to the forefront. I highly recommend you read the full article (subscription required), as its authors Khadeeja Safdar and Inti Pacheco have clearly done their homework. And while the article is well researched and well written, here are 5 things they got wrong (and one they got right) about the Net Promoter system.

5 things the WSJ got wrong about NPS

  1. NPS methodology is more than just the score. Most people debate the efficacy of the question (likelihood to recommend) and the scale (a 0-10 scale, translated to a score from -100 to +100) and miss the biggest impact NPS has had on customer feedback. Prior to NPS, satisfaction surveys were often 20-30 questions long, requiring customers to spend 10 minutes or more to provide their feedback. After weeks in the field, a team of researchers took additional time to distill the findings into a report. By reducing the survey to a single scaled response and fewer than 10 questions, the time lag between feedback and action drops dramatically, turning what used to be a marketing-only tool into a truly effective one for client service teams, operations leaders, and the front lines of business service firms.
  2. Low response rates are helped, not hurt, by NPS methodology. The article mentions response rates of 5%, and while that may be a true benchmark in some industries, satisfaction survey response rate benchmarks more typically range from 10 to 20 percent. At ClearlyRated, one of our greatest services to our clients is helping them achieve an average response rate of more than 30 percent. You can read my colleague Kat Kocurek's post on best practices to increase response rate here. In every instance where we have transitioned a client from a 20+ question survey to an NPS methodology, the response rate has improved, often more than doubling their prior rate. In a B2B environment, when each dissatisfied client may be worth tens of thousands of dollars (or much more) to the firm, each response – good or bad – is a gift. A slight decline in precision for a significant increase in responses is a trade most firm leaders readily make.
  3. NPS is a proven leading indicator of growth – at least in many industries. The article references an academic study that found NPS 'doesn't correlate with revenue or predict customer behavior any better than other survey-based metrics'. There are two problems with this statement. First, given the simplicity and efficiency of employing the NPS methodology, the bar should be higher. An argument against a well-designed and well-executed survey program that employs NPS should also include an alternative that better predicts customer behavior and firm growth, while still being easily communicated within and outside of the organization. Without an alternative metric to gauge customer satisfaction that is demonstrably better (at predicting behavior, driving change, and improving service delivery), it is hard to argue against NPS.
    Second, in the space we operate in (business and professional services), we have proven the correlation many times over the past decade. Our most recent study, which tracked nearly 4,000 clients' spend over 3 years with two large recruiting firms, found that detractors were 56% less likely to order in the next 12 months, and even if they didn't outright leave the firm, on average they decreased their spend by 14.1%. Promoters, by contrast, increased their spend by an average of 8.2% in the following year. In the accounting industry, corporate buyers are nearly 4x as likely to start their search for an accounting firm with a referral as with any other source. ClearlyRated research in B2B service industries shows a similar pattern in industries such as law, insurance, and staffing. For industries like these, where referrals drive growth, asking clients their likelihood to refer makes intuitive sense – and predicts buyer behavior.
  4. Transparency around NPS scores is a good thing – for investors and buyers. The article's analysis of more than 40,000 transcripts of conference calls discussing quarterly earnings underpins the argument that adoption has rapidly increased – and they are right. However, the tone of the article suggests this is a bad thing. While I agree that most leaders of publicly traded companies cite the metric only when scores indicate a positive direction at the organization, we should all be demanding more transparency, not less. A study by the Pew Research Center found '82% of U.S. adults say they at least sometimes read online customer ratings or reviews before purchasing items for the first time, including 40% who say they always or almost always do so.' While consumer goods and retail establishments understand how impactful reviews can be on their business, in B2B industries, where decisions are measured not in dollars but in thousands of dollars, more firms measuring (and sharing) the satisfaction of their existing clients is a good thing. In the B2B space, this is changing too. Glassdoor has brought transparency and accountability to how companies treat their employees, while G2 is redefining the B2B software buying experience with its in-depth ratings and reviews. More information on actual service from actual customers is a good thing. Period.
    And if you still don't think it is a useful tool, check out the chart below comparing the growth of the five publicly traded companies that mention NPS the most (according to the WSJ article) against the S&P 500. Perhaps instead of lamenting their implied overuse of the metric, we should be applauding the shareholder returns they've created by building a culture around customer experience.
  5. NPS can be a powerful motivator for employees – if used correctly. The WSJ article cites examples of manipulation of NPS scores by Best Buy employees, amongst other allegations of misuse centered around leveraging NPS for bonuses and performance reviews. These issues are more indicative of a misalignment of incentives, and actually have little to do with NPS as a metric. The same issues would occur if the incentive were based on 5-star reviews or any other CSat metric. Our counsel has always been to focus on service wins and improvement, much more than laboring over service failures or the NPS score alone. The ClearlyRated survey program gives clients a chance to recognize employees who have gone above and beyond, and shares that recognition throughout the firm. Many employees at our client organizations have come to both depend on and look forward to the survey feedback, as it provides them with validation that what they do matters. It helps them see the difference they are making in the lives of the people they serve for 40+ hours each week. My point is this: if your employees fear feedback from the customers they serve, you have undermined the most valuable aspect of your survey program. It should inform and inspire front-line staff, not demoralize them. That holds true of any methodology, including NPS.
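For readers who haven't worked with the metric directly, the arithmetic behind the -100 to +100 score mentioned above is simple. A minimal Python sketch, using the standard NPS cutoffs (promoters answer 9-10, detractors 0-6, passives 7-8):

```python
def nps(responses):
    """Compute a Net Promoter Score from 0-10 'likelihood to recommend' answers.

    Promoters (9-10) and detractors (0-6) drive the score; passives (7-8)
    count only in the denominator. The result ranges from -100 to +100.
    """
    if not responses:
        raise ValueError("NPS requires at least one response")
    promoters = sum(1 for r in responses if r >= 9)
    detractors = sum(1 for r in responses if r <= 6)
    return round(100 * (promoters - detractors) / len(responses))

# Example: 5 promoters, 3 passives, 2 detractors out of 10 responses
print(nps([10, 9, 9, 10, 9, 8, 7, 8, 4, 6]))  # → 30
```

Note that two very different response distributions can produce the same score, which is part of the statistical critique discussed below – and also why a single, easily communicated number travels so well inside an organization.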

While I'm quick to point out the areas where I disagree with the Wall Street Journal article, it's also important to point out one area where I absolutely agree with them: the NPS scale is an imperfect statistical metric. Since its inception, academics and researchers alike have (correctly) pointed out that calculating a score from -100 to +100 from an 11-point scaled survey response introduces 'noise' to the score in the form of additional variance. What that critique doesn't acknowledge is that EVERY metric and scale has issues. Whole books and HBR articles have been written on the flaw of averages. What statisticians fail to capture with NPS – and why operators tend to love it – is that this statistical flaw can, at times, actually be helpful operationally, as it reinforces the lesson that EVERY interaction matters. Having a single number, with the right context and benchmarks for comparison, can inspire action on the front lines of customer service – a transformational impact few research methodologies can claim.

In summary, what I think the original NPS design got right was trading a slight decrease in precision for simplicity. Is it perfect? Far from it. Is it effective? Within B2B service industries, where service is the primary differentiator, absolutely.

Interested in implementing a Net Promoter® survey at your firm? Contact ClearlyRated today!
