I hope that upon reading the title of this post, your first thought was: Better for what?
Academics from a couple of European universities conducted a multi-industry study to determine which of three customer feedback metrics–satisfaction, NPS, and customer effort score–was the best predictor of customer retention. Their findings should be of interest to bank (and credit union) marketers.
According to the study’s abstract:
“This study systematically compares different customer feedback metrics (CFMs)—namely customer satisfaction, the Net Promoter Score (NPS), and the Customer Effort Score (CES)—to test their ability to predict retention across a wide range of industries. Overall, we find that the top-2-box customer satisfaction performs best for predicting customer retention. In addition, our results show that the CES in itself has little to no predictive power and performs the worst of all CFMs studied. However, the best CFM does differ depending on industry.”
The study also found that a one-point increase in an individual customer’s satisfaction score or NPS is less likely to improve that customer’s odds of sticking around than a one-point increase in the company’s average satisfaction (or NPS) score is to improve the firm’s overall retention rate.
Maybe I’m missing something here, but that seems pretty intuitive, considering how many customers would have to raise their ratings for the company as a whole to see a one-point increase.
The authors argue, however, that one implication of this finding is that:
“A customer at a high-scoring firm has few alternative companies to do business with and therefore is less likely to churn, even if he or she is relatively unsatisfied.”
Translation: If your bank or credit union is truly one of the high performers in customer satisfaction (or NPS), then your customers with lower ratings may be less of an attrition risk than you think. The study doesn’t go on to say this, but it could mean that your lower-rating customers are actually less of an attrition risk than the middle- (or possibly higher-) rating customers at another bank that doesn’t have as high an overall customer satisfaction or NPS score.
The banking-specific results are interesting, as well.
Consistent with the overall findings, the study found that–at the customer level–top-2-box satisfaction was the best predictor of retention in the banking industry, followed by CES.
The findings regarding NPS are mixed. The study looked at two different approaches to calculating NPS: 1) The “official” score (percentage of promoters minus percentage of detractors), and 2) The NPS value (the average rating on the 0-to-10 likelihood-to-recommend scale).
As a predictor, the NPS value was statistically significant, but not as strong as customer satisfaction or CES. The official NPS score, however, was not a statistically significant predictor at all.
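To make the distinction between the two NPS calculations (and top-2-box satisfaction) concrete, here’s a minimal sketch with hypothetical survey data. The function names and sample ratings are mine, not the study’s; they just illustrate the arithmetic behind each metric.

```python
def top_2_box_share(sat_scores, scale_max=5):
    """Share of respondents giving one of the top two ratings
    (e.g., a 4 or 5 on a 5-point satisfaction scale)."""
    return sum(1 for s in sat_scores if s >= scale_max - 1) / len(sat_scores)

def official_nps(ltr_scores):
    """'Official' NPS: percent promoters (9-10) minus percent
    detractors (0-6) on the 0-to-10 likelihood-to-recommend scale."""
    promoters = sum(1 for s in ltr_scores if s >= 9)
    detractors = sum(1 for s in ltr_scores if s <= 6)
    return 100 * (promoters - detractors) / len(ltr_scores)

def nps_value(ltr_scores):
    """NPS 'value': the plain average of the 0-to-10 ratings."""
    return sum(ltr_scores) / len(ltr_scores)

sat = [5, 4, 3, 5, 2]       # hypothetical 1-5 satisfaction ratings
ltr = [10, 9, 7, 8, 3, 6]   # hypothetical 0-10 likelihood-to-recommend ratings

print(top_2_box_share(sat))  # 0.6 (three of five gave a 4 or 5)
print(official_nps(ltr))     # 0.0 (two promoters, two detractors cancel out)
print(round(nps_value(ltr), 2))  # 7.17
```

Note how the official score throws away the "passives" (7s and 8s) entirely, while the NPS value keeps every rating–one reason the two versions can behave differently as predictors.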
But here’s the kicker: At the firm level, for the banking industry, not one of the three metrics was a statistically significant predictor of retention.
What this means is that while an individual banking customer’s satisfaction score may predict his or her likelihood of staying with the bank, the bank’s overall satisfaction score is not a particularly good predictor of the bank’s overall retention rate.
And the same holds true for NPS and CES.
Smart managers address (at least) two questions before implementing a performance metric: 1) Why are we using this metric instead of some other metric? and 2) What are we going to do with this metric after measuring it?
Based on the research conducted by the European professors, you have to wonder whether banks and credit unions using CES bothered to answer these two questions–or whether the FIs using NPS did, either. Seems to me that many folks in the industry (and other industries, for that matter) bought into the concept of NPS without a solid theoretical base for doing so.
But I think we can shoot a few arrows into the European study.
All three metrics evaluated are based on consumer attitudes and are captured via surveys.
This opens up questions as to how representative the samples were, but I’m willing to let that slide, because I think there’s a bigger issue: a survey can only capture a respondent’s attitude–whether it’s satisfaction or likelihood to refer–at a single point in time.
There are so many factors that could influence someone’s response at a particular point in time. Did they just have a great/terrible experience with the firm being rated? Did they get into a car accident that morning and hate everybody at the moment? You get the picture, and you could come up with another 100 reasons that might skew attitudes at a given moment.
Bottom line: All of this leads me to my longstanding rants that Behavior Trumps Intentions and Behavior Trumps Attitudes.
This is why I’m advocating for a new customer feedback metric–one that doesn’t require asking the customer for any feedback at all, but instead looks at what they do–their behavior–to predict their likelihood to stick around. I’m not going to belabor the details of the Referral Performance Score here. There are other posts for that.
Whether or not you adopt RPS is up to you. But smart marketers should take another look at their existing customer feedback metrics and ask whether those metrics actually predict what they’re supposed to predict.