A Snarketing post by Ron Shevlin, Director of Research at Cornerstone Advisors
Management ideas come and go.
Unless we’re talking about the net promoter score, which has come, but hasn’t left. It’s the cockroach of management metrics.
For the life of me, I can’t understand the continued interest in this metric, or as some delusional people call it, a system.
Intention to do anything, let alone to recommend a company one does business with, is useless as a metric.
Go ahead, tell your senior management team that, on a scale of 1 to 10, your intention to increase revenues and profitability this year is a 10. Unless you actually grow revenue and profitability, don’t hold your breath waiting for a good bonus.
But it’s easy to understand and inexpensive to implement, according to net promoter groupies.
Go to hell. That’s pretty easy to understand, too. Not very helpful in actually getting you there, though. Utility, that is, how much a metric helps you make management decisions, is the criterion for adopting a management metric, not how cheap it is to implement or how simple the definition is.
The utility of NPS just isn’t there anymore. There are better metrics out there. This post is about one of them: Referral Performance Score.
Zendesk recently asked consumers “How do you show loyalty to the firms you do business with?” The top answers: Providing referrals and buying more.
If providing referrals and buying more are the top ways in which consumers show loyalty, why would you measure anything else, let alone “intention” to refer? And why wouldn’t you measure and track actual referrals?
Bottom line: Financial institutions should stop wasting their time with the net promoter score, and start tracking and measuring actual referrals.
And they should go one step further, and start measuring the Referral Performance Score.
It’s very simple to calculate (if that’s your criterion for a metric): Multiply the percentage of customers that refer by the percentage of customers that grow their relationship.
Based on Aite Group research, FIs (banks and credit unions) increased the percentage of consumers that provided a referral in 2012 to 39% from 36% in 2011.
Even more impressive is that the percentage of customers that increased the number of accounts held with their primary FI grew from 10% to 16%.
Overall, the industry’s RPS increased from 353 to 547.
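For the spreadsheet-averse, the calculation is a one-liner. A minimal sketch in Python, using the rounded percentages above; note that the article’s figures (353 and 547) were presumably computed from unrounded underlying percentages, so plugging in the rounded numbers gives slightly different results:

```python
def referral_performance_score(pct_referred, pct_grew):
    """RPS = (% of customers who referred) x (% who grew their relationship)."""
    return pct_referred * pct_grew

# Rounded industry figures from the Aite Group data above:
print(referral_performance_score(36, 10))  # 2011: 360
print(referral_performance_score(39, 16))  # 2012: 624
```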
As a group, credit unions really stood out in this year’s RPS calculations. Although they didn’t expand the percentage of their members that referred the CU by much, they did significantly increase the percentage of members that added accounts.
If you were a credit union executive, which would you rather know: a) How your CU compares to other CUs in terms of the % of members that referred the CU and the % of members that grew their relationship, or b) How your CU compares to other CUs in terms of the % of members that intended to refer the CU on some subjective scale?
If you answered B, please leave this site.
And if you were a bank or CU exec, how would you know if your marketing efforts were paying off? The number of new customers is certainly a good measure, but in a down economy the focus may be on growing the relationship with existing customers.
But not all customers will be in the market for new accounts. For those who aren’t, getting referrals is a great way to grow the business.
I don’t dispute that measuring and tracking referrals is going to take some effort and investment. But if it’s the most important way that consumers show loyalty, isn’t it worth the effort?
By doing so, your FI gains the ability to calculate RPS for all customers, continuously, not just periodically and for a sample of customers, which is all that NPS or even customer satisfaction scores will give you.
The ability to slice and dice RPS for different customer segments (e.g., product ownership, demographics, etc.) makes the RPS a far more valuable and flexible metric than any survey-based metric.
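To make the slicing-and-dicing concrete, here is a minimal sketch, assuming a hypothetical customer record with `segment`, `referred`, and `grew` fields (the field names and data are illustrative, not from any real FI system):

```python
from collections import defaultdict

def segment_rps(customers):
    """Compute RPS per segment: (% who referred) x (% who grew)."""
    groups = defaultdict(list)
    for c in customers:
        groups[c["segment"]].append(c)
    scores = {}
    for seg, members in groups.items():
        pct_referred = 100 * sum(c["referred"] for c in members) / len(members)
        pct_grew = 100 * sum(c["grew"] for c in members) / len(members)
        scores[seg] = pct_referred * pct_grew
    return scores

# Illustrative data only:
customers = [
    {"segment": "checking", "referred": True,  "grew": True},
    {"segment": "checking", "referred": False, "grew": True},
    {"segment": "mortgage", "referred": True,  "grew": False},
    {"segment": "mortgage", "referred": True,  "grew": True},
]
print(segment_rps(customers))
```

Because the inputs are behaviors already sitting in your customer records, the same grouping works for any segmentation dimension, which is exactly what a sampled survey score can’t do.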
It’s time to kill that little cockroach of a management metric, the net promoter score.
Note: For those of you mathematically inclined, the “2013” stats cited refer to the period from Q2 2012 through the end of Q1 2013, which is why I referred to the 2013 numbers as 2012 behavior. I should’ve been a bit clearer on this.