Customer Service in the Age of ChatGPT: Don’t Take AI Too Far in the Call Center

Everyone has a story about a frustrating experience with a customer service department. We hate when humans act like robots, so could artificial intelligence-based innovations like ChatGPT really provide customer service in a call center? And if so, how could it help or hurt the banking experience?

When ChatGPT burst into public consciousness in late 2022, it signaled the next round of the “People vs. Machines” debate — always a lively discussion in the customer service space. Who wouldn’t want to skip the wait for a call center representative?

We don’t know for sure what’s ahead of us, but we do know what’s behind us. If history is a guide — and I think it is — we should leverage past experience when considering how to blend technology and humanity most effectively.

Tech innovations introduce new efficiencies and, understandably, companies sometimes try to extend that value beyond its natural limits. It’s tempting to conclude that a new tool can sweep aside the shortcomings of human agents without significantly impacting service quality. Offshore call centers and early online chatbots come to mind. Each has advantages, but also limits that we should recognize and learn from.

The Potential to Upgrade the Customer Service Experience

We take a very high level of customer experience for granted — where it takes only minutes to order a product and hours to receive it on our doorstep. Customer service, in contrast, lags far behind: it remains slow and frustrating.

Rapidly maturing artificial intelligence technology like ChatGPT can change that, but financial institutions need to understand what these technologies are (turbo-charged data-processing prediction machines) and what they’re not (blanket substitutes for human intelligence).

AI is really just a model that is very good at making predictions. It responds to a request by searching through available data, identifying positive outcomes from similar situations that occurred before, and then digging deeper to determine what conditions or actions preceded those positive outcomes. That intelligence is applied to the current request to predict what conditions or actions will produce a positive outcome now.

Over time these machine learning models catalog enormous quantities of data, deepening their source pool and strengthening the reliability of their predictions.

Learning, however, also requires failure, and a lot of it. Even when AI has enough training data to respond correctly most of the time, it may not fully address the intention or context of a question.
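To make the "prediction machine" idea concrete, here is a deliberately tiny sketch of predicting from similar past cases — a toy nearest-neighbor routine, not how ChatGPT or any production call-center system actually works. All data, keywords, and action names are hypothetical:

```python
# Toy illustration of "prediction from similar past cases": score past
# interactions by similarity to a new request, then recommend the action
# that most often led to a positive outcome among the closest matches.
# The history, keywords, and actions below are invented for illustration.

from collections import Counter

# Hypothetical history: (keywords of the request, action taken, outcome)
history = [
    ({"card", "fraud", "charge"}, "escalate_to_agent", "positive"),
    ({"card", "lost"}, "escalate_to_agent", "positive"),
    ({"balance", "check"}, "self_service", "positive"),
    ({"balance", "transfer"}, "self_service", "negative"),
    ({"fraud", "dispute"}, "escalate_to_agent", "positive"),
    ({"hours", "branch"}, "self_service", "positive"),
]

def recommend(request_keywords, k=3):
    """Return the action most associated with positive outcomes
    among the k past requests most similar to this one."""
    def similarity(case):
        past_keywords, _, _ = case
        # Jaccard similarity: shared keywords over all keywords
        overlap = len(request_keywords & past_keywords)
        return overlap / len(request_keywords | past_keywords)

    nearest = sorted(history, key=similarity, reverse=True)[:k]
    votes = Counter(action for _, action, outcome in nearest
                    if outcome == "positive")
    # With no positive precedent, fall back to a human agent
    return votes.most_common(1)[0][0] if votes else "escalate_to_agent"

print(recommend({"fraud", "charge", "card"}))  # prints "escalate_to_agent"
```

The sketch also hints at why more data strengthens predictions (more, closer matches to vote) and why failure matters: the one negative outcome in the history is excluded from the vote, which is exactly the kind of signal these models learn from.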


The Familiarity Factor in Customer Trust

The fact that AI can feel off is no small issue.

Banking has seen this before in the early 2000s when some financial services providers joined consumer goods manufacturers and others in shifting their call center operations to countries where the labor force speaks English as a second language. The call centers cost less, but customer trust suffered — not because of poor performance, but because of human biases. Studies show that people subconsciously discount information from others who sound “different.”

Trust has always been the currency of customer service. More recently, companies using “conversation bots” that can be programmed with region-specific accents have experienced higher first-call resolution rates.

But no matter how familiar their accents can be made to sound, conversation bots are still discernibly non-human. Just toss a curveball into the conversation and watch your bot unravel. As was the case for earlier chatbots, conversation bots are great at dispatching simple self-service requests, but they're a poor substitute for live agents in nuanced situations that require judgment.


Judgment Matters in Customer Service

Judgment — a uniquely human capability — is the core of customer service. We know it when we see it, and also when we don't. Early engagements with ChatGPT have demonstrated the technology's potential to alarm rather than reassure.

For example, an in-depth interaction with Microsoft’s new AI-powered Bing search engine recently left a New York Times journalist feeling “deeply unsettled and even frightened.” The journalist likened the jarring experience to an exchange with a “cheerful but erratic reference librarian” blended with a “manic-depressive teenager,” concluding that it was “not ready for human contact … or maybe we humans are not ready for it.”

This example and others indicate that new AI-based tools are still works in progress, with bugs to work out and boundaries to define before most of us will accept them as legitimate and trustworthy.

I don’t mind using a chatbot to book a hotel room, because all I need is efficiency and chatbots are very efficient for that sort of transaction. But recently I booked a hotel room through what turned out to be a fraudulent website, and when I arrived with my family at midnight to check in, the hotel didn’t recognize my bogus confirmation number. We were in town for a major event that drew thousands of other people, so hotel rooms were scarce, and we were really stuck.

I immediately called my credit card provider’s customer service number to cancel the fraudulent charge and then I called around to find us another hotel room.


For these calls I bypassed the chatbots and went straight to live agents. Only a human could grasp the context and urgency of my family's situation and respond with the empathy and flexibility that chatbots completely lack.

I needed to know that I could trust my credit card provider to accurately process the details and reverse the fraudulent charge to my credit card. AI is not (yet) reliably good at accuracy, and it has no established credibility when it comes to trust.


Blending Technology and Humanity in Call Centers

My credit card emergency illustrates a critical point. Consumers want efficiency, accuracy and trust, but not always at the same time and not always in that order. Circumstances dictate which is a priority in any given situation.

As AI matures, contact centers should continue to route transactional and computational tasks to chatbots and reserve more complex requests for human agents. They should also leverage AI’s awesome capacity to provide stronger support to help agents manage those higher-stakes interactions.

Huge benefits lie ahead, as long as we deploy AI to work in conjunction with, rather than in place of, human agents.

About the author:
Matt McConnell is chairman and chief executive officer of Intradiem, a provider of intelligent automation solutions for customer service teams.

This article was originally published on . All content © 2024 by The Financial Brand and may not be reproduced by any means without permission.