Chase’s AI Chief Explains Why the Biggest Banks Will Win the AI Race (Probably)

Chase's Teresa Heitsenrether thinks the AI revolution will be won by the incumbents with the deepest data troves to fuel the latest tools. Risk management remains a key task, but one that existing frameworks can be adapted to handle.

OpenAI launched ChatGPT in its initial form at the end of November 2022. In the two years since, generative AI has exploded.

“It’s really been an amazing trajectory, how models have evolved and the capabilities they have,” says Teresa Heitsenrether, EVP and chief data and analytics officer at JPMorgan Chase.

Chase named Heitsenrether to her post in June 2023. In a rare public interview, Heitsenrether unpacked Chase’s enterprise-wide AI strategy at The Clearing House Annual Conference this week in New York. She spoke with Bijan Chowdhury, TCH’s SVP, core product engineering, and fellow panelist Jason Kwon, chief strategy officer at OpenAI.

As Heitsenrether describes it, since GenAI expanded the scope and breadth of AI’s potential in banking, data analytics and AI have proven to be not only a technology and risk challenge, but also a management challenge, specifically a people-management challenge.

On one hand, “we want people to adopt the technology and be excited about it, and one of the best ways to do that is to put their hands on it and use it,” says Heitsenrether.

On the other hand, employee enthusiasm has had to be contained as more people learned about GenAI’s potential, says Heitsenrether, and that has meant walking a fine line: Chase doesn’t want to stifle innovation, but innovation needs directing.

“You want to avoid the proverbial ‘thousand flowers blooming’,” says Heitsenrether. “You want to make sure that you’re focusing on the things that are going to add real enterprise value.”

Heitsenrether says it’s important to remember that artificial intelligence overall is not new to banking. “If you look at traditional predictive analytics and machine learning, banks have been using these capabilities in fraud detection, marketing and operations [for years],” says Heitsenrether. What has really changed is how GenAI suddenly expanded the tasks AI can take on, multiplying its potential scale.

“When you think about its capabilities directly out of the box, and how applicable they are to so many different facets of banking, it really starts to open up the aperture in terms of how much value you can deliver,” says Heitsenrether.

Early Days for Today’s Artificial Intelligence at Chase

Heitsenrether believes that it is still “early innings” for what the latest wrinkles on AI can do for banking. Her conference co-panelist, Jason Kwon, agrees.

“I have no idea where we are on the Gartner ‘hype cycle’,” says Kwon. But uptake of generative AI has gone from zero to hundreds of millions of weekly active users in a short period of time.

Significantly, Heitsenrether says Chase has chiefly been using mainstream forms of the technology, rather than banking-specialized versions, though with an eye toward training GenAI on banking-specific functions.

Heitsenrether believes the current proliferation of AI models will, in time, resolve into “a set number of models that are going to have a lot of similar capabilities. The differentiating factor is the data.”

“This is a technology that favors the incumbent to some degree,” says Heitsenrether. Banks have been regarded as deep stores of actionable data since at least the early days of fintech. The trick was turning all that raw material into something more valuable.

Heitsenrether sees data coming to be regarded as “a really valuable asset that goes into the models,” producing insights and innovation. At the same time, she says, a bank in Chase’s league would never come to depend on a single AI provider.

“We want the flexibility to be able to swap models in and out, to choose the right one for the right use,” Heitsenrether says.
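
In engineering terms, that flexibility typically means an abstraction layer between applications and model providers, so a model can be replaced without rewriting every system that calls it. Here is a minimal Python sketch of the idea; the adapter classes and registry are hypothetical illustrations, not a description of Chase’s actual stack:

```python
# Hypothetical sketch of a provider-agnostic model layer, illustrating the
# "swap models in and out" idea. Adapter names and the registry are
# illustrative only, not Chase's architecture.

from abc import ABC, abstractmethod

class ChatModel(ABC):
    """Common interface so application code never binds to one provider."""

    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class ProviderAAdapter(ChatModel):
    def complete(self, prompt: str) -> str:
        # A real adapter would call the vendor's API here; stubbed for the sketch.
        return f"[provider A] {prompt[:40]}..."

class ProviderBAdapter(ChatModel):
    def complete(self, prompt: str) -> str:
        return f"[provider B] {prompt[:40]}..."

# "The right one for the right use" becomes a configuration lookup, so
# swapping a model means editing one mapping, not every caller.
MODEL_REGISTRY: dict[str, ChatModel] = {
    "document_summary": ProviderAAdapter(),
    "draft_presentation": ProviderBAdapter(),
}

def complete(use_case: str, prompt: str) -> str:
    return MODEL_REGISTRY[use_case].complete(prompt)

print(complete("document_summary", "Summarize the quarterly risk report."))
```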

Read more: Inside Klarna’s High-Profile Giant Bet on AI

A Glimpse into Regulation and Supervision of AI at Chase

How regulators approach AI is still evolving along with the technology, both in the rules and policies they write and in the day-to-day supervision of banks.

“The technology is moving so quickly,” says Heitsenrether, “that regulating the technology almost feels like a losing battle.”

That doesn’t mean regulators aren’t delving into AI. Heitsenrether and her team spend a good deal of time with regulators around the world.

“They’re very interested in learning and leveraging the resources that we have, to understand the technology,” she says. “I find that incredibly encouraging. In most of the conversations that I’ve been having, the regulators seem to have what I think is the right mindset: They are focused on the outcome.”


What she means by that is this: However a bank accomplishes a task, whatever the tool or the technique, the institution remains responsible for the end result.

“So, the outcome is treating investors fairly, assuring fair access to housing or credit — all of the things that are already tenets of the banking and financial services industry,” says Heitsenrether.

OpenAI’s Kwon addressed regulation from the vantage point of AI technology’s continuing development.

“A really important principle is that if there’s going to be more regulation [of AI] then it has to be very sensitive to the size of the company or to the amount of resources that the developer has at their disposal,” says Kwon. He maintained that continued innovation in AI is important to the U.S.

“If you actually pass a bunch of rules that are very, very prescriptive with the technology, you also risk locking in even bigger competitors, bigger companies, into a particular technology path,” Kwon insists. As a result, he says, “they may not continue to innovate.”

Kwon thinks governments must focus first on what risks they are trying to address with laws or regulations pertaining to AI.

“Can they be targeted about it, rather than having generalized concerns about risks and thinking that something needs to be regulated simply because of such general concerns?” he says. As part of what he described as a “surgical” approach to AI law, he suggested that different issues (such as privacy) and different industries (such as health care, financial services and legal) be treated separately, each to the degree deemed appropriate.


How Chase Addresses AI Risk Management

Heitsenrether, picking up on her point that banking is already highly regulated, says it’s important to ask whether the existing regulator-mandated frameworks for technology and financial-model risk still work in the current context.

“So far,” she says, “the answer seems to be yes.” During the discussion she explained how Chase handles this challenge.

TCH’s Chowdhury asked Heitsenrether whether AI represents a new type of risk or an extension of risks that major banks must already guard against.

“This was the topic of many, many hours of conversation as we thought about the introduction of [more AI], particularly large language models,” Heitsenrether says. After much deliberation, she says, Chase decided that the risk framework it already had in place suited AI risks as well.

Ultimately, she says, it became a matter of where the usual risk framework places its emphasis, and of the checks and controls used to apply it to AI activities. Here’s how she spelled it out, with a kicker at the end about GenAI.

Model risk governance: Picking the right model for the task, which entails studying the model and evaluating the results of using it. This goes back to the issue of “outcomes.”

Technology risk management: Key elements here concern access controls and cybersecurity. One important issue concerns the use of cloud computing. Some data is so sensitive, says Heitsenrether, that “we would never put it in the cloud.”

Data risk management: How data is accessed, managed and used by AI, and where data streams and AI don’t touch.

Heitsenrether says Chase has authorized the use of large language models (LLMs) within the bank only in recent months. That came in spite, she says, “of a lot of unhappy people calling me for months about ‘When are we going to get this?’.” Approximately 200,000 people, out of Chase’s worldwide employee base of around 300,000, can now use LLMs through an AI platform the bank makes available for varying purposes, from preparing presentations to summarizing documents.

Heitsenrether says the bank doesn’t want to squander customer trust and invite reputational damage. So, among other steps, “we make sure our data is not being used to train [LLM] models nor any devices.”

Operational risk management: This has long been a key element of regulators’ risk management hierarchy, but Heitsenrether says that in the case of GenAI, operational risk has become “the dominant gene,” ranking above model risk.

The reason is significant. With GenAI, says Heitsenrether, “you can’t really explain what’s happening, all the time. You have ‘hallucinations.’ … That might be fine if you are creating a travel agenda for one of our credit card customers, but that might not be fine for making a lending decision where you have to be able to show that you can explain how the decision was made and replicate it.”
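
Her example implies a simple control pattern: classify each use case by the stakes of its outcome before it ever reaches a generative model. A hypothetical Python sketch of such a gate follows; the tiers and routing names are illustrative, not Chase’s actual framework:

```python
# Hypothetical sketch of gating GenAI use by risk tier, echoing the
# travel-agenda vs. lending-decision distinction. Names are illustrative
# only, not Chase's controls.

from enum import Enum

class RiskTier(Enum):
    LOW = "low"    # e.g., drafting a travel agenda for a cardholder
    HIGH = "high"  # e.g., a lending decision that must be explainable

def route(task: str, tier: RiskTier) -> str:
    if tier is RiskTier.HIGH:
        # High-stakes outcomes stay with explainable, replicable models
        # (e.g., a traditional scorecard), never a free-form LLM.
        return f"{task} -> explainable_model"
    # Low-stakes generative tasks may use an LLM, with human review downstream.
    return f"{task} -> llm_with_human_review"

print(route("travel agenda", RiskTier.LOW))
print(route("credit decision", RiskTier.HIGH))
```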

OpenAI’s Kwon says that the explainability of AI decisions and outcomes depends on what is meant by the term in context. He used an analogy to make his point.

“If you do something and I ask you why you did it, you’ll be able to give me an explanation. The models are capable of doing things like that,” says Kwon. “If I asked you exactly which neurons fired in your head, you’re not going to be able to tell me.” He says that getting AI answers on the latter is still under development.

Read more: Digital is Draining Banks’ Emotional Connections with Customers. GenAI May Make Things Worse

About Those Hallucinations…

Again relying on an analogy, Kwon says that “the hallucination issue really has to do with the reliability of sourcing” for GenAI.

Kwon says that if he asked someone a factual question, they might answer off the top of their head, and, if they’d studied the issue, give a reasonably accurate answer that still might not be 100% right. “But if I give you a piece of reference material and say to answer the question by first looking at the reference material and then give me an answer, then the accuracy rate should go up,” he says.

Kwon notes that AI science has developed a technique called “RAG,” retrieval-augmented generation, which he says can improve results. (Very simply, RAG retrieves material from a designated trusted source and supplies it to the model to ground its answer, somewhat like telling a student to rely on one particular dictionary.) He predicted that techniques built on this concept will see wider use in the future.
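
To make the concept concrete, here is a toy RAG sketch in Python. The tiny hand-built corpus, the keyword-overlap retriever, and the stubbed model call are all illustrative assumptions; production systems typically retrieve with embedding search over a vetted document store:

```python
# Toy retrieval-augmented generation (RAG) sketch: retrieve passages from a
# trusted corpus, then ground the model's answer in them. Everything here is
# illustrative, including the stand-in generate() call.

TRUSTED_CORPUS = [
    "Regulation Z implements the Truth in Lending Act.",
    "The prime rate is a benchmark for many consumer loan rates.",
]

def retrieve(question: str, k: int = 1) -> list[str]:
    # Rank passages by how many words they share with the question.
    q_words = set(question.lower().split())
    ranked = sorted(
        TRUSTED_CORPUS,
        key=lambda p: len(q_words & set(p.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def generate(prompt: str) -> str:
    # Stand-in for an LLM call; a real system would invoke a model API here.
    return f"[model answer grounded in]\n{prompt}"

def answer(question: str) -> str:
    context = "\n".join(retrieve(question))
    prompt = (
        "Answer using ONLY the reference material below.\n"
        f"Reference material:\n{context}\n\n"
        f"Question: {question}"
    )
    return generate(prompt)

print(answer("What does Regulation Z implement?"))
```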

Heitsenrether suggested that the impact of imperfect information needs to be assessed in context and appropriate safeguards put into the workflow.

“People are expecting the models to be 100% accurate, right?” says Heitsenrether. “I’m not sure that humans are always 100% accurate.”

That said, Heitsenrether explained that concerns over accuracy are part of why Chase, to date, has no external-facing client use cases for GenAI.

“What we do have is a human in the loop,” says Heitsenrether, “as long as you insert somebody with judgment at the right points along the chain.” That human in the loop can go back to the source the AI pulled information from and cross-check it.
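
As a sketch of what such a checkpoint might look like in code, assuming hypothetical names and a reviewer callback (this is illustrative, not Chase’s implementation): the model’s draft travels with the source it drew on, so a reviewer can cross-check before anything reaches a client.

```python
# Hypothetical human-in-the-loop checkpoint: the draft answer carries a
# reference to its source so "somebody with judgment" can cross-check it.
# All names are illustrative only.

from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class DraftAnswer:
    text: str
    source: str  # reference to where the AI pulled the information from

def review(draft: DraftAnswer,
           approve: Callable[[DraftAnswer], bool]) -> Optional[str]:
    # The reviewer sees both the draft and its source; rejected drafts
    # never leave the bank.
    return draft.text if approve(draft) else None

draft = DraftAnswer(text="Your annual fee was waived on May 3.",
                    source="account_notes/2024-05-03")
final = review(draft, approve=lambda d: d.source.startswith("account_notes"))
print(final)
```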

In the meantime, while the human factor remains important, she says, “we see a lot of really good results if you put limits around what you’re asking the model. It does accelerate what people do.”

Read more: Nine Takeaways from Citi’s Deep Dive into Gen AI and Banking

Can Banks Sit This Out? Yours Might. Chase Won’t.

Heitsenrether says she is frequently asked how much Chase is spending on AI.

“That is literally an impossible question to answer,” she says. The bank has a $17 billion technology budget. Some goes to cloud capabilities, some to data capabilities, some to cyber capabilities, and some to ongoing tech modernization, she says, but it is impossible to sift out how much of that relates to enabling Chase to tap AI in all forms.

Kwon insists that framing the discussion around AI spending itself approaches the question from the wrong end.

“It’s just a tool,” says Kwon. “You’re investing in customer service, you’re investing in fraud protection, you’re investing in new products and services. AI can help you in each of those areas.”

Is the price tag embedded in Chase’s overall budget number worth the results in those and other areas?

Realistically, says Heitsenrether, not adopting AI is “probably not a choice.”

Take the use of AI to fight fraud in its many forms. “Regardless of whether you are adopting it, the bad guys are adopting it,” says Heitsenrether.

Beyond that, banking continues to face increasing expectations for improving efficiency.

Heitsenrether notes that consulting firms project that AI could boost banking efficiency by 25% to 40%.

If you believe such projections, she says, the last thing Chase wants is to wake up a few years from now and find that its cost to serve customers is markedly higher than competitors’ or that the bank’s agility has suffered.

“So, I think that the technology is here, and very much part of the fabric today,” says Heitsenrether, “and you really need to figure out how best to adapt it in your organization. I’m not sure it’s one of those things you can just decide is not on your list.”

All content © 2024 by The Financial Brand and may not be reproduced by any means without permission.