8 Mistakes That Will Guarantee AI Fails at Your Bank

Now that AI tools are here to stay, banks have to calm down, look beyond the hype, and adopt the technology in an intelligent way. A report from RAND Corp. draws on interviews with 65 AI-focused data scientists to identify key reasons that as many as four out of five attempts to use the technology fail to realize the intended benefits.

Banks and credit unions venturing into the development of any type of artificial intelligence application need to go in for the right reasons, with reasonable expectations, and with their eyes open. Otherwise, they’ll have a good amount of company in the AI graveyard, according to a study from RAND Corporation.

Failures include not only flops that waste time, talent and money, but “successful” efforts that produce dubious results.

The research organization reports that the failure rate for AI projects can run as high as 80%.

“Despite the promise and hype around AI, many organizations are struggling to deliver working AI applications,” RAND states in its report. “… Managers and directors find themselves under enormous pressure to do something — anything — with AI to demonstrate to their superiors that they are keeping up with the rapid advance of technology.”

To get at the causes for so many flops, RAND analysts conducted in-depth interviews in the latter half of 2023 with 65 experienced data scientists and data engineers in both industry and academia.

The reasons compiled from the organization’s interviews frequently stem from human error, ranging from the decision to deploy AI tools where they may not be appropriate to training AI with “dirty” data that produces questionable results.

Most striking: RAND argues that many business leaders fundamentally misunderstand the very nature of AI, setting them up for failure from the get-go.

A key point: “Many business leaders … do not realize that AI algorithms are inherently probabilistic: Every AI model incorporates some degree of randomness and uncertainty. Business leaders who expect repeatability and certainty can be disappointed when the model fails to live up to their expectations, leading them to lose faith in the AI product and in the data science team.”
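That probabilistic behavior is easy to see in a toy sketch. The example below is illustrative only; the candidate words and their probabilities are invented, not drawn from the RAND report. A generative model scores possible outputs and then samples from them, so the same input can produce different answers on different runs.

    import random

    # Toy illustration of probabilistic AI output. The candidate words and
    # their probabilities are invented for this sketch.
    next_word_probs = {"approve": 0.55, "review": 0.35, "decline": 0.10}

    def sample_next_word() -> str:
        words = list(next_word_probs)
        weights = list(next_word_probs.values())
        return random.choices(words, weights=weights, k=1)[0]

    for run in range(3):
        print(f"run {run + 1}: {sample_next_word()}")
    # Because the model samples rather than looks up a single answer,
    # three runs can print three different words.

That variability is normal behavior rather than a bug, which is exactly the expectation RAND says many executives fail to set.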

AI, especially GenAI, has become as popular in banking as innovation labs were a few years ago, and it has brought along many of the practices that came with them. In the study, RAND points out the downside of some of those popular innovation approaches. One example is the agile software development practice of the “sprint” — a concentrated effort to produce new applications in a pressure-cooker environment.

This doesn’t always work, according to the study: “One interviewee noted that, in his experience, work items repeatedly had to either be reopened in the following sprint or made ridiculously small and meaningless to fit into a one-week or two-week sprint.”

RAND’s study identifies eight common reasons for AI project flops:

1. Technology Overkill — Using AI to Solve Simple Challenges

Data scientists told RAND interviewers that business leaders often latch onto AI as their desired solution for a problem because it has buzz.

“As one interviewee explained,” said the report, “his teams would sometimes be instructed to apply AI techniques to datasets with a handful of dominant characteristics or patterns that could have quickly been captured by a few simple ‘if-then’ rules.”

The AI solution achieved from a process like this might even work. However, the study points out, “while these types of projects might succeed in a narrow sense, they fail in effect because they were never necessary in the first place.”
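To make that concrete, here is a minimal, hypothetical sketch of the kind of problem the interviewee describes. The field names and thresholds are invented; the point is that a few plain if-then rules can capture the dominant patterns without training any model.

    # Hypothetical transaction-review check that might otherwise be handed
    # to an AI team. Fields and thresholds are illustrative only.
    APPROVED_COUNTRIES = {"US", "CA", "GB"}

    def needs_review(txn: dict) -> bool:
        if txn["amount"] > 10_000:
            return True          # large transfers always get a second look
        if txn["country"] not in APPROVED_COUNTRIES:
            return True          # unexpected geography
        if txn["new_payee"] and txn["amount"] > 2_500:
            return True          # sizable payment to a first-time payee
        return False

    print(needs_review({"amount": 12_000, "country": "US", "new_payee": False}))  # True

If a handful of rules like these already answers the question, the report’s point stands: an AI model may “succeed” technically yet fail in effect because it was never needed.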

Also, this: “Successful projects are laser-focused on the problem to be solved, not the technology used to solve it.”

Read more: How Associated Bank Uses Data Analytics to Speed New Products and Drive Growth

2. Putting Too Much Faith in AI Techniques

It’s hard to blame business leaders for treating AI as a panacea. Much of what they hear makes it sound like the answer to everything. The study says it’s time to see past hype and sales presentations.

“Optimizing an AI model for an organization’s use case can be more difficult than these presentations make it appear,” the study says.

Read more: An AI System Built for Everything is an AI Built for Nothing

3. Trying to Solve the Wrong Business Problem Using AI

Often, the study found, company leaders don’t target the right problem; instead, they direct data scientists to solve what they believe the issue is and then leave the data people on their own.

“In failed projects, either the business leadership does not make themselves available to discuss whether the choices made by the technical team align with their intent, or they do not realize that the metrics measuring the success of the AI model do not truly represent the metrics of success for its intended purpose,” according to the report.

Read more: Digital is Draining Banks’ Emotional Connections with Customers. GenAI May Make Things Worse

4. Expecting AI Models to Learn Your Business Overnight

In a nutshell, the data scientists told RAND, leaders have no idea how long it takes to get AI right.

“They expect AI projects to take weeks instead of months to complete,” the report says, “and they wonder why the data science team cannot quickly replicate the fantastic achievements they hear about every day.”

A key point for leaders to understand is that even if a so-called “off-the-shelf” AI model is purchased, the journey doesn’t end there. The model still has to be trained on the company’s own data. Until that happens, “it may not be immediately effective in solving the specific business problems.”

Read more: Why it’s Time for Banks to Hire a Chief AI Officer — and What That Looks Like

5. Treating All the Data in Your Bank As If It Is Gold

A decade ago, a common theme among innovation chiefs bringing fintech thinking into traditional banks was that the institutions were gold mines of rich data that could be used to build models. A lesson from the RAND analysts’ interviews is that data varies in quality and some data may not be clean enough to train AI models on reliably.

Sometimes data is simply “dirty” — it was carelessly collected or logged and can potentially taint the end result of AI solutions. Sometimes it was gathered for compliance purposes, rather than for strategic value. In other cases, the data an organization has may lack context that other data would have provided. That deficiency can make the model builders’ job harder.

“Even if an organization has a large quantity of historical data, that data may not be sufficient to train an effective AI algorithm,” the study says.
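Here is a minimal sketch of the kind of data-quality triage the interviewees are describing, assuming the data sits in a pandas DataFrame. The column names and records are hypothetical, not from the report.

    import pandas as pd

    # Hypothetical account-history extract; columns and values are illustrative only.
    df = pd.DataFrame({
        "balance": [1200.0, None, 560.0, 560.0],
        "branch":  ["001", "001", "", "001"],
        "status":  ["open", "open", "closed", "closed"],
    })

    missing_share = df.isna().mean()             # share of missing values per column
    duplicate_rows = df.duplicated().sum()       # exact duplicate records
    blank_branches = (df["branch"] == "").sum()  # blanks logged instead of nulls

    print(missing_share)
    print(f"duplicates: {duplicate_rows}, blank branch codes: {blank_branches}")

Checks like these don’t fix the underlying collection problems, but they reveal quickly whether the supposed gold mine is clean enough to train on.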

Subject matter experts may help with such challenges. Such experts can work with data scientists to help them understand the mission they’ve been asked to address with AI tools. The study points out that these staffers “can explain what the elements in a dataset mean and which ones are — and are not — important or might be unreliable.”

Here’s the rub: Often these experts drag their feet on cooperating with the data scientists simply because they expect AI to kill their jobs, according to the report. (Deep Dive: In “Why AI Tech Projects Often Flop in Banking and What to Do,” Olga Tsubiks, director of strategic analytics and data science at Royal Bank of Canada, describes how such resistance helped delay implementation of AI solutions for years.)

Read more: Deploy AI Without Triggering Employee Alienation and Burnout

6. Not Employing Enough ‘Plumbers’ to Get the AI Job Done Right

Beyond fear of AI destroying one’s job is the perceived difference in status between “data scientist” and “data engineer.” The report points out that datasets generally need at least some cleanup and that this work falls to data engineers, whom one data scientist interviewed referred to as “the plumbers of data science.” Without dedicated data engineers — and enough of them — good model training won’t get done.

Said one expert interviewed by RAND: “80% of AI is the dirty work of data engineering. You need good people doing the dirty work — otherwise their mistakes poison the algorithms. The challenge is, how do we convince good people to do boring work?”

Read more: Why the AI Revolution Is Being Led from Below

7. Investing Insufficiently in Tools to Use What AI Builders Develop

A related point in the report concerns infrastructure — ensuring that the data engineers have the tools to get their jobs done right on an ongoing basis. AI models need continuous feeding with fresh data, the report maintains, and the engineers need tools to automate the cleaning and delivery of good data to the models.
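In its simplest form, the automation the report describes is a scheduled pipeline step that pulls fresh records, cleans them, and hands them to retraining. The sketch below is generic and hypothetical, not a design from the report; the source query and field formats are invented.

    from datetime import date

    # Hypothetical nightly refresh feeding clean data to a model.
    def extract(as_of: date) -> list[dict]:
        # Placeholder for a query against the bank's source systems.
        return [{"amount": "1,250.00", "branch": " 001 "}]

    def clean(rows: list[dict]) -> list[dict]:
        return [
            {
                "amount": float(row["amount"].replace(",", "")),  # normalize number formats
                "branch": row["branch"].strip(),                  # drop stray whitespace
            }
            for row in rows
        ]

    def refresh_training_data(as_of: date) -> list[dict]:
        return clean(extract(as_of))

    print(refresh_training_data(date.today()))

Keeping a step like this automated is what spares data engineers from repeating the cleanup by hand every time the model needs fresh data.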

And there needs to be a firm bridge between the development function for AI and its everyday application. The study indicates that some interviewees had seen situations where AI models couldn’t be used because the production systems weren’t compatible with the development model.

“Investing in data engineers and machine learning engineers can substantially shorten the time required to develop a new AI model and deploy it to a production environment, where it can actually help end users,” the report says.

Read more: Why the Power of GenAI Lies in the Augmentation, Not Automation (or Replacement), of Bankers

8. Letting Data Scientists Learn about New AI on the Company Dime

Leaders aren’t the only ones chasing buzz. While they may want to be able to brag about applications of AI in their organizations, IT staffers aren’t immune to the lure of the cool either.

“Technical staff often enjoy pushing the boundaries of the possible and learning new tools and techniques,” the report says. “Consequently, they often look for opportunities to try out newly developed models or frameworks even when older, more-established tools might be a better fit for the business use case.”

After all, being able to cite experience with the latest tools looks better on a resume or a LinkedIn page than doing the same-old, same-old.

Read more: AI-Assisted Lending Could Boost Small Banks. But Regulatory Fear Stifles Innovation
