After the OpenAI Drama, Should Banks Hit the Pause Button on Generative AI?

The dramatic leadership struggle at OpenAI exposed a deep rift among artificial intelligence pioneers, between those pushing to accelerate its adoption and those increasingly worried about its risks. Presumably, no AI application in banking goes so far as to threaten the future of humanity, as some AI skeptics fear. Nonetheless, this may be a timely moment to focus the industry’s attention on AI’s potential dangers and reinforce best practices in its use.

The November leadership drama that engulfed OpenAI spotlighted a deep debate about the development of artificial intelligence (AI). One camp, sometimes called the “boomers,” believes that AI innovation will come only through unfettered development. The other camp, dubbed the “doomers,” fears that AI will have harmful effects on society, and should be carefully managed, or even constrained.

While he has expressed concerns about some of AI’s implications in the past, OpenAI’s Sam Altman is generally considered a boomer; the OpenAI board members who originally ousted him (and who now have left the board) were more skeptical. The return of Altman was seen as a victory for the boomers. But the debate is far from over, and the episode may highlight the need to think again about the pell-mell race to expand AI’s use and capabilities.

[Chart: Public concern about the dangers of AI grows]

Granted, the uses of AI in banking are unlikely to pose existential threats to the human race, as sometimes posited by the doomers. Nonetheless, as the application of AI and machine learning to the industry speeds ahead with much excitement, the debate spurred by the OpenAI incident provides a useful opportunity for a reality check.

Dig deeper: How OpenAI’s Turmoil Could Impact Banking’s Use of Generative AI

Balancing AI Risks and Rewards in Banking

It has been obvious for years that AI has potentially powerful applications in the banking industry. Among the earliest and most urgent use cases was fraud detection, followed by its deployment in customer service environments. Patrick Reily, cofounder of the global credit assessment firm Uplinq and a pioneer in AI banking applications, refers to these uses as “baby AI—how do you prompt people to ask the right kind of things?” Over time, this has evolved into chatbots and similar interfaces, which are used by most large American banks today.

More complex AI and machine learning applications involve data analysis and streamlining compliance practices. Perhaps the most powerful use of AI is in credit risk assessment. Reily cites the example of a bank in Nigeria that, for a variety of reasons, had a non-performing loan (NPL) rate of four percent.

“There’s not a bank in the United States that would handle four percent NPLs,” he notes, but in Nigeria, rates between 4% and 8% are common. By applying AI to the bank’s risk assessment, Reily’s company drastically improved the bank’s financial models, got the NPL rate below one percent and reduced what had been a two-week review process to a matter of seconds. “That is transformational,” he says with pride.

There is little doubt that banks and other financial institutions will continue to expand their uses of AI. Increasingly, AI is being used to build and maintain investment portfolios, to develop individualized training for employees and to support compliance processes. Soon, Reily predicts, AI will be used for more exotic applications, such as the valuation of portfolio sales and transactions and even to replace some functions of bank supervisors.

The Potential Dangers of AI are Hiding in Plain Sight

You don’t have to be a “doomer” to understand the potential dangers of the expanding use of AI in financial settings. These include:

AI systems reproduce the biases that are fed into them. There are well-documented episodes of AI systems — in hiring, for example — reproducing the existing biases embedded in the datasets and models they rely on.

When it comes to lending in the U.S., discrimination is already illegal under rules such as the Equal Credit Opportunity Act, and Section 1071 of the Dodd-Frank Act adds reporting requirements for small business lending. Yet it would be very easy to develop an AI algorithm — deliberately or inadvertently — that discriminated against women-owned, minority-owned and small businesses.

Complexity alone can be a threat. Regulators have been clear for years that people who are turned down for credit or loans are owed a simple, easily understandable rationale as to why. But AI algorithms can create models based on hundreds, if not thousands, of variables, which can make it difficult for a typical customer service representative to explain the algorithm’s decisions and reasoning.
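
To make the explainability challenge concrete, here is a minimal sketch that reduces a hypothetical credit model’s decision to its top “reason codes”: the features that pushed the score furthest toward decline. The feature names and weights are invented for illustration; real adverse-action explanations under U.S. regulations require far more rigor than this.

```python
# A minimal sketch of turning a scored credit decision into "reason codes."
# The feature names and weights are hypothetical, for illustration only.
import numpy as np

FEATURES = ["credit_utilization", "late_payments_12m",
            "months_since_oldest_account", "income_to_debt_ratio"]

# Assumed standardized model weights (negative values hurt approval odds).
WEIGHTS = np.array([-0.9, -1.4, 0.5, 1.1])

def top_decline_reasons(applicant: np.ndarray, n: int = 2) -> list[str]:
    """Rank features by how strongly each pushed the score toward decline."""
    contributions = WEIGHTS * applicant      # per-feature effect on the score
    worst = np.argsort(contributions)[:n]    # most negative contributions first
    return [FEATURES[i] for i in worst]

# An applicant with high utilization and several recent late payments.
applicant = np.array([1.8, 2.0, -0.3, -0.5])  # standardized feature values
print(top_decline_reasons(applicant))
# -> ['late_payments_12m', 'credit_utilization']
```

Even this toy version shows the point: with four features the explanation is legible, but with hundreds or thousands of interacting variables, a simple contribution ranking stops mapping cleanly onto anything a customer service representative can say.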

Cybersecurity dangers are heightened. Ever since OpenAI’s ChatGPT was released in late 2022, many large companies have prohibited its use on company computers, out of fear that sensitive company data might accidentally be leaked into the program and become exposed on the internet. This is an especially acute concern for banks, which maintain highly sensitive information about their customers. Moreover, as banks increasingly deploy AI solutions, cybercriminals will home in on those operations, in the hope that they are the institution’s most vulnerable point.

[Chart: Public perception of the impact of AI varies by use]

Emerging Guardrails for the Deployment of Generative AI

Even though banks have used some forms of AI for decades, it is still the early days for the technology. Almost all markets, including banking, have yet to establish a regulatory framework for its use, and that could be years away. Nonetheless, a few best practices are beginning to emerge. They include:

Know what you’re doing. The bizarre drama of the OpenAI episode highlights the need for effective governance and a competent board of directors. It is rare that banks and credit unions possess the in-house expertise to give AI investments the necessary oversight. Hiring programmers will not be enough; supervisors and salespeople will require extensive training.

Keep it as simple as possible. The complexity of AI is inevitable, given the sophistication of its algorithms. But everyone — from regulators to bank employees to customers — will benefit from as much transparency and simplicity as possible.

Keep it human. AI and machine learning programmers have a useful term: human-in-the-loop. That is, while AI algorithms can develop efficiencies and new methods of doing business, their performance should always be guided by a human being to ensure that a company’s basic values are being upheld.
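
Here is a minimal sketch of what human-in-the-loop can mean in practice: the model acts on its own only when its confidence is high and routes every borderline case to a human underwriter. The thresholds and queue below are illustrative assumptions, not a production design.

```python
# A minimal human-in-the-loop sketch: the model decides only when it is
# confident; borderline cases are queued for a human underwriter.
# Threshold values are assumptions a real institution would tune and govern.
from dataclasses import dataclass, field

AUTO_APPROVE = 0.90  # assumed minimum confidence for automatic approval
AUTO_DECLINE = 0.10  # assumed maximum confidence for automatic decline

@dataclass
class ReviewQueue:
    pending: list = field(default_factory=list)

    def route(self, application_id: str, approval_prob: float) -> str:
        if approval_prob >= AUTO_APPROVE:
            return "approved"
        if approval_prob <= AUTO_DECLINE:
            return "declined"  # still owed a clear explanation, per the above
        self.pending.append(application_id)  # a person makes the final call
        return "human_review"

queue = ReviewQueue()
for app_id, prob in [("A-101", 0.97), ("A-102", 0.55), ("A-103", 0.04)]:
    print(app_id, queue.route(app_id, prob))
# A-101 approved / A-102 human_review / A-103 declined
```

The design choice worth noticing is that the human does not review every decision, which would erase the efficiency gains, but is guaranteed to see the cases where the model is least sure.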

There can’t be enough security. The data that banks possess is almost as important as the deposits they hold. Banking data breaches can be extremely costly, and AI is still new and unknown enough to create problems that legacy staff may never have contemplated.

Build a fence around lending. If there is any area where AI could most get a bank into trouble, it’s lending. It is far too easy to offer up products that are mismatched to customers, or to discriminate against applicants in potentially explosive ways. The banks that do AI best will be the ones that triple-check these areas of their business, monitoring and auditing their AI systems for potential ethics and compliance violations.
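
One simple form that monitoring can take is a recurring fairness audit comparing approval rates across applicant groups. The sketch below applies the “four-fifths” rule of thumb borrowed from U.S. employment practice; it is a screening heuristic, not a complete compliance test, and the sample data is fabricated.

```python
# A minimal fairness-audit sketch: compare approval rates across applicant
# groups and flag large gaps using the four-fifths rule of thumb.
# Group labels and decisions are fabricated for illustration.
from collections import defaultdict

def approval_rates(decisions: list[tuple[str, bool]]) -> dict[str, float]:
    """Compute the share of approved applications per group."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        approvals[group] += approved
    return {g: approvals[g] / totals[g] for g in totals}

def flag_disparate_impact(rates: dict[str, float], threshold: float = 0.8) -> list[str]:
    """Flag groups approved at less than `threshold` of the best group's rate."""
    best = max(rates.values())
    return [g for g, rate in rates.items() if rate < threshold * best]

sample = ([("group_a", True)] * 80 + [("group_a", False)] * 20
          + [("group_b", True)] * 55 + [("group_b", False)] * 45)
rates = approval_rates(sample)
print(rates)                         # {'group_a': 0.8, 'group_b': 0.55}
print(flag_disparate_impact(rates))  # ['group_b'] -- approved at 69% of group_a's rate
```

An audit like this only catches disparities it is pointed at, which is why the guardrail above calls for monitoring to be systematic rather than ad hoc.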

The positive news around AI and banking is that almost no one can claim to have figured the whole thing out. There is plenty of room for experimentation and innovation, so long as the necessary precautions are in place.

James Ledbetter is the editor and publisher of FIN, a Substack newsletter about fintech. He is the former editor-in-chief of Inc. magazine and former head of content for Sequoia Capital. He has also held senior editorial roles at Reuters, Fortune, and Time, and is the author of six books, most recently One Nation Under Gold.

This article was originally published on . All content © 2024 by The Financial Brand and may not be reproduced by any means without permission.