ChatGPT Will Become ‘ChatOMG!’ in 2024, Forrester Predicts

As ChatGPT and other large language models become more prevalent, there will be trouble. Forrester says eight neobanks and two large traditional banks will run afoul of regulators and consumers in 2024. Tightening controls, and ensuring compliance with them, is a key starting point.

At least 10 U.S. banking providers will get tripped up by generative artificial intelligence in 2024, according to Forrester.

The consulting firm foresees some combination of regulatory damage and consumer lawsuits befalling eight neobanks and two major traditional banks, after rogue employees or third-party vendors sidestep internal controls to use genAI.

“ChatGPT will turn into ChatOMG!” as a result of this lack of genAI control, its report says.

The Forrester prediction came out the same day that the Biden administration issued its third executive order concerning use of artificial intelligence by U.S. businesses. Parts of the lengthy order — which runs 70 pages in all — form a “to do” list for federal agencies, including banking regulators. The result will likely be increased regulation and examiner attention in this area.

Many banks restrict or bar genAI outside of officially sanctioned pilots, Forrester noted. Ally Bank’s approach to genAI in marketing — which Andrea Brimmer, chief marketing and PR officer, detailed in an interview with The Financial Brand — is an example of going the “pilot” route.

But Forrester anticipates situations where bank employees or outside vendors trigger issues for their organizations, whether through carelessness or by flouting the rules. It says they could cause harm in any number of ways, including by violating a copyright, accidentally using consumer information, failing to offset AI-introduced bias, polluting synthetic data, and opening up the bank to consumer compensation claims. (Synthetic data is information generated by algorithms to imitate real-world data, typically used to test systems without any risk of compromising confidential customer information.)
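To make the synthetic-data concept concrete, here is a minimal sketch in Python of how a test dataset might be generated without ever touching real customer records. The field names and value ranges are illustrative assumptions, not any particular bank’s schema.

```python
import random
import string

# Illustrative sketch: generate fake customer records that mimic the
# *shape* of real data (names, account numbers, balances) while
# containing no actual customer information.

FIRST_NAMES = ["Ana", "Ben", "Chloe", "Dev", "Elena", "Farid"]
LAST_NAMES = ["Iyer", "Jones", "Kim", "Lopez", "Meyer", "Novak"]

def synthetic_customer(rng: random.Random) -> dict:
    """Return one synthetic customer record for use in testing."""
    account = "".join(rng.choices(string.digits, k=10))
    return {
        "name": f"{rng.choice(FIRST_NAMES)} {rng.choice(LAST_NAMES)}",
        "account_number": account,
        "balance_usd": round(rng.uniform(0, 250_000), 2),
    }

if __name__ == "__main__":
    rng = random.Random(42)  # seeded, so test runs are reproducible
    for _ in range(3):
        print(synthetic_customer(rng))
```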

Beyond this forecast, Forrester’s security and risk team also predicts more data breaches, as well as fines, resulting from security flaws in computer code generated by artificial intelligence.
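The class of flaw that warning covers is easy to picture. In the hypothetical Python sketch below, the first function contains the sort of SQL injection vulnerability that can slip into AI-generated code when someone asks an assistant for a quick database lookup; the second is the parameterized version a security review should insist on.

```python
import sqlite3

def find_customer_unsafe(conn: sqlite3.Connection, name: str):
    # BAD: user input is interpolated directly into the SQL string.
    # An input like "x' OR '1'='1" makes the query return every row.
    return conn.execute(
        f"SELECT * FROM customers WHERE name = '{name}'"
    ).fetchall()

def find_customer_safe(conn: sqlite3.Connection, name: str):
    # GOOD: a parameterized query lets the driver escape the input.
    return conn.execute(
        "SELECT * FROM customers WHERE name = ?", (name,)
    ).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE customers (name TEXT, balance REAL)")
    conn.executemany("INSERT INTO customers VALUES (?, ?)",
                     [("Ana Iyer", 1200.0), ("Ben Kim", 88.5)])
    # Both functions look identical from the outside; only one is safe.
    print(find_customer_safe(conn, "Ana Iyer"))
```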

Forrester on Generative AI: ‘Everyone’s at Risk’

“We think the potential risk here is widespread,” Peter Wannemacher, principal analyst at Forrester, says in an interview with The Financial Brand. “In every financial organization, it’s very easy for an employee to accidentally expose data that should not be exposed to a large language model.”

Large language models are fed vast quantities of writing so they can “learn” to create writing of their own in response to prompts. The best-known LLMs are OpenAI’s GPT models, which power ChatGPT, and Google’s Bard.
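As one illustration of the kind of internal control at stake, the hypothetical Python sketch below screens an outbound prompt for text that looks like customer PII before anything is sent to a third-party LLM. The regex patterns are deliberately simple assumptions, nowhere near a complete PII detector.

```python
import re

# Hypothetical guardrail: scan a prompt for likely customer PII and
# redact it before the prompt leaves the bank for an external LLM.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(prompt: str) -> tuple[str, list[str]]:
    """Replace likely PII with placeholders and report what was found."""
    findings = []
    for label, pattern in PII_PATTERNS.items():
        if pattern.search(prompt):
            findings.append(label)
            prompt = pattern.sub(f"[REDACTED {label.upper()}]", prompt)
    return prompt, findings

if __name__ == "__main__":
    raw = "Summarize this complaint from jane@example.com, SSN 123-45-6789."
    clean, hits = redact(raw)
    print(clean)  # placeholders instead of the raw PII
    print(hits)   # ['ssn', 'email'] -> log for compliance, or block entirely
```

In practice a screen like this would sit in a gateway between employees and any external LLM, with findings logged for the compliance team rather than left to individual judgment.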

Consumer banking is the sector of the industry most at risk, according to Wannemacher.

“Large language models are pretty darn opaque, and banks know that. So they’re designing their compliance and regulatory efforts to try to avoid any exposure. But there’s going to be slip-ups.”

— Peter Wannemacher, Forrester

Forrester’s prediction is oddly specific. But Wannemacher stressed that the firm does not have a secret list of the banks and neobanks headed for trouble. The analysts drew on their collective experience in banking to assess the likelihood of things going wrong.

“We don’t predict that any of the 10 will be malicious actors,” Wannemacher says. “There will be malicious actors using genAI, but in 2024 the problem will be errors by actors within a company, the bank or neobank.”

Large institutions typically have stronger controls and governance structures than smaller ones. They also have experts on board with years of experience in ensuring regulations are followed. But there are many places where a leak can happen, which is why Forrester expects at least two sizable banks to get tripped up by genAI.


In comparison to those large institutions, neobanks simply have far less bench strength. The increasingly popular banking-as-a-service arrangements also create extra risk for them. Neobanks using those arrangements must avoid misuse of genAI on two fronts — by their own organization and by their bank partners. With the experience gap and the expanded exposure to risk, Forrester thinks genAI will seriously trip up at least eight of them.

In an article titled “AI Governance in the Financial Industry” from the Stanford Journal of Law, Business & Finance, the authors suggest that it may be difficult to assign blame for genAI missteps: “Different financial market players will have different levels of obligation and liability. Following this pathway allows a more comprehensive framework in which all actors can bear some amount of obligation and responsibility. No one can simply say, ‘It’s the artificial intelligence’s fault.'” (Acting Comptroller of the Currency Michael Hsu cited the article during a speech.)

Wannemacher stresses that no company should relax in genAI’s early days.

“I’m not trying to be a fearmonger,” says Wannemacher, “but everyone’s at risk of being one of those 10. The nature of something as new as ChatGPT and as newly widespread as LLMs is that it is very easy to run afoul of controls and accidentally expose your company and your customers’ data.”

Read more: 4 Steps to Dodge Trouble When Using Generative AI


The Regulatory Viewpoint on Generative AI

Federal banking regulators have been looking at how the industry uses artificial intelligence for some time. This includes a major interagency fact-gathering effort in 2021.

Acting Comptroller Hsu, in a June speech discussing genAI and other AI tech, suggested following three guidelines: innovate in stages, build the brakes while building the innovation engine, and engage with regulators early and often. He also urged having risk and compliance professionals at the table throughout the process, from gestating ideas to implementing them.

“Asking for permission, not forgiveness, from regulators will help ensure the longevity of rapid and transformational innovations,” Hsu said. “The pressure to be a first mover and take advantage of network effects can incentivize firms to release first and engage with regulators later. This ‘ask for forgiveness’ approach may work in certain technology contexts. But it doesn’t work in banking and finance, where public trust is critical to long-term product success, and regulatory approval is a proxy for that trust.”

Loan bias that develops inside the black box of artificial intelligence based on its “learnings” has been a concern for years. Hsu’s comments urging caution echo those of many others, including Rohit Chopra, the director of the CFPB, and members of the Federal Reserve Board.

In announcing the “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” President Biden spoke of requiring companies to prove that “their most powerful systems are safe before allowing them to be used.” He said this would include sharing the results of independent testing. His speech did not give details, and the devil will be in the details. But one of the requirements he touched on is the Department of Commerce’s task of helping to develop standards for “watermarking” genAI-created content.

The CFPB and housing regulators are tasked by the executive order with preventing bias in housing and housing finance arising from AI. Cybersecurity risks for financial institutions arising from AI will be the subject of a mandatory report by the Treasury Department.


AI Governance for Financial Institutions

A separate Forrester report, “Get AI Governance Just Right,” makes some key points for financial institutions.

Forrester defines governance of artificial intelligence as follows: “Practices that business leaders adopt to incorporate purpose, culture, action, and assessment to ensure AI delivers desired business outcomes, is responsibly used, and complies with applicable regulations.”

The report explains that adopting a “hub and spoke” approach will help an institution better monitor the use of AI internally.

That’s because, as AI permeates the workplace, much will lie in employees’ hands: they’ll be figuring things out as they go, as more and more of their work incorporates AI. Building in two-way communication between the central hub and those employees may help prevent unauthorized use of a large language model by rogue employees from getting the entire organization in trouble.

Whenever possible, AI governance should follow the structure of existing practices and roles at an organization, the report says. This can reduce friction as AI comes under the governance framework.

All content © 2024 by The Financial Brand and may not be reproduced by any means without permission.