When Google’s Gemini chatbot manufactured historically inaccurate images of uniformed Black and Asian Nazis last month, only weeks after its hyped debut, the high-tech blunder did more than highlight the human-governed risks of generative artificial intelligence. The debacle also surfaced an urgent question: If a sophisticated multinational technology giant can get things so wrong, how can banks and other financial institutions ensure the viability of their own projects?
What’s known as AI governance — broadly, the proper stewardship of the technology — is an evolving term of art with high stakes for banks. Get it right, and the banking industry could unlock additional value of up to $340 billion a year — equivalent to 15% of annual operating profits over 2020–2022 — through enhanced customer satisfaction, improved decision making and employee experience, and better monitoring of fraud and risk, according to McKinsey.
Mess it up, and a financial institution’s black-box model could fail to flag a looming wipeout on a large customer’s balance sheet or nascent cybercriminal attack or shoot out tone-deaf product pitches that cause financial and reputational damage.
“The Gemini incident shows that AI failures, if not handled properly, can lead to significant reputational damage, public criticism, and loss of trust, even for a respected brand like Google,” says Kartik Hosanagar, a professor of operations, information and decisions at The Wharton School. “Banks run on consumer trust, and anything that affects trust is very problematic.”
The Human-Created Guardrails of AI
IBM broadly defines AI governance as a series of “guardrails” that establish “the frameworks, rules and standards that direct AI research, development and application to ensure safety, fairness and respect for human rights.”
But banks must also worry about legal and regulatory rules, along with accuracy, trust, transparency and avoidance of biases and blindspots — the latter among the factors that tripped up Gemini.
Financial institutions have already seen — and continue to see — versions of this movie. In 2010, high-frequency trading algorithms fueled a “Flash Crash” that vaporized $1 trillion in value from U.S. stock indices. More than a decade after Wells Fargo settled claims of discrimination against Black borrowers, the Wall Street bank still faces a class-action lawsuit alleging that its computerized credit-scoring models discriminate against them.
Banks have been using machine learning for decades to authenticate clients, analyze customer bases, liaise with customers through automated systems and offer robo-advice to retail investors. Conversational, or “weak,” AI, embodied by Siri and Alexa, relies on pattern recognition and rules-based decision trees.
Now the industry’s recent embrace of more complex generative AI models, which use larger data sets to create images and videos as well as text, is exposing a matrix of operational, regulatory, legal, reputational, ethical and societal risks.
“The technology is moving at warp speed,” says Robin Feldman, a professor at UC Law San Francisco. She argued in a recent academic paper that AI is a capacity, not an actor, and thus will require legal and regulatory frameworks that differ from the long-standing ones governing entities and individuals. While the rapid pace at which AI is advancing following ChatGPT’s debut in November 2022 “makes it difficult for anyone to keep up with it,” Feldman adds, banks “need to understand both the benefits and the harms that AI can bring.”
Trillions of Dollars in New Value
Globally, banks will spend $6 billion this year on AI technologies, a figure projected to reach $85 billion by 2030 — an increase of more than 1,300%, according to Juniper Research.
In a survey last October by Google’s cloud computing services platform of 350 U.S. banking executives responsible for gen AI decision making, 95% of respondents said the technology had the potential to transform the industry. Nearly four in 10 executives at national banks (38%) said that gen AI will deliver cost savings of 61% to 80% over the next five years.
Just over four in 10 said the technology will drive the most significant revenue growth by improving investment research (41%), while 38% said it would fuel more effective marketing and customer segmentation and better customer acquisition and retention strategies. Deutsche Bank sees the three big applications for banking as structuring investment portfolios, evaluating portfolio risk and tracking down cybercriminals.
Amid all the potential, banks are pouring dollars into the technology. Nearly one in four global banks, insurers and investment firms cited “significant investments” in gen AI in a September 2023 survey by International Data Corp., a market intelligence company.
Nearly four in 10 said they were conducting initial testing and proof-of-concept projects. “A little over a year ago, generative AI was a relatively unknown technology,” IDC says in a report about its survey, adding that the adoption of the technology “is proceeding at a faster pace than even cloud, albeit with arguably more concerns about data security, privacy, and the accuracy of its models.”
The ‘Wild West’ of AI
Those concerns, however, tend to get overshadowed by discussion of AI’s potential to reshape productivity and create trillions of dollars of new value.
That’s where the governance question comes in. What S&P Global calls the AI governance challenge refers to the mishmash of global regulatory frameworks and proposals aimed at establishing a road map for businesses and organizations that use AI. Banks that “best utilize AI’s potential” could “tilt the competitive landscape” in their favor by unlocking new revenue opportunities and cost reductions, the ratings agency says. Those that don’t may fall behind on improving processes for risk management, loss mitigation, fraud prevention and customer retention — and see assessments of their credit quality slip.
For banks, the challenge is akin to piloting a spacecraft without an instruction manual, training or a system of landing coordinates. “It’s a Wild West,” Feldman adds.
At least one gauge purports to capture the wide variation in how banks are throwing themselves into AI. Last November, Evident Insights, a London-based startup that tracks the growth of AI across financial institutions, ranked 50 of the largest banks in North America, Europe, and Asia on their “AI readiness” according to talent, innovation, leadership and “transparency of responsible AI activities,” a criterion that includes governance. Overall, JPMorgan Chase came in first place; Bank of America was #15; Schwab ranked #47; and Raleigh, N.C.-based First Citizens was last. While Goldman Sachs and Citibank ranked in the top 10 overall, they had low marks (#49 and #31, respectively) on transparency.
If the Gemini debacle did one big thing for banks, it served as an object lesson in the multi-layered risks of deploying complex algorithms intended to reflect, predict and drive human and institutional behavior.
“Even a highly sophisticated tech company like Google struggled with putting proper guardrails in place for their AI system Gemini,” says Wharton’s Hosanagar. “Rigorous testing, evaluation, and implementing safeguards against biases or offensive outputs is absolutely critical before deploying AI systems, especially consumer-facing ones.”
Lynnley Browning is an award-winning business editor and writer who has worked at Bloomberg, The New York Times, Financial Planning magazine and Reuters, in New York and Moscow. She has a deep background in investing, tax, personal finance, retirement, wealth management and asset management.