How to Mitigate Risk and Consumer Fear As Banks Adopt GenAI

Backers are convinced that generative artificial intelligence can improve banking services and efficiency in numerous ways. But can the industry achieve both without exposing customers to data hacking or falling victim to AI 'hallucinations'? Mastercard research takes a deep dive into the key dilemmas.

The report: Generative AI: The Transformation of Banking

Published: December 2023

Source: Mastercard

Why we picked it: A good deal of what’s written about ChatGPT and other forms of generative artificial intelligence amounts to fanfare — and by that we mean fare for fans. Some authors exhibit not the slightest skepticism or caution about GenAI.

Mastercard’s “paper” — actually, it’s a long interactive web presentation — presents a positive yet balanced picture. The report focuses exclusively on banking and is a companion to the company’s earlier report about GenAI’s potential impact on commerce. It details what the banking industry could gain from adopting GenAI (especially large language models like OpenAI’s ChatGPT family) in 10 areas, ranging from talent-spotting and marketing to the creation of next-generation chatbots. The flip side concerns multiple risks GenAI users — and their customers — will face.

Executive Summary

“The rate at which the tech becomes commonly adopted should hinge on the industry’s ability to ensure the accuracy of outputs, integrate safeguards and ethical standards, and comply with global regulations.”

This early statement sets up much of the report that follows.

It’s a tight package, highlighting pros and cons as the industry moves from toe-dipping with internal uses to, eventually, external applications. Some sections contain thumbnail case studies of how financial institutions have used GenAI in each area. Beyond that, the study also sketches out challenges that GenAI will present for banks and credit unions and how those challenges could be mitigated.

3 Points Banks and Credit Unions Must Ponder About GenAI Implementation
Mastercard’s GenAI study poses three questions that any banker involved in evaluating or applying the technology could stick on their wall as they work:

• How can banks protect information while using AI to enhance service delivery?

• What measures can ensure data integrity in light of cyber threats and misinformation?

• Can AI systems serve all customers equitably, free from prejudices that may impact human decision-making?

Not exactly Isaac Asimov’s Three Laws of Robotics, but it’s a decent yardstick.

We learned from this report that the industry may need to move beyond treating the big name large language models as Swiss Army knives that can tackle any task. Large language models can be designed for specific tasks — Google’s SecPaLM is used for security applications, for example — or the big-name ones can be custom fitted.

In fact, as use of GenAI becomes mainstream, the report suggests that institutions may tap multiple programs, depending on what they are trying to achieve or solve. (Take a deeper dive: Ally Financial developed its own AI hub. Initially it will use ChatGPT, but this hub may eventually include other large language models as well.)

Another interesting section covers the radically different ways five countries and the European Union have chosen to govern use of GenAI.

Read more: ChatGPT in Banking: Balancing Its Promise and Its Risks


Key Takeaways

• The financial services industry has greater exposure than most to “hallucinations” and “mirages” — errors and inaccuracies that arise from poor GenAI training or imprecise prompts input by AI programmers. “False data could mislead investors or, in extreme cases, shock the economic system,” the study says.

• So far this challenge has been addressed by banks using GenAI in conservative, carefully controlled ways. For example, some use it as a glorified Google, generating research a human will work with. One leading-edge solution: “immunizing” large language models by intentionally feeding them prompts that are false or challenging. The idea is that the AI will learn to handle such inputs.

• “GIGO” (“garbage in, garbage out,” decades old in computing circles) still applies. Feeding AI systems cleansed data will produce better results.

• People themselves may help keep AI honest. Mastercard suggests human reviewers from both inside and outside the institution (who may not even be familiar with the technology) should have an important role advising how a bank should use AI.

• AI enhancements may hang on how willing customers are to allow institutions to track their use of a bank’s apps. Institutions will likely seek their permission before using such data.
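The “immunizing” idea in the bullets above can be sketched as a simple red-team harness: feed the model prompts built on false premises and flag any answer that repeats the falsehood. Everything here is hypothetical, since the report names no tooling; `ask` is a stub standing in for a real LLM call.

```python
# Hypothetical adversarial "immunization" audit for a banking chatbot.
# Each entry pairs a prompt containing a false premise with the claim
# the model must NOT repeat in its answer.
ADVERSARIAL_PROMPTS = [
    ("Since FDIC insurance covers $1M per account, should I split deposits?",
     "$1M"),
    ("Given that wire transfers are always reversible, is fraud risk low?",
     "always reversible"),
]

def ask(prompt: str) -> str:
    # Stub standing in for a real model call. A hardened model should
    # correct the false premise instead of building on it.
    corrections = {
        "$1M": "FDIC insurance covers $250,000 per depositor, per bank.",
        "always reversible": "Wire transfers are generally irreversible.",
    }
    for claim, fix in corrections.items():
        if claim in prompt:
            return fix
    return "I can't verify that premise."

def audit(prompts) -> list:
    """Return the prompts whose false claim leaked into the answer."""
    return [p for p, claim in prompts if claim in ask(p)]

failures = audit(ADVERSARIAL_PROMPTS)
print(f"{len(failures)} of {len(ADVERSARIAL_PROMPTS)} prompts leaked a false claim")
```

In practice the false-prompt set and the leak check would be far richer (semantic matching, not substring matching), but the loop structure — probe, answer, flag — is the core of the technique the report describes.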

Read more: Why AI Tech Projects Often Flop in Banking and What to Do

Our Take

What we liked: This is a strong primer for any bank or credit union that has started to explore GenAI. Featured quotes from Mastercard experts are pithier and more relevant than such popouts usually are.

What we didn’t: While there are thumbnail GenAI use cases, all are success stories. There aren’t any problem thumbnails — or even horror stories. (Take a deeper dive: ChatGPT Will Become ‘ChatOMG!’ in 2024, Forrester Predicts)

Pet peeve: Not a fan of scrolling through yards of fancy graphics when the interaction is minimal. How about a nice clean PDF alternative?

Things that made us go “Hmm.” In a section about protecting customer and bank data, we encountered this sentence: “Though currently in its early stages, homomorphic encryption, a technique that allows computations to be performed on ciphertext, could play a role here in the future.”

Curious, we looked up some terms. “Ciphertext” is plain text that has been run through a cipher — an encryption algorithm — to render it unreadable without the key. “Homomorphic encryption” is an encryption technique that produces ciphertext on which computations can be performed while it remains encrypted.

“This allows data to be encrypted and outsourced to commercial cloud environments for processing, all while encrypted,” says the Wikipedia entry on homomorphic encryption. The idea is to eliminate a weak point where hackers might infiltrate and grab data while it is in the clear.
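The idea is easier to grasp with a toy example. The sketch below implements the Paillier cryptosystem, a classic additively homomorphic scheme (our choice for illustration; the report doesn’t name one). Multiplying two ciphertexts produces an encryption of the sum of the plaintexts, so a server can total values it cannot read.

```python
import math
import random

# Toy Paillier cryptosystem: additively homomorphic, meaning that
# multiplying two ciphertexts yields an encryption of the SUM of the
# plaintexts. Tiny primes for illustration only; real deployments use
# 2048-bit moduli.
p, q = 17, 19                    # demo primes (insecure on purpose)
n = p * q
n2 = n * n
lam = math.lcm(p - 1, q - 1)     # Carmichael function of n
mu = pow(lam, -1, n)             # modular inverse of lam mod n (g = n + 1)

def encrypt(m: int) -> int:
    # c = (n+1)^m * r^n mod n^2, with r random and coprime to n
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    # L(c^lam mod n^2) * mu mod n, where L(x) = (x - 1) // n
    return ((pow(c, lam, n2) - 1) // n * mu) % n

# The "computation on ciphertext": the server sums balances it cannot read.
c = (encrypt(120) * encrypt(85)) % n2
print(decrypt(c))   # 205, recoverable only by the key holder
```

This is exactly the weak point the Wikipedia passage describes: the cloud side never sees 120, 85, or 205 in the clear, yet still performs the addition.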

Quotable: “Neural networks’ inner workings are often as enigmatic as they are complex, making it hard to discern the ‘why’ behind an AI’s decision.”


6 Ways Governments are Trying to Oversee Adoption of GenAI

Around the world, governments have been trying to get their arms around this technology:

1. United States: The Biden administration has generally relied on executive orders on AI, the most recent issued in October 2023. Legislative activity has also occurred at the federal level, and some states are taking their own measures.

2. China: The government was early with AI rules. Training data fed to GenAI is subject to stringent control.

3. European Union: The EU’s Artificial Intelligence Act, pending as yearend approached, is intended to balance protection of the public against AI’s benefits. The Act is expected to see final passage in early 2024. It has been described by an expert on E.U. law as “the world’s first comprehensive, horizontal and binding AI regulation.”

4. United Kingdom: The U.K. plans to rely on existing laws and rules to govern GenAI, although it may include rules governing specific issues as they arise. The report says that the U.K. wants to ensure that innovation isn’t stifled.

5. Brazil: An interesting twist here is that AI providers must be able to explain how their solutions produced specific results. Another twist: Consumers have the right to know when they are communicating with AI.

6. India: India’s regulators are attempting to strike a balance between risk mitigation and innovation. The government also plans to set ethics guidelines for GenAI development.

Want to go deep on AI best practices for banks?

Attend our AI Masterclass — Unlocking the Power of Artificial Intelligence in Banking — at The Financial Brand Forum 2024 on May 20-22 in Las Vegas. Led by Ron Shevlin, chief research officer at Cornerstone, this three-hour workshop will be jam-packed with lessons learned from industry leaders and real-world case studies.

For more information and to register, check out the Forum website.

This article was originally published on . All content © 2024 by The Financial Brand and may not be reproduced by any means without permission.