Artificial intelligence technology has great potential in banking, much of it still untapped. Its use in powering chatbots and digital assistants through natural language processing is one of the best-known AI applications.
AI can also be used in data analytics, helping banks and credit unions detect fraud more quickly on the one hand and create more personalized customer messaging and offers on the other. Significantly, AI can help institutions, bank and nonbank alike, make faster lending decisions.
However, there is a downside to the use of artificial intelligence, and its consequences loom ominously for banks and credit unions: the inability of banks that implement AI to explain how it arrived at a given outcome. In computer engineering circles this is known as the “black box” problem: a black box is a system that takes inputs and produces outputs but provides no account of how those outputs were determined.
What's Inside the Black Box:
Banks often have trouble explaining why an AI model arrived at a certain decision, especially why a loan application is denied or approved.
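To make the gap concrete, here is a minimal sketch of a black-box scoring model, built with scikit-learn on purely synthetic data. The feature names and figures are hypothetical, not drawn from any real lending model.

```python
# A minimal sketch of the black-box problem, using scikit-learn and
# purely synthetic data (feature names and figures are hypothetical).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
features = ["income", "debt_ratio", "credit_age_months"]
X = rng.normal(size=(1000, 3))                     # stand-in applicant data
y = (X[:, 0] - X[:, 1] + rng.normal(size=1000) > 0).astype(int)

model = GradientBoostingClassifier().fit(X, y)

applicant = X[:1]
score = model.predict_proba(applicant)[0, 1]
print(f"approval score: {score:.2f}")
# The model returns a number, but nothing here says *why* it did.
# That gap between output and rationale is the black box.
```

Nothing in that code surfaces a rationale for the score; the sections below describe how explainable-AI techniques try to close that gap.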
This is increasingly becoming an issue in financial services. Many regulators are already examining how AI in lending might lead to model bias, for example. Additionally, the Consumer Financial Protection Bureau (CFPB) said in February that it has “outlined options to ensure that computer models used to help determine home valuations are accurate and fair.”
Many banks face these explainability challenges when it comes to deploying their AI models, according to a report from Deloitte.
“The ‘black-box’ conundrum is one of the biggest roadblocks preventing banks from executing their artificial intelligence (AI) strategies,” the report states. “Machine learning models tasked with identifying patterns in data, making predictions, and solving complex problems are often opaque, obscuring their under-the-hood mechanisms. Deploying such models without explainability poses risks.” (Machine learning (ML) is often regarded as a subset of artificial intelligence.)
Risks Grow Under CFPB’s Spotlight
A lack of explainability can preclude many banks from taking advantage of cutting-edge AI applications, Alexey Surkov, a risk and financial advisory partner for Deloitte, tells The Financial Brand.
“Banks’ natural risk aversion has probably kept them from using advanced technologies such as AI and ML, and for good reason,” he states. “New models and new technologies come with new risks.” Surkov believes that as banks get better at managing those risks, and as regulators get more comfortable that banks have these risks under control, there will likely be more use of these advanced techniques. That may not happen quickly.
Under the Biden administration, the CFPB in particular has taken a more aggressive approach to oversight of banking practices, including the use of AI in lending. The Venable law firm notes that the CFPB warned financial institutions in a May 2022 circular that anti-discrimination law applies to the use of black-box credit models.
“According to the CFPB, the anti-discrimination law requires companies to explain to applicants the specific reasons for denying an application for credit or taking other adverse actions, even if the creditor is relying on credit models using complex algorithms,” Venable states.
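To illustrate what satisfying such a requirement might involve, here is a hedged sketch that turns per-applicant model attributions into candidate reason statements. It reuses the hypothetical model, features, and applicant from the earlier sketch and assumes the open-source shap library; the reason wording is illustrative only, not regulatory language.

```python
# A hedged sketch: ranking an applicant's negative feature contributions
# to surface candidate "specific reasons" for an adverse-action notice.
# Reuses the hypothetical `model`, `features`, and `applicant` from the
# earlier sketch; assumes the open-source `shap` library is installed.
import numpy as np
import shap

explainer = shap.TreeExplainer(model)
contributions = explainer.shap_values(applicant)[0]  # per-feature effects

# Hypothetical mapping from model features to notice-ready language.
reason_text = {
    "income": "Income insufficient for amount of credit requested",
    "debt_ratio": "Debt-to-income ratio too high",
    "credit_age_months": "Length of credit history too short",
}

# Features that pushed the score down, most negative first.
order = np.argsort(contributions)
reasons = [reason_text[features[i]] for i in order if contributions[i] < 0]
print(reasons[:2])   # the top reasons a notice might cite
```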
The Answer: Explainable AI
One major step banks can take to manage such risks and deflect regulatory criticism is implementing “explainable AI.” The emerging field of explainable AI (or XAI) can help banks navigate issues of transparency and trust and provide greater clarity on their AI governance, says Surkov.
XAI, broadly speaking, “is ultimately about demystifying the decision process of AI models and breaking them down into something us humans can follow and understand,” said Surkov. XAI is not a software application but a set of processes and methods.
Ultimately, XAI aims to make AI models more explainable, intuitive, and understandable to human users without sacrificing performance or prediction accuracy. Deloitte gives these four tips for how explainable AI should be understood and, eventually, implemented.
- XAI should facilitate an understanding of which variables or feature interactions impacted model predictions, and the steps a model has taken to reach a decision. (A minimal sketch of this point follows the list.)
- Explanations should provide information on a model’s strengths and weaknesses, as well as how it might behave in the future.
- Users should be able to understand explanations — they should be intuitive and presented according to the simplicity, technical knowledge, and vocabulary of the target audience.
- In addition to insights on model behavior, XAI processes should shed light on the ways in which outcomes will be used by an organization.
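As a sketch of the first point, the snippet below estimates which variables drive the hypothetical model’s predictions overall, using scikit-learn’s permutation importance; it reuses the model, X, y, and features defined earlier.

```python
# A minimal sketch of the first tip: measuring which variables most
# influence predictions, via scikit-learn's permutation importance.
# Reuses the hypothetical `model`, `X`, `y`, and `features` from above.
from sklearn.inspection import permutation_importance

result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, drop in sorted(zip(features, result.importances_mean),
                         key=lambda pair: -pair[1]):
    print(f"{name:>20}: {drop:.3f}")   # larger drop = more influential
```

Permutation importance answers only the “which variables” question at the model level; per-decision explanations, like the adverse-action sketch above, call for local attribution methods.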
How Banks Can Effectively Deploy Explainable Methods
The discipline of XAI is relatively new, and banks and credit unions will need to develop a plan to implement it alongside their AI models, says Surkov. This will likely mean introducing new policies and methods, from the pre-modeling stages to post-deployment monitoring and evaluation, and it will also require every employee or vendor who contributes to AI model development to act deliberately with each decision they make.
The first step toward explainable AI is robust governance. Banks and credit unions need to know what their AI objectives are and how they pertain to regulatory issues and the institution’s own ethical and business objectives. “This may sound like table stakes, but these table stakes are not always implemented at the start,” Surkov said. “In many institutions a lot of the AI model building is decentralized.”
The Right Recipe:
Banks need to 'bake in' explainability from the start when creating their AI models, and build controls around factors such as risk appetite and privacy.
After that step, banks and credit unions need to create controls around considerations such as their appetite for risk, fairness, robustness, and preservation of privacy.
Furthermore, banks need to build considerations of risk, including explainability, into AI models from the start. It should not be an afterthought, Surkov recommends.
“You don’t want to have to explain after the fact why an AI model is making certain decisions; it may be too late at that point,” he adds. “Banks should build this in at the model development stage before it moves into production.”
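What “building this in” could look like in practice: below is a hedged sketch of a pre-production promotion gate that fails a candidate model when attributions cannot be generated or when a single feature dominates them. The threshold and function name are illustrative assumptions, not a prescribed standard.

```python
# A hedged sketch of building explainability in before production: a
# promotion gate that fails when a model's attributions can't be produced
# or a single feature dominates. Threshold and names are illustrative.
import numpy as np
import shap

def explainability_gate(model, X_sample, features, max_share=0.8):
    """Return (ok, message) for a pre-deployment explainability check."""
    try:
        attributions = shap.TreeExplainer(model).shap_values(X_sample)
    except Exception as exc:
        return False, f"could not explain model: {exc}"
    mean_abs = np.abs(attributions).mean(axis=0)   # average effect size
    top_share = mean_abs.max() / mean_abs.sum()
    if top_share > max_share:
        top = features[int(mean_abs.argmax())]
        return False, f"'{top}' dominates attributions ({top_share:.0%})"
    return True, "explainability check passed"

ok, message = explainability_gate(model, X[:200], features)
print(ok, message)   # wire this into the model-promotion pipeline
```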
There are technology tools to help with this, including tools available from the primary AI vendors banks work with, as well as additional technology solutions that “specialize in injecting explainability.”
Become Active in the XAI Development Community
The Deloitte report also advises banks to engage in ongoing education around XAI principles — and not just internally.
Financial institutions should consider partnerships with think tanks, universities and research institutions, which bring together credit providers from around the world to develop common guidelines, the report suggests.
“In addition, it’s important for banks to be active participants in conferences and workshops that cover emerging XAI topics and collaborate on research that can drive the field and its practical applications forward,” the report continues. “They can also push vendors to continue making prepackaged models more explainable, so they can more easily adopt third-party solutions.”