Banks: Be Prepared to Explain ‘Explainability’ of AI

The increasing use of AI in evaluating and extending credit is reanimating long-standing concerns about fairness and transparency in lending. All too often, AI decision-making can resemble a "black box," frustrating borrowers and regulators alike. The CFPB recently highlighted the need for "explainability" in lending decisions – but what does that mean in practice?

The banking industry’s record of discrimination in loans and credit has not been pretty.

Repeated studies have found that Black and Latino Americans are much more likely to be unbanked or underbanked than white Americans, and that these populations continue to express distrust in banks and financial institutions.

Today, it is assumed that most respectable financial institutions do not intentionally discriminate against consumers based on race, gender, religion, etc. — and such discrimination has been illegal for half a century under the Equal Credit Opportunity Act.

In one sense, any gaps actually represent missed opportunities for banks and credit unions to reach these communities. Nonetheless, financial models today are so complex that it is possible to generate discriminatory outcomes without anyone intending to do so. And so the gaps also represent potential liability for banks unprepared to navigate the regulatory minefield.

Laura Kornhauser knows this from personal experience. A few years ago, while she had a good-paying job at JPMorgan Chase, she applied for a Chase credit card … and was turned down. She admits that she didn’t even need the credit card; she was taken in by the points offering. Getting rejected “was shocking to me,” she recalls.

“It opened my eyes to the inefficiencies and inequities of the way loan decisions are made.”

The Beginning of an ‘Explainability’ Firm

Kornhauser responded by founding her own company, the New York-based Stratyfy, which advises financial firms on decision making. One common way for banks to get tripped up revolves around the idea of “explainability.” Regulators have made it abundantly clear that a financial institution must be able to provide a clear explanation of why a given consumer was denied credit or a loan. This has become a particular challenge in the age of artificial intelligence; banks may well be using multivariable models that even the CEO can’t readily explain, much less the customer service staff.

In September, the Consumer Financial Protection Bureau (CFPB) issued an unambiguous report about the legal requirements to maintain explainability in the face of AI’s complexity.

“Technology marketed as artificial intelligence is expanding the data used for lending decisions, and also growing the list of potential reasons for why credit is denied,” CFPB director Rohit Chopra said in a statement. “Creditors must be able to specifically explain their reasons for denial. There is no special exemption for artificial intelligence.”


Specifically, the CFPB put lenders on notice that they cannot use standardized forms to misrepresent the explanation for an adverse action. For years, lenders have used generic, government-issued checklists to explain to consumers why they were turned down for credit or a loan. These checklists focus on criteria that have been around for decades: insufficient income, lack of credit history, low credit score, etc.

But today’s lending and credit decisions may take into account dozens, even hundreds, of variables. As the CFPB put it: “These complex algorithms sometimes rely on data that are harvested from consumer surveillance or data not typically found in a consumer’s credit file or credit application.” The CFPB frowns on such data, which often have no direct relationship to decisions about credit.

But banning such data outright isn’t practical, and so the bureau hopes that forcing lenders to explain their methods to customers who are turned down will act as a deterrent to “black box” approaches. (And it’s questionable how effective black box techniques are to begin with; Kornhauser says the indiscriminate use of machine learning is “why machine learning technology has not delivered the value that it promises or the adoption that it promises in this use case.”)


Whether or not such deterrence actually works is difficult to measure; there is little evidence of widespread violations of the explainability principle. Still, the business of determining creditworthiness is complicated and constantly evolving, and there are incentives for various players to cut corners. In 2021, for example, the Securities and Exchange Commission (SEC) charged the alternative data provider App Annie with securities fraud, asserting that the company and its former CEO misrepresented how it used consumer data to build statistical models.

The SEC levied a $10 million fine.

Tips for AI ‘Explainability’ Success

That enforcement of explainability has to date been fairly light underscores that U.S. regulators hope to encourage AI experimentation. But that may change once the use of the technology — and the inevitable mistakes — becomes more widespread. What steps can financial institutions take now to reduce the risk of unknowingly violating explainability requirements?

Use models that are as transparent as possible. There is an entire branch of AI known as “explainable” AI (XAI), which seeks to make the actions of AI intelligible to humans. Deploying these methods should help a lender’s team better understand how AI decisions are made and on what criteria. “This allows financial institutions to remain in control of the model while also making it easier for all stakeholders to explain how the model arrived at a specific prediction or decision,” says Kornhauser.
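To make the idea concrete, here is a minimal sketch (a generic illustration, not any particular vendor’s method) of how an inherently interpretable model — in this case a scikit-learn logistic regression trained on hypothetical applicant data — can be translated into the specific reasons for denial that an adverse action notice requires. The feature names and values are invented for illustration.

```python
# Minimal sketch: an interpretable credit model whose per-feature contributions
# can be read off directly and turned into specific "reasons for denial."
# Feature names, data and values are hypothetical, for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
feature_names = ["debt_to_income", "credit_utilization", "years_of_history"]

# Hypothetical training data: 1,000 past applications and approve/deny outcomes.
X = rng.normal(size=(1000, 3))
y = (X @ np.array([-1.5, -1.0, 0.8]) + rng.normal(size=1000) > 0).astype(int)

model = LogisticRegression().fit(X, y)

def reason_codes(applicant, top_n=2):
    """Rank the features that pushed this applicant's score down the most."""
    contributions = model.coef_[0] * (applicant - X.mean(axis=0))
    order = np.argsort(contributions)  # most negative contribution first
    return [feature_names[i] for i in order[:top_n]]

denied_applicant = np.array([2.1, 1.4, -0.5])  # hypothetical applicant values
print(reason_codes(denied_applicant))
# e.g. ['debt_to_income', 'credit_utilization'] -> candidate adverse-action reasons
```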

Keep the garbage out. Even after years of studies showing that AI can too easily reproduce biases in underlying datasets, stale data and antiquated evaluation methods continue to trip up lenders. Robust data governance will not only help root out such biases, but should also improve explainability.
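One routine check that data governance teams often run is the “four-fifths” adverse impact ratio, which compares approval rates across groups. The sketch below uses entirely made-up group labels and outcomes; it illustrates the calculation, not any particular institution’s process.

```python
# Rough illustration of one routine fair-lending data check: the "four-fifths"
# adverse impact ratio, comparing approval rates across groups.
# Group labels and outcomes are invented for illustration.
import pandas as pd

decisions = pd.DataFrame({
    "group":    ["A"] * 500 + ["B"] * 500,
    "approved": [1] * 400 + [0] * 100 + [1] * 300 + [0] * 200,
})

rates = decisions.groupby("group")["approved"].mean()
impact_ratio = rates.min() / rates.max()

print(rates.to_dict())         # {'A': 0.8, 'B': 0.6}
print(round(impact_ratio, 2))  # 0.75 -- below the 0.8 rule of thumb, flag for review
```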

Ask for help. The CFPB has published a number of resources on this topic, and also created “sandbox” programs to help keep lenders in compliance.

Train yourself, and the team. The leadership at any lender using AI to make credit decisions ought to have a working explanation for how those decisions are reached. But equally important, the customer-facing team should have accurate, easy-to-understand ways of communicating with applicants who’ve been rejected.


Bring in experts. Many banks and credit unions lack the large data science team needed to handle the complexity of these issues. But there are plenty of consultants who can work with lenders to critique existing systems and, where necessary, build new ones.

Monitor and regularly update models. “The credit market changes,” explains Kornhauser simply. “The dynamic nature of this problem means that you can’t set it and forget it.”

She believes that “regularly monitoring and updating models and decisioning strategies as needed ensures these models remain accurate amidst changing market conditions and compliant with evolving regulations.”
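In practice, such monitoring is often operationalized with simple drift statistics. The sketch below computes a population stability index (PSI), a common credit-risk monitoring metric, on hypothetical score distributions; the data and the 0.25 rule of thumb are illustrative only, not a regulatory standard.

```python
# Minimal sketch of ongoing model monitoring: a population stability index (PSI)
# comparing the score distribution at model development against recent applicants.
# Data and thresholds are hypothetical, for illustration only.
import numpy as np

def psi(expected, actual, bins=10):
    """Population stability index between two score distributions."""
    cuts = np.quantile(expected, np.linspace(0, 1, bins + 1))
    e_counts = np.histogram(np.clip(expected, cuts[0], cuts[-1]), bins=cuts)[0]
    a_counts = np.histogram(np.clip(actual, cuts[0], cuts[-1]), bins=cuts)[0]
    e_pct = e_counts / len(expected) + 1e-6
    a_pct = a_counts / len(actual) + 1e-6
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(1)
baseline_scores = rng.normal(650, 50, 10_000)  # scores the model was built on
recent_scores = rng.normal(630, 60, 2_000)     # scores on recent applicants

drift = psi(baseline_scores, recent_scores)
print(round(drift, 3))  # common rule of thumb: above ~0.25, the model needs review
```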
