What Do FDIC Examiners Think About AI? To Find Out, I Asked One

How are bank examiners incorporating issues around the use of AI into their work? The good news is that their interest in AI is consistent with the agency’s longstanding focus on risk management and compliance. But that doesn’t mean you can skip preparing to answer the specific concerns that AI use can raise.

Recently, I had the opportunity to meet with an FDIC examiner to discuss generative artificial intelligence usage. I haven’t heard of any other banker having a deep one-on-one with an examiner on actual AI usage, so I am sharing key insights from that conversation to help you develop policies and training plans, navigate risk and make informed decisions about AI at your institution.

Takeaway #1: Currently, the FDIC does not permit its staff to use ChatGPT or similar tools.

The FDIC’s perspective on AI is evolving, much like it is for everyone else in the banking industry. During my conversation with the examiner, I learned how examiners approach AI at this stage:

• Limited hands-on experience: FDIC examiners do not use generative AI tools themselves, so their understanding remains largely theoretical, shaped by other users and by reporting. This matters because the lack of hands-on experience can leave a gap between knowing about the technology and appreciating its transformative potential.

• Understanding the basics: The examiners do have a good grasp of the foundational aspects of AI, but the real challenge lies in fully comprehending its potential and practical applications. There’s no substitute for being hands-on to appreciate the difference between knowing about AI and understanding how it can be used to transform processes, improve efficiency and manage risks in everyday banking operations.

• Risk management focus: The least surprising reminder was that their role is to manage risk in the banking system. They want to ensure that AI is being used responsibly and with appropriate safeguards, recognizing that no activity is entirely risk-free. When discussing AI with examiners, it is crucial to articulate how your institution is managing and mitigating any risks associated with AI adoption.

AI use is new and rapidly evolving, and the examiners clearly understood that we’re in the early stages of managing this technology. They were thoughtful, inquisitive, and genuinely interested in how we’re using it and how we’re managing risks. It was a very constructive conversation.

Takeaway #2: Your most likely first step into generative AI is ChatGPT or Microsoft Copilot, which means you will have a human in the loop working with AI and taking responsibility for the output.

Explainability is a key concept when using AI in banking, and it involves ensuring that a human is always part of the decision-making process. Most FIs maintain a human in the loop for AI tools, meaning that AI systems do not make final decisions autonomously. A human stays involved throughout the process and reviews all AI-generated content to ensure it meets accuracy, compliance and regulatory standards before it is shared with clients. This is an important point to reinforce, and to state clearly, when talking to an examiner.

Emphasizing human accountability is important. View AI as a tool that complements human efforts rather than replacing them. Integrating AI policies into everyday operations, such as IT usage and marketing, is crucial for responsible AI adoption, and examiners will want to see the documentation. Reinforcing that AI-generated content flows through traditional review channels before use emphasizes a responsible, human-in-the-loop approach.

Throughout the conversation, I consistently highlighted responsible AI use. Their questions were easy to address by referring to our policies, training and human-in-the-loop processes, which demonstrated accountability for outputs. Importantly, clients never directly interacted with generative AI.

Takeaway #3: Setting clear policies and expectations is crucial to the responsible use of AI within your FI.

First and foremost, there should be defined IT usage policies for AI, even if your bank hasn’t officially embraced it. It is critical to avoid situations where an employee uses AI without clear guidelines, only to later mention it casually to an examiner, potentially escalating the issue.

It’s important to acknowledge that AI use is happening informally in many organizations. By aligning expectations with this reality, you can ensure a more consistent and transparent approach to AI adoption. Your AI policy should address AI usage holistically rather than focusing on specific tools or platforms, unless you’re explicitly allowing private data to be used in a particular tool. Stay ahead of potential issues, even if that means making a clear statement that AI tools like ChatGPT, Microsoft Copilot or Google’s Gemini are not allowed. Clear expectations will create accountability.

Employees need to be informed about what constitutes acceptable AI usage, and examiners should not be caught off guard by unauthorized AI activities. Training plays a significant role here. Providing AI training, similar to other mandatory programs like OFAC, privacy and fair lending, will help reinforce best practices and standards.

Another vital aspect is setting explicit expectations for the handling of private information. Your IT policies already address this topic; embedding AI into those policies removes ambiguity around a new technology. Don’t assume an employee recognizes that some AI tools do not safeguard private information.


Takeaway #4: You already have a foundation for the safe, responsible use of AI, and it can support more dynamic use in the future.

When it comes to privacy, managing AI within an organization is very much about applying the same expectations that you have for any other technology that interacts with private or sensitive information. This is the language an examiner can understand and appreciate. You have transferable policies which can be applied to AI and communicated to your employees.

Another issue to consider is the practical challenge of restricting AI access. It is virtually impossible to block access to every AI tool, given the proliferation of these technologies and the fact that employees may use personal devices outside of work. Completely banning AI use without context is impractical and counterproductive, particularly given the ease of accessing these tools at home. Talking through your education and policies with examiners will help minimize these concerns.

Training is essential, and examiners want to know that you are educating employees on the topics that matter for managing risk. Employees need to be trained on how to use AI tools effectively, understand potential risks, and ensure data privacy compliance. Training helps in several ways:

• Understanding AI tools: Employees learn how to use AI tools effectively and responsibly, reducing operational risks.

• Risk awareness: Training raises awareness of the potential risks associated with AI, especially related to data privacy, bias and fact checking.

• Compliance reinforcement: Maintaining a human in the loop who is responsible for outputs and for compliance adherence improves your risk profile.

By focusing on policies, training, and ongoing dialogue about AI use, you can make strides in ensuring that your financial institution manages the risks associated with AI while leveraging its benefits. Demonstrating this level of preparation to examiners will underscore your institution’s commitment to responsible AI use and effective risk management.

Takeaway #5: Know your regulatory and government agency AI documents.

During my conversation, the examiner asked if I was familiar with the National Institute of Standards and Technology’s (NIST) AI guidance. We both laughed when I showed him the annotated copy sitting on my desk. NIST published Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile, which is very instructive on AI usage. A working knowledge of the AI guidance from NIST, the Treasury Department, the Consumer Financial Protection Bureau and other government and regulatory agencies will give you confidence that you’re positioned correctly while demonstrating your commitment to responsible use.

The Financial Brand article “AI Now: Real-world Lessons to Enhance Your Marketing, Immediately and Safely” provides additional guidance and resources.

The opportunity to discuss AI with an FDIC examiner provided valuable insights into what regulators are currently concerned with and how they view the role of AI in the banking sector. The key to navigating these conversations effectively lies in being prepared, proactive and transparent. By establishing clear policies, providing thorough training and maintaining a strong focus on risk management, your institution will be well-positioned to leverage generative AI while addressing regulatory concerns.

Whether your financial institution is fully embracing AI or cautiously exploring its potential, staying ahead of the curve is essential. Ensuring that examiners see a well-thought-out approach, complete with robust policies, human oversight and a commitment to responsible use, will not only make the examination process smoother but also build a strong foundation for the responsible adoption of this transformative and evolving technology.
