Everyday AI for everyone is here, and getting started is much easier than you think. Lost in the AI hype is a simple fact: meaningful AI usage is available to you today, unlocking real gains in productivity, creativity, and skill improvement. By thinking critically about the current regulatory environment, creating a policy with guardrails, building a pilot team, and embedding AI into current procedures, you can get started today.
Many banks and credit unions are getting lost in the hype of AI, thinking about big-picture solutions, moon shots, and large investments of time, money, or resources. Most of those AI solutions are simply out of reach today. But everyday AI, using tools like ChatGPT, Microsoft Copilot, and Google Gemini, can immediately give your employees economical ways to work better and smarter, affecting your bottom line and client experience. These are solutions you should be piloting while encouraging responsible usage. A well-defined roadmap is not hard to create and will pave the way for a successful AI journey.
You can get started quickly by appreciating the current regulatory landscape, focusing on the importance of human involvement, and outlining basic steps for successful everyday AI usage. Kicking off a pilot team to find use cases relevant to your financial institution, as well as to each role, doesn’t take a lot of effort. You already have a foundation in place that can easily be updated for AI.
Regulatory Snapshot and Keeping Humans-in-the-Loop
Detailed AI regulations have not been definitively issued, but work is under way to understand the opportunities and risks of this emerging technology. There are excellent sources that can help financial institutions make responsible decisions as they begin to apply AI. Some examples include:
- The Biden Administration released Executive Order (EO) 14110 on Safe, Secure, and Trustworthy Development of Artificial Intelligence on October 30, 2023. The EO gave government agencies greater direction, requiring them to assess how AI may impact the industries they oversee.
- The Treasury Department issued a Request for Information seeking feedback on the use of AI in financial services, focusing on applications like fraud detection and personalized products. Its goal is to understand both the opportunities AI presents, such as improving efficiency and decision-making, and the risks, including privacy concerns, algorithmic bias, and financial instability.
- The Bipartisan Working Group on AI Staff Report outlines key insights into the use of AI in financial services, housing, and related sectors. The report highlights how AI, including machine learning (ML), is used for fraud detection, underwriting, customer service, and market surveillance.
- The National Institute of Standards and Technology’s AI 600-1, Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile, is a guideline developed in response to Executive Order 14110 to help organizations manage risks associated with AI. It outlines the risks unique to or exacerbated by generative AI, such as false content, data privacy concerns, and algorithmic bias.
There are many additional documents and publications, but these are starting points to help bankers understand how the government is thinking through the practical risks and applications of AI. There are obvious themes related to safety and soundness for our industry, but in many cases financial institutions already have a foundation in place to address potential concerns. It’s clear that the government supports this developing technology and understands AI will be transformative, in use by every bank and credit union.
A key concept to think through is “explainability.” Can you explain the outcomes of AI when it directly impacts or communicates with clients or prospects? For those familiar with generative AI (ChatGPT and Microsoft Copilot, as examples), it’s clear that it often operates as a “black box.” These tools provide outputs without a transparent roadmap or details leading to outcomes or decisions. This lack of transparency is problematic in a highly regulated industry. Discussing a lack of transparency in deposit or loan rate decisions is not a conversation anyone in the industry wants to have with their regulator.
At this stage of generative AI’s growth, nearly all financial institutions are unlikely to create AI solutions that make rate, risk, or other direct client or prospect decisions. You will not have an explainability problem with everyday AI usage, especially when using established policies, procedures, and workflows.
In contrast to the “black box” nature of generative AI, other forms like Robotic Process Automation (RPA) and machine learning (ML) offer greater explainability. RPA, essentially a human-programmed process, allows for easy testing and back-testing to ensure intended outcomes. ML, while more advanced, provides insights and probabilities behind its outcomes, facilitating testing and verification. Both RPA and ML offer significantly higher levels of explainability in their processes and results than generative AI. This clarity is crucial for risk management and regulatory oversight.
RPA and ML provide stronger evidence of how decisions are made and how outcomes are reached, which is a necessity in the heavily regulated financial industry. It is important to ensure you can demonstrate clear awareness and knowledge of how RPA and ML outcomes happen, whether you create these processes or a third party does. You will be held accountable for the outcomes even if your vendor does the work.
The concept of a human-in-the-loop is vital to understanding how to responsibly move forward and begin using generative AI. It means that a human, one of your employees, is actively engaged in creating, iterating, and improving AI-created content. By keeping a human in the loop, you ensure that you are taking responsibility for your output and that it complies with legal, regulatory, and brand standards. Whether a human or AI created your content, AI is never the last step.
It’s unlikely that most financial institutions will implement self-built AI solutions, or solutions directly engaging with clients and prospects without a human in the loop, especially solutions that focus on lending, rates, or other highly regulated and scrutinized topics. Banks and credit unions should focus their efforts on everyday AI solutions that have a meaningful impact on productivity, creativity, and skill-based gains. When there is a human in the loop, you can demonstrate to regulators a level of accountability no different from human-only created outputs.
Building Foundational Policies and Procedures
The initial step toward integrating everyday AI within your financial institution involves updating relevant policies. Most likely, you already have an IT usage policy in place directing employees to refrain from uploading Personally Identifiable Information (PII), to use tools responsibly and ethically, and to avoid unapproved solutions. Existing policies form a solid foundation for responsible AI usage, making it relatively easy to incorporate AI-specific guidelines. The key lies in being explicit about the do’s and don’ts, and clearly defining the guardrails for your employees.
Though a standalone AI usage policy could be considered, the future of AI points towards its widespread integration across numerous solutions and roles within the institution. Consequently, it’s more likely you’ll incorporate acceptable AI usage into multiple policies. For instance, you might have a marketing policy that addresses issues like bias and compliance with lending laws and regulations. While AI might seem implicitly covered under this policy, explicit inclusion is beneficial.
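As an illustration, a prompt along these lines can draft that explicit language for you (the wording is hypothetical; adapt it to your own policy and review it with compliance):
Prompt: Act as a compliance director at a financial institution. Review my marketing policy and add a section on acceptable AI usage. Address bias review, compliance with lending laws and regulations, and the requirement that a human reviews and approves all AI-assisted content before publication.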
Creating an AI usage policy isn’t difficult. A simple prompt like the following can get you started:
Prompt: Act as a compliance director at a financial institution. The objective is to write a policy on acceptable AI usage. Use a tone that is direct and helpful for an audience of employees. Make it clear that only IT and Risk approved AI solutions are used. Format with sections written in paragraph form on these topics. 1) Privacy and not uploading PII into AI solutions unless explicitly approved. 2) Bias and that all output used should be read and edited to ensure that bias is corrected in final output. 3) Fact checking, all facts should be fact checked and cited. 4) Hallucinations do happen, and your editing process will correct issues. Use keywords that reflect AI is part of the process and users need to take ownership of the output.
The output generated from this prompt can serve as a first draft. You can further refine it by uploading your IT Usage Policy and asking AI to integrate it.
Prompt: I need you to take my model AI Usage Policy and integrate it into the IT Usage Policy so I have one cohesive policy.
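From there, refinement is conversational. Follow-up prompts like this one (a hypothetical example; your priorities will differ) keep shaping the draft until it fits your institution:
Prompt: Tighten the privacy section, and add a requirement that employees complete annual training on responsible AI usage before receiving access to approved tools.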
This process of leveraging AI to create an AI policy, or to update your traditional policies, is a straightforward way to demonstrate the productivity gains AI offers. Updating your policies to account for AI also reinforces the human-in-the-loop principle.
Julie Redfern, Lake Ridge Bank Chief Banking Officer, says: “When we use AI, our final output follows the same process and policies as if it were created by a human. That means marketing content is reviewed by a subject matter expert, marketing, and compliance. Internal documents are also reviewed by a subject matter expert. AI is never the last step. AI helps us work better, it doesn’t do all the work for us.”
Julie’s insights remind us that existing policies and procedures can be effectively updated for everyday AI usage while avoiding many common pitfalls. Focusing on the human-in-the-loop means it’s clear to employees they are taking responsibility for any AI output. The goal is to foster a culture of responsible AI usage, ensuring that your bank or credit union benefits from the technology while mitigating potential risks.
Fostering Innovation and Responsible Use to Transform Work
The initial step into AI for most financial institutions typically involves forming a pilot group. This group should represent a cross-section of your institution; the application of AI varies greatly across roles and individuals. Each person will interact with it and utilize it differently to enhance their productivity, creativity, and skillset.
A common mistake executive teams make is issuing an IT policy approving AI and then assuming organization-wide adoption, or pushing out institution-wide use cases. The executive team’s role should be to set guidelines and guardrails, then get out of the way while fostering the sharing of successes and use cases. AI is most effective when it helps each employee find ways to be more productive, more creative, or more skilled.
There are some common barriers to effective growth with AI. Think critically about these points to ensure you’re fostering a growth mindset, promoting usage and success so employees improve collectively.
• Trust in Employees: There is often a reluctance to trust employees with AI, which is surprising considering these same employees are already entrusted with handling highly sensitive information.
• Blocking ChatGPT: Attempts to block or shut down ChatGPT are futile; employees will find alternative AI solutions like Copilot or Gemini, or use AI on their own devices. Without established guardrails, enterprising employees will continue to utilize these tools regardless.
• Realistic Expectations: It’s crucial to be realistic about AI usage. Top performers naturally seek ways to further enhance their performance, while less motivated employees may use AI to work less. Without clear expectations, you’re missing out on collectively improving how your employees work.
Training: A Key Component
Providing training is essential for two primary reasons:
1. Regulatory compliance: Regulators will inquire about how you’re training your team on responsible AI usage. While explicit regulatory expectations might be lacking, the existence of such training is crucial. This principle applies to any risk-based topic within your financial institution, whether it’s CIP, fair lending, or OFAC. Apply this principle to AI.
2. Skill development: AI is a new type of software, unlike anything previously encountered. To unlock transformative gains, employees need hands-on experience with AI. It’s analogous to understanding a person’s personality—it takes time and interaction. Providing opportunities to build skills and share results is vital for successful AI adoption.
Julie Zeimet, Lake Ridge Bank VP of Digital Banking and Payments, shares: “It took me a few months to truly figure out how to interact with AI. At first, I treated it like a search engine. When I finally had a mind shift and treated it like a person, that’s when it clicked, and I saw amazing results. I’d simply ask it to do something, then have a conversation with it to refine each output. Literally just like collaborating with a coworker.”
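A hypothetical exchange (the scenario is invented for illustration) shows what treating AI like a coworker looks like in practice:
Prompt: Draft a short email announcing our new Saturday drive-up hours to clients.
Follow-up: Good start, but warm up the tone and add a sentence telling clients who to call with questions.
Follow-up: Now cut it to three sentences and remove any banking jargon.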
Jumpstarting the use of AI for transformative outcomes requires time, a dedicated pilot team utilizing AI daily, and a mindset unburdened by traditional banking’s black-and-white thinking. Employees using AI smartly and responsibly will uncover truly transformative opportunities. There’s no instruction manual; your team needs to write it, propelling your financial institution into the future.
Addressing Landmines and Fostering Success
Achieving success with AI takes time and a willingness to learn its intricacies. Daily usage is the quickest path to improvement, but be mindful of potential pitfalls before you start your AI journey.
Julie Redfern notes, “When we started to train our associates in a hands-on lab, it became immediately apparent that each associate quickly finds ways to apply AI to routine tasks. It told us that we had to set guidelines, but then get out of the way and let associates learn and share all the creative ways they can work better with AI. This encouraged associates to start with AI and use AI every day.”
Landmine 1: Overreliance
While AI is remarkable, it’s crucial to remember that a human must always remain in the loop. All AI-generated output should be thoroughly edited, reviewed, and fact-checked to ensure it’s appropriate for your needs. Avoid becoming overly reliant on AI; maintain a healthy balance between human judgment and machine assistance.
Landmine 2: Lack of Clear Expectations
Without well-defined AI guardrails, expectations, and policies, employees may resort to using AI in the shadows. This doesn’t imply irresponsible use, but rather a lack of transparency that hinders knowledge-sharing and collective growth. Assume AI is already being used at your institution and encourage those users to come forward and share their knowledge.
Landmine 3: Individualistic Approach
While AI usage often centers on individual productivity, institution-wide gains in productivity, creativity, and skill development stem from understanding AI’s potential and sharing how it applies to everyday tasks. Everyday AI starts at each employee’s desk, but it shouldn’t end there. The executive team should set boundaries and encourage responsible AI use through promotion, recognition, and knowledge-sharing.
The Way Forward
The prevailing sentiment is that AI won’t replace employees, but employees using AI will replace those who don’t. By establishing policies, guidelines, and a clear roadmap, you can quickly realize significant gains from daily AI usage. Maintain a human in the loop, leverage your professional skills for content creation and review, and foster an environment where AI is seen as a tool for empowerment and growth. You are closer to implementing responsible, transformational usage than you think.