Why Banking’s AI Future Depends on Trust, Not Just Technology

By Jeff Whiteside, Sr. Director of Security, Risk Governance, Chime Financial, and Ajish Abraham, VP of Infrastructure and Security, iBusiness Funding

Published on September 24th, 2025 in Artificial Intelligence

Executive Summary

  • As AI models become more sophisticated, they are also becoming more unpredictable and vulnerable. Generative AI introduces entirely new attack vectors, including prompt injection, data leakage, synthetic identity creation and phishing at scale.
  • These are challenges that financial institutions must address through increased operational controls, governance structures and strategic oversight – and they must do so before trust, the industry’s foundational currency, is compromised.
  • In response, the Financial Services AI Council (FSAIC), an industry working group, is helping shape the Banking AI Control Standards (BAICS) – a purpose-built framework to help adopt AI securely, one that is specifically tailored to the regulatory, operational and risk management realities faced by banks and credit unions.

Financial institutions are racing to integrate artificial intelligence into everything from customer service to fraud detection, and the pace is outpacing the frameworks designed to manage it. Currently, over 80% of financial institutions leverage AI within operations to create efficiencies and deliver a more personalized experience, according to Jack Henry.

Despite its immense promise, AI introduces a complex new risk layer – one that current federal and state regulatory structures are not fully prepared to address, let alone govern. The banking industry operates under some of the strictest compliance and data integrity mandates of any sector – and justifiably so. In this environment, the reliability, transparency and auditability of AI systems aren’t nice-to-have features; they are regulatory and reputational imperatives.

The Urgent Need for Standards

Yet, as models become more sophisticated – especially with generative AI and large language models (LLMs) – they are also becoming more unpredictable. Generative AI introduces entirely new attack vectors, ranging from prompt injection and data leakage to synthetic identity creation and phishing at scale. Without clearly defined standards, the same technology that streamlines operations can expose institutions to reputational harm, financial losses, regulatory violations, system vulnerabilities and data breaches.

A rise in incidents underscores the urgency for financial institutions to put protections in place. Deepfake fraud targeting financial services rose 700% in a year, according to identity verification platform Sumsub. Deloitte’s Center for Financial Services estimated that generative AI-enabled fraud losses could grow from $12.3 billion in 2023 to $40 billion by 2027.

This is not a speculative risk. It is a challenge that financial institutions must address quickly – through tighter operational controls, governance structures and strategic oversight – before trust, the industry’s foundational currency, is compromised.

Adopting a Responsive and Adaptable Framework

In response to this growing need, the Financial Services AI Council (FSAIC), an industry working group, is helping shape the Banking AI Control Standards (BAICS) – a purpose-built framework to help the financial services industry adopt AI securely, responsibly and in compliance with regulatory expectations. Unlike general-purpose AI guidance and best practices, BAICS is specifically tailored to the regulatory, operational and risk management realities faced by banks, credit unions and companies within the financial services industry.

The framework organizes controls across eight key domains, including:

  • runtime and infrastructure security
  • data governance and transfer controls
  • access and authentication
  • prompt management
  • output risk management
  • model lifecycle management
  • feature security

Each domain is intended to ensure AI systems operate within a secure environment, that data is handled according to compliance expectations, and that AI-generated outputs are continuously monitored and validated for consistency and accuracy. Problems often span multiple domains, reflecting the reality that effective AI security requires both depth and breadth. No single safeguard can prevent every potential AI risk. Instead, a layered approach provides assurance that, if one defensive element fails, another is there to prevent catastrophe.

For example, blocking a threat actor from exploiting stolen credentials or compromised accounts requires stronger access and authentication measures, as well as runtime and infrastructure security controls. Role-based authorization ensures attackers can’t abuse accounts with overly broad permissions, for instance, while rate limiting can curb automated scraping.

A Shared Framework Delivers Efficiency

As technology changes at a rapid pace, BAICS is a security-first, industry-aligned commitment to help ensure AI delivers efficiencies, not vulnerabilities, to financial institutions. The framework aligns regulatory mandates with the practical realities of running AI systems in a bank and encourages institutions to share their knowledge and learnings. This not only gives financial institutions a proactive, accountable and standards-based path to responsible AI governance but also enables faster, more confident deployment of AI-driven services grounded in industry best practices.

It’s clear that new and enhanced security and governance protocols are necessary. Despite the relatively nascent nature of the technology, hackers are already using AI to target potential victims with greater force, speed and precision.

For example, as Anthropic recently disclosed, bad actors or rogue insiders are increasingly able to manipulate LLMs for nefarious purposes. Combating these emerging attack tactics requires controls like input validation and prompt filtering to strip suspicious instructions. Meanwhile, role-based prompt restrictions can prevent user-level overrides, limiting the attack surface for hackers. Anomaly detection can also continuously monitor for suspicious attempts to access sensitive information.
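A minimal sketch of input validation with role-based prompt restrictions might look like the following. The deny-list patterns and role names are assumptions for demonstration only; production controls would layer pattern checks like these with model-based classifiers and the anomaly detection described above.

```python
import re

# Illustrative deny-list of injection-style phrasings; a real control set
# would be far broader and combined with ML-based prompt classification.
INJECTION_PATTERNS = [
    re.compile(p, re.IGNORECASE)
    for p in (
        r"ignore (all )?(previous|prior) instructions",
        r"reveal .*system prompt",
        r"act as .*administrator",
    )
]

def screen_prompt(prompt: str, role: str) -> tuple[bool, str]:
    """Return (allowed, reason): block suspected injections and restrict
    override-style prompts to a privileged role (hypothetical policy)."""
    for pattern in INJECTION_PATTERNS:
        if pattern.search(prompt):
            return False, "suspected prompt injection"
    if role != "admin" and "override" in prompt.lower():
        return False, "override attempts require admin role"
    return True, "ok"
```

Blocked prompts can additionally be logged for the anomaly-detection layer, so repeated injection attempts from one account stand out.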

However, issues like model drift and degradation inject just as much risk. They can gradually lower the accuracy of AI agents in charge of critical tasks, like credit risk scoring, leading to operational and reputational harm – like denying someone credit when they’re actually eligible, or approving an overly risky loan. It’s why BAICS recommends continuous performance monitoring of AI systems, with regular retraining and recalibration, along with validation checks to prevent “silent” risk buildup.
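The continuous performance monitoring described here can be approximated with a rolling-accuracy check that flags when a model falls below its validated baseline. The baseline, tolerance and window values are illustrative; an institution would calibrate them per model.

```python
from collections import deque

class DriftMonitor:
    """Track a rolling window of prediction outcomes and flag when accuracy
    drops below a baseline by more than a tolerance - a simple stand-in for
    the continuous performance monitoring BAICS recommends."""

    def __init__(self, baseline: float, tolerance: float, window: int = 500):
        self.baseline = baseline      # accuracy established at validation time
        self.tolerance = tolerance    # allowed slack before alerting
        self.outcomes = deque(maxlen=window)  # 1 = correct, 0 = incorrect

    def record(self, correct: bool) -> None:
        self.outcomes.append(1 if correct else 0)

    def drifting(self) -> bool:
        if not self.outcomes:
            return False
        accuracy = sum(self.outcomes) / len(self.outcomes)
        return accuracy < self.baseline - self.tolerance
```

When `drifting()` fires, the model is routed to revalidation and recalibration rather than silently continuing to score applicants.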

Hallucinations remain a key challenge to scaling new AI systems as well. AI assistants might cite non-existent regulatory clauses or confidently give inaccurate or misleading advice. Robust prompt design adds a layer of safeguards, while retrieval augmentation is an effective way to ensure AI agents reference the most factual, up-to-date information. And if hallucinations persist, the AI systems may require retraining on more domain-specific data.
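Retrieval augmentation can be sketched as looking up vetted passages before generation and instructing the model to answer only from them. The toy word-overlap scoring and corpus below are stand-ins for a production vector search over an institution's approved knowledge base.

```python
def retrieve(query: str, corpus: dict[str, str], k: int = 2) -> list[str]:
    """Rank passages by word overlap with the query and return the top k IDs.
    A toy scorer; real systems use embedding similarity."""
    q_words = set(query.lower().split())
    scored = sorted(
        corpus.items(),
        key=lambda kv: len(q_words & set(kv[1].lower().split())),
        reverse=True,
    )
    return [doc_id for doc_id, _ in scored[:k]]

def build_grounded_prompt(query: str, corpus: dict[str, str]) -> str:
    """Assemble a prompt that constrains the model to retrieved context."""
    context = "\n".join(corpus[d] for d in retrieve(query, corpus))
    return (
        "Answer using only the context below; say 'not found' otherwise.\n"
        f"Context:\n{context}\nQuestion: {query}"
    )
```

Constraining the model to retrieved context reduces the chance it cites a regulatory clause that does not exist, because every claim can be traced back to a vetted passage.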

Just as financial institutions worked together to establish the Financial Services Sector Coordinating Council (FSSCC) for cybersecurity standards, and both the International Organization for Standardization (ISO) and the American Institute of Certified Public Accountants developed cloud security standards, a similar approach is emerging for AI. The framework shaped by FSAIC reflects an industry-led effort to establish shared norms that reduce duplicative effort, speed up deployment and bolster compliance readiness.

For AI to fulfill its high expectations and potential within the financial services industry, it must be built on a foundation of accountability, security and trust. Continuing with the current fragmented, ad hoc approach to AI governance will only undermine innovation and the confidence of regulators, shareholders and customers. Adopting a framework like BAICS is a critical first step – giving banks the structure they need to scale AI safely while protecting what matters most: their customers. The time is now for executive and IT leadership across the industry to unite around a common standard that advances both innovation and integrity while fostering deeper trust in the future of banking.

Disclaimer: Written by authors as members of the Financial Services AI Council (FSAIC); the views represented do not necessarily reflect those of their respective employers.

The Financial Brand is your premier destination for comprehensive insights in the financial services sector. With our in-depth articles, webinars, reports and research, we keep banking executives up-to-date with the latest trends, growth strategies, and technological advancements that are transforming the industry today.

© 2026 The Financial Brand. All rights reserved. The material on this site may not be reproduced, distributed, transmitted, cached or otherwise used, except with the prior written permission of The Financial Brand.