Your AI Technology Partner Could Be a Security Trojan Horse
By Charles Gorrivan and Victor Swezey, Contributors at The Financial Brand
Executive Summary
- The use of AI is proliferating rapidly among financial institutions, driven by the wide availability of SaaS and PaaS AI solutions that allow banks to deploy AI quickly, without the time and expense of in-house development.
- But in their rush to add AI capabilities to their offerings, many providers may not be sufficiently focused on the security vulnerabilities of their solutions. Those weak points become embedded in their customers’ own systems, undetected.
- To protect themselves, financial institutions must move beyond standard procurement practices and vendor selection to deeply vet the security posture of their outsourcing partners, and institute ongoing security checks once AI solutions are deployed.
Third-party software vendors have become critical infrastructure for the financial sector, but they are also an escalating source of cyber risk. As banks outsource everything from data storage to customer service, attackers are targeting the external providers whose platforms connect institutions across the industry.
This network of software-as-a-service (SaaS) and platform-as-a-service (PaaS) providers, which range from industry giants like Amazon and Microsoft to fast-moving startups, has allowed financial institutions to be more agile and adopt new capabilities. They have become especially critical as financial institutions rush to implement the latest tools powered by artificial intelligence.
But cybersecurity experts warn that this shift has created new points of vulnerability, particularly as AI features are quietly added into vendor platforms with limited oversight. In most cases, banks assess vendors during the procurement stage, relying on a “checkbox” approach of certifications and compliance questionnaires, according to Fabio Colombo, a cybersecurity leader in the consulting firm Accenture’s EMEA division.
Yet banks often fail to monitor how those platforms behave once embedded in their systems. The results: increasing complexity, decreasing visibility, and a widening attack surface.
Security leaders at financial institutions are increasingly sounding the alarm. In April, JPMorgan Chase chief information security officer Patrick Opet published an open letter warning that the software-as-a-service model underpinning modern banking is quietly eroding decades of security architecture and “weakening the global economic system” as it creates systemic risks with few safeguards.
“At JPMorganChase, we’ve seen the warning signs firsthand,” Opet wrote. “Over the past three years, our third-party providers experienced a number of incidents within their environments.”
One such incident, disclosed by JPMorgan last year, exposed the personal data of more than 451,800 customers, according to a filing with the Maine attorney general.
The more banks rely on third-party services, the less trust they can expect from customers, according to a 2024 study by Accenture. While 81% of consumers trust their main bank to keep their data safe, less than half trust outside technology providers, the study says.
“Usually, a global bank has thousands of third parties, and the problem doesn’t stop there,” says Colombo, one of the authors of the study. “You have the fourth party, the fifth party, the seventh party.”
While there has yet to be a high-profile breach tied specifically to artificial intelligence, many experts believe the risks are escalating fast. And when breaches do occur, the results could erode consumer trust in financial institutions. According to Accenture, 62% of customers have less confidence in their bank after a breach, while 43% stop engaging with their banks completely.
With the potential fallout of a cyberattack in mind, experts say banks should be focused on keeping their third-party security as robust as possible — before they are forced to face the reputational consequences.
Dig deeper:
- Fintech’s AI Obsession Is Useless Without Culture, Clarity and Control
- Risk Management, Not Regulation, Should Fuel AI Adoption in Banking
- Your Customers Are Already Deploying AI Agents. Are You Ready to Respond?
AI Complicates Old Rules
As generative AI changes the threat landscape and malicious actors become more advanced, banks may need to modify their monitoring procedures to keep up. The traditional vetting that banks perform at the procurement stage does not account for how those tools are used — or misused — after deployment. “It’s OK, but it’s not enough,” says Colombo.
One challenge for the financial sector may be that responsibility for third-party breaches is often misunderstood, according to Brian Soby, chief technology officer and co-founder of AppOmni, a security company focused on SaaS environments. A large-scale cyberattack last year involving Snowflake, a major cloud data platform, illustrates the problem. Hackers didn’t breach Snowflake itself. Instead, they used stolen usernames and passwords — many found on criminal forums — to access customer accounts that lacked basic protections like multifactor authentication. Reported victims included Santander, LendingTree, and AT&T.
“You need to worry about how your users are configuring” third-party systems, says Soby. “It’s the banking customer’s responsibility to configure the environment for their employees.”
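In practice, that responsibility can take the shape of a recurring audit rather than a one-time setup review. The Python sketch below shows the general idea; the `fetch_user_accounts` helper and its fields are hypothetical stand-ins for whatever user-inventory API a given SaaS platform actually exposes:

```python
# Minimal sketch of an MFA-coverage audit for a SaaS tenant.
# fetch_user_accounts() is a hypothetical stand-in for a platform's
# real user-inventory API (e.g., an admin REST endpoint).
from dataclasses import dataclass

@dataclass
class UserAccount:
    username: str
    mfa_enabled: bool
    last_login_days_ago: int

def fetch_user_accounts() -> list[UserAccount]:
    # In practice, this would call the SaaS provider's admin API.
    return [
        UserAccount("analyst1", mfa_enabled=True, last_login_days_ago=2),
        UserAccount("svc-report", mfa_enabled=False, last_login_days_ago=1),
        UserAccount("contractor9", mfa_enabled=False, last_login_days_ago=45),
    ]

def accounts_without_mfa(accounts: list[UserAccount]) -> list[UserAccount]:
    """Return accounts that can sign in without a second factor."""
    return [a for a in accounts if not a.mfa_enabled]

if __name__ == "__main__":
    for account in accounts_without_mfa(fetch_user_accounts()):
        print(f"WARNING: {account.username} has no MFA "
              f"(last login {account.last_login_days_ago} days ago)")
```

Run on a schedule, a check along these lines would flag exactly the kind of unprotected accounts the Snowflake attackers exploited.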
As banks embed more AI-driven tools into their workflows, the potential for misconfiguration or misuse only grows. Nearly every software platform — from infrastructure to customer service — is now rolling out embedded AI features. Often, these tools are added by default or introduced through free trials and small applications that never pass through formal review processes.
“There’s sprawl happening, because every single application that exists is trying to incorporate AI,” Soby said. Security teams may not even know which products now include AI, what those systems are capable of doing, or whether they’ve been configured securely.
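Detecting that sprawl does not require exotic tooling. As an illustrative sketch, a security team could snapshot a vendor tenant's approved configuration and compare it on a schedule against what the platform currently reports, flagging anything that has quietly switched on, such as an AI assistant. The `fetch_current_settings` helper and the setting names here are hypothetical:

```python
# Sketch of a configuration-drift check for a vendor tenant: compare
# current settings against an approved baseline and flag differences,
# such as an AI feature a vendor enabled by default in an update.
# fetch_current_settings() and the setting names are hypothetical.

APPROVED_BASELINE = {
    "mfa_required": True,
    "ai_assistant_enabled": False,
    "share_data_with_vendor_models": False,
}

def fetch_current_settings() -> dict:
    # In practice, this would pull from the vendor's admin/config API.
    return {
        "mfa_required": True,
        "ai_assistant_enabled": True,  # quietly switched on in an update
        "share_data_with_vendor_models": False,
    }

def detect_drift(baseline: dict, current: dict) -> dict:
    """Return settings whose current value differs from the baseline."""
    return {key: (baseline[key], current.get(key))
            for key in baseline if current.get(key) != baseline[key]}

for setting, (expected, actual) in detect_drift(
        APPROVED_BASELINE, fetch_current_settings()).items():
    print(f"DRIFT: {setting} expected {expected}, found {actual}")
```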
Even when banks attempt to limit AI, it’s not always possible. Chris Camacho, co-founder and chief operations officer of Abstract Security and a former cybersecurity executive at Bank of America, said some financial institutions have asked vendors to turn off AI features entirely — often without success. “You can’t just shut it off,” Camacho said.
One specific concern on the rise is AI agents — autonomous tools layered onto existing software that can perform real actions, like retrieving payroll data or initiating financial transactions. If not properly secured, these agents could be manipulated to take actions their creators never intended.
“I’ve seen customer service-related agents where you can basically Jedi-mind trick them,” Soby said. “If my seven-year-old was sitting there with credit cards and she could go buy stuff, that would be a huge problem, because anybody could trick her into buying them stuff.”
Those vulnerabilities aren’t easily addressed with traditional tools like firewalls. “We’re dealing with a different type of technology that’s non-deterministic,” Soby said. “It’s not hard and fast rules and configuration. That’s a lot harder.”
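One mitigation pattern, sketched below as an illustration rather than any vendor's actual product, is to wrap the non-deterministic model in a deterministic policy layer: whatever the agent is tricked into requesting, hard-coded rules decide what may actually execute. The tool names are hypothetical:

```python
# Illustrative sketch of a deterministic policy gate around an AI agent.
# The agent itself may be fooled by adversarial input; this gate
# enforces hard rules no matter what the model asks to do.
# Tool names here are hypothetical.

ALLOWED_TOOLS = {"lookup_balance", "open_support_ticket"}
HIGH_RISK_TOOLS = {"initiate_transfer", "update_payee"}

def gate_tool_call(tool: str, args: dict) -> str:
    """Decide whether a tool call requested by the agent may proceed."""
    if tool in ALLOWED_TOOLS:
        return "allow"
    if tool in HIGH_RISK_TOOLS:
        # Never let the model act alone on money movement:
        # queue the request for human review instead of executing it.
        return "require_human_approval"
    # Anything the agent was never meant to use is denied outright.
    return "deny"

# Example: a prompt-injected agent tries to move money.
print(gate_tool_call("initiate_transfer", {"amount": 9500, "to": "acct-123"}))
# -> require_human_approval
```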
Camacho pointed to data leakage as another growing risk, especially as employees begin feeding proprietary content or sensitive queries into AI tools.
Sometimes users don’t know that the information they’re sending might contain unreleased financial data, HR records, or trade-sensitive material, he said. Once that data enters an AI model — particularly one that learns across users — it’s difficult to control where it goes or how it’s used.
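A first line of defense against that kind of leakage is a deterministic screen on outbound prompts before they ever reach an external model. The patterns below are deliberately simplified examples, not a production data-loss-prevention ruleset:

```python
# Minimal sketch of a pre-submission screen for outbound AI prompts.
# The patterns are simplified illustrations, not a production DLP ruleset.
import re

SENSITIVE_PATTERNS = {
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "internal_marker": re.compile(r"(?i)\b(confidential|not for release)\b"),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive patterns found in the prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

prompt = "Summarize this CONFIDENTIAL earnings draft before Friday's release."
hits = screen_prompt(prompt)
if hits:
    print(f"Blocked: prompt matched {hits}; route to security review.")
else:
    print("Prompt cleared for the external model.")
```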
For both Soby and Camacho, the solution lies in continuous monitoring, not one-time reviews. Banks need tools that provide visibility into how third-party systems are using data and AI, and they need to know whether their internal users — or the platforms themselves — are exposing sensitive information.
Many tools on the market are leveraging generative AI themselves for risk evaluation, according to Brian Brown, head of CISO engagement at cybersecurity firm Trellix. “We are using AI today to evaluate AI,” Brown says, pointing to Trellix Wise, an extension of the company’s threat detection platform that helps automate the process of separating high-profile threats from false positives.
Other tech-forward financial institutions are building their AI capabilities in house to avoid the risks that come with relying on a third-party provider. “We built a national, digital-first bank instead of retrofitting a traditional model. That means we own the tech, the customer experience, the data infrastructure, and the risk controls,” said a spokesperson at neobank Varo. “This end-to-end ownership gives us an unmatched level of visibility, speed, and control over emerging cybersecurity threats, including risks related to generative AI.”
So far, there hasn’t been a headline-grabbing third-party AI breach. But security leaders say the moment is coming.
“You really need to look at it in terms of the worst-case scenario, because we know that will happen at some point,” said Soby. “That’s how banks need to prioritize what to look at, what they want to monitor, and how stringently they monitor it.”
But banks should not wait for a breach to start improving their communication with customers, to both build trust and train customers to avoid cyber risks, according to Brown. He says this communication could start with more transparency about a bank’s cybersecurity, including informing customers about the institution’s adherence to industry-standard practices.
“You have to achieve that delicate balance,” Brown said. “There has to be some concern about too much transparency exposing the bank to increased risk, versus being too closed off … and then leading to concerns of the black box mentality.”
