AI Arms Race: Banks and Fraudsters Battle for the Upper Hand

Deepfakes, spearphishing, FraudGPT: AI is accelerating the world of financial fraud at a dizzying pace. Can banks use AI to fight AI? And do they have the know-how to implement the solutions?

One of the stranger moments in the 2024 presidential primary process took place this week: Voters in New Hampshire received a “robocall” featuring President Joe Biden’s voice, discouraging them from voting in the primary, and using Biden’s pet phrase “What a bunch of malarkey.”

But Biden didn’t record the call; it appears to be a “deepfake” generated by artificial intelligence (AI). Authorities are investigating the call as a potentially illegal attempt to suppress votes.

It’s not clear how many voters received the call or were genuinely deceived by it, but the episode illustrates how sophisticated the world of AI fraud has become—and banks are an increasingly popular target. Last summer, The New York Times published a story featuring instances in which customers’ voices were synthesized in an attempt to get bank employees to transfer money.

The phenomenon is so new that it’s hard to determine how widespread it is, and synthesizing voices is only one way that AI can be applied to financial-sector fraud. Most experts believe AI-enabled schemes still account for a small fraction of the $8.8 billion lost annually to financial fraud. But there is no doubt that the field is growing.

A survey last year asked 500 security and risk officers at lending institutions whether “synthetic fraud” had increased over the previous 24 months: 100% agreed that it had. An eye-popping 87% of institutions admitted that they had extended credit to a synthetic customer.

What makes the AI-generated voice component so troubling is that it can be added to a synthetic customer profile that might include a Social Security number, a credit rating, and other go-to badges of authenticity.

In Congressional testimony last year, Jeffrey Brown of the Social Security Administration’s Office of the Inspector General said: “Synthetic identity theft is one of the most difficult forms of fraud to catch because fraudsters build good credit over a period of time using a fake profile before making fraudulent charges and abandoning the identity.” He cited a San Antonio-based bank that was targeted by a group of fraudsters who created 700 synthetic identities that were later used to siphon COVID relief money; at least two people were charged with federal crimes.

Moreover, once a fraud strategy has proven successful, AI can be used to replicate it at scale, and the tooling can even be sold to other fraudsters, a practice dubbed “cybercrime-as-a-service.”

Dig deeper: Why an Identity-Based Solution is Critical to Mitigate Bank Fraud

Turbocharging Older Forms of Fraud

Another recent AI development that keeps cybersecurity officers awake at night is FraudGPT. Like a “black-hat” version of its more benign cousin ChatGPT, FraudGPT can generate content for many kinds of malicious activity, from malware to convincing-sounding phishing emails.

In many ways, the development of FraudGPT (and a related program called WormGPT) represents AI taking existing fraud methods to a new level. Andrew Davies, global head of regulatory affairs at ComplyAdvantage, explains that “CEO fraud” has been around for a long time: an employee receives a “spearphishing” email purporting to be from the CEO and asking for a password or a wire transfer.

But what programs like FraudGPT can do is make such efforts “much more credible.”

Moreover, he said, AI can be used to gather credentials — including voice — to make the fraud harder to detect. And so while the fraud channel stays the same, FraudGPT “democratizes” fraud by making sophisticated tools available to those with limited programming skills. And while these malware services charge for their use, some online cybersecurity forums also report the proliferation of ChatGPT “jailbreaks,” which would allow users to bypass ChatGPT’s restrictions against financial crime.

Understanding the AI Arms Race

Few businesses, in banking and elsewhere, can say with confidence that they are prepared for cutting-edge, AI-driven fraud. And yet, like many technologies before it, AI creates a kind of arms race; many of AI’s capabilities can be used to fight fraud as well as enable it.

Most bank fraud detection involves identifying anomalous behavior: Does this requested transaction look like something this customer would typically do? At its simplest, this approach has been around for decades, such as when a gas station charge is flagged because it was made 2,000 miles from the customer’s home address.
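
As a rough illustration of that older, rule-based style of check, the sketch below flags a card charge made far from the cardholder’s home. The 2,000-mile threshold and the coordinates are hypothetical values chosen for the example, not any bank’s actual policy.

```python
# A minimal, hypothetical distance rule of the kind banks have used for decades.
# Threshold and coordinates are illustrative only.
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_MILES = 3959.0
DISTANCE_THRESHOLD_MILES = 2000.0

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in miles."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * EARTH_RADIUS_MILES * asin(sqrt(a))

def looks_suspicious(home, charge_location):
    """Flag a card-present charge that occurs far from the home address."""
    return haversine_miles(*home, *charge_location) > DISTANCE_THRESHOLD_MILES

home = (40.71, -74.01)          # customer's home, New York City
gas_station = (34.05, -118.24)  # charge location, Los Angeles

print(looks_suspicious(home, gas_station))  # True: roughly 2,400 miles from home
```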

What AI and machine learning make possible is a dramatic expansion of the data inputs used to measure what is anomalous. “The use of behavioral analytics and machine learning to find outliers continues to be really effective and is continuing to evolve,” Davies noted.
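
A minimal sketch of that outlier-finding approach, assuming scikit-learn and a toy set of transaction features (amount, hour of day, distance from home); real deployments draw on far richer behavioral signals than this illustration does.

```python
# Toy illustration of ML-based outlier detection over transaction features.
# The data and the contamination setting are made up for the example.
import numpy as np
from sklearn.ensemble import IsolationForest

# Columns: amount (USD), hour of day, miles from home address.
history = np.array([
    [42.0, 18, 3],
    [15.5, 12, 1],
    [60.0, 19, 5],
    [38.0, 17, 2],
    [22.0, 13, 4],
    [55.0, 20, 6],
])

model = IsolationForest(contamination=0.1, random_state=0).fit(history)

new_txns = np.array([
    [35.0, 18, 2],       # looks like this customer's normal behavior
    [4800.0, 3, 2100],   # large amount, 3 a.m., far from home
])

# predict() returns 1 for inliers and -1 for outliers.
print(model.predict(new_txns))  # e.g., [ 1 -1 ]
```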

Related to this is the AI-powered development of predictive modeling. AI can learn from past instances of fraud to predict the forms it is likely to take next, giving banks and financial institutions the opportunity to add layers of protection and authentication. Last year, Mastercard announced a predictive AI tool to prevent fraud in real-time payments, reporting that a pilot program with a UK bank saved £100 million.
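
In spirit, a predictive model of this kind is trained on transactions already labeled fraudulent or legitimate and then scores incoming ones, with high-risk payments routed to extra authentication. The sketch below uses scikit-learn’s gradient boosting on made-up data purely to show the pattern; it is not Mastercard’s tool, and the features and risk threshold are assumptions.

```python
# Toy illustration of supervised fraud prediction on labeled history.
# Data, features, and the 0.8 risk threshold are hypothetical.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# Columns: amount (USD), hour of day, payee seen before (1 = yes, 0 = no).
X = np.array([
    [40.0, 14, 1],
    [25.0, 10, 1],
    [3200.0, 2, 0],
    [18.0, 19, 1],
    [5100.0, 3, 0],
    [60.0, 12, 1],
])
y = np.array([0, 0, 1, 0, 1, 0])  # 1 = confirmed fraud

clf = GradientBoostingClassifier(random_state=0).fit(X, y)

incoming = np.array([[4700.0, 4, 0]])
fraud_probability = clf.predict_proba(incoming)[0, 1]

print(round(fraud_probability, 2))
if fraud_probability > 0.8:  # hypothetical risk cutoff
    print("hold payment for review / require step-up authentication")
```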

Davies said that AI’s ability to cluster data allows financial institutions to look at anomalies around not only an individual’s behavior, but that of a cohort — e.g., how much does this transaction deviate from the standard of what someone who lives in this neighborhood and has this income range would do?
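
One way to picture that cohort idea, sketched here with scikit-learn’s KMeans and invented customer data: group customers into peer clusters, then score a transaction against the spending norms of the customer’s own cluster rather than the population as a whole.

```python
# Toy illustration of cohort-based anomaly scoring. Features, cluster count,
# and the peer-group statistics are invented for the example.
import numpy as np
from sklearn.cluster import KMeans

# Per-customer features: annual income ($k), median home value in zip ($k),
# average monthly card spend (USD).
customers = np.array([
    [45, 180, 900],
    [50, 200, 1100],
    [48, 190, 1000],
    [140, 750, 4200],
    [150, 800, 4600],
    [145, 780, 4400],
], dtype=float)

# Typical single-transaction amounts for the same six customers.
typical_txn = np.array([30.0, 45.0, 60.0, 150.0, 400.0, 650.0])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(customers)

def cohort_zscore(customer_idx, txn_amount):
    """How far a transaction sits from the customer's peer-group norm."""
    peers = labels == labels[customer_idx]
    mean, std = typical_txn[peers].mean(), typical_txn[peers].std()
    return (txn_amount - mean) / std

# The same $700 charge is a modest deviation for the high-spend cohort but
# a dramatic outlier for a customer in the low-spend cohort.
print(round(cohort_zscore(0, 700.0), 1))  # far outside the low-spend cohort's norm
print(round(cohort_zscore(3, 700.0), 1))  # within a couple of standard deviations for the high-spend cohort
```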

So far, so good, but…

Sophisticated fraud solutions are only as effective as a bank’s ability and willingness to deploy them. A ComplyAdvantage survey issued this month found that financial firms’ thinking about AI “often appears contradictory.” U.S. regulators have clearly stated they intend to scrutinize the “explainability” of AI systems, yet 89% of the firms in the survey said they would be willing to trade explainability for efficiency.

And there are other familiar barriers: A Featurespace survey issued last year found that while many financial firms recognized the value of investing in AI, they were concerned about cost and their own technical expertise.

James Ledbetter is the editor and publisher of FIN, a Substack newsletter about fintech. He is the former editor-in-chief of Inc. magazine and former head of content for Sequoia Capital. He has also held senior editorial roles at Reuters, Fortune, and Time, and is the author of six books, most recently One Nation Under Gold.

All content © 2024 by The Financial Brand and may not be reproduced by any means without permission.