Is That Your Boss or a Deepfake on the Other Side of That Video Call?

By Ram Srinivasan and Vijay Jesrani at Jones Lang LaSalle

Published on October 8th, 2025 in Banking Technology


Executive Summary

  • Video conferencing became everyday business in short order thanks to the pandemic. But now the question is, is that person on your call real or an AI deepfake?
  • We’ve grown used to doing many business transactions virtually, even huge deals. But in a world of potential phony faces, some institutions are already mandating in-person meetings for major matters.
  • One source of possible risk: AI tools that employees tap on their own to get their work done, putting bank data at risk whenever sensitive information is uploaded to them.

In an era when artificial intelligence seamlessly integrates into every corner of business operations, financial services organizations find themselves navigating an unprecedented convergence of digital innovation and physical reality. As AI grows more sophisticated and more capable of simulating human presence, we’re reminded of the irreplaceable value of authentic human connection.

Deepfake Deception: Corporate Victims in the Crosshairs

The cybersecurity landscape has fundamentally shifted as AI becomes both enabler and weapon. Deepfakes represent one of the most immediate concerns for financial services companies and for financial transactions. Fraudsters can now create convincing audio and video impersonations of executives or customers to bypass authentication protocols, authorize fraudulent transactions, or gain access to sensitive systems.

These attacks exploit the human element in security, making traditional verification methods insufficient.

An example: In early 2024, employees at a global engineering firm participated in what appeared to be a routine video conference with senior management. Following instructions from leadership, an employee transferred $25 million in company funds.

Except the leadership team was entirely fabricated.

Sophisticated deepfake technology had perfectly replicated not just the appearance but the mannerisms and decision-making patterns of the company’s executives. The real managers were elsewhere, unaware their digital twins were orchestrating one of the largest deepfake heists in corporate history.

This reflects a terrifying trend of AI fraud that is shaking the financial services industry. Deepfake-enabled attacks have grown by an alarming 1,740% in just one year, representing one of the fastest-growing AI-powered threats. More than half of businesses in the U.S. and U.K. have been targeted by deepfake-powered financial scams, with 43% falling victim. For businesses that fall victim, experts note that damages can reach 10 to 15% of annual revenue.

Read more: Why Banking’s AI Future Depends on Trust, Not Just Technology


The Authenticity Response: A Return to Physical Presence

As such attacks grow more common, a counterintuitive trend is emerging. Leaders who once championed remote work are now advocating for increased face-to-face interaction as a strategic response to AI’s misuse, not as a retreat from technology.

Mark Cuban, the billionaire investor and entrepreneur, predicted in June 2025: “Within the next three years, there will be so much AI, in particular AI video, people won’t know if what they see or hear is real. Which will lead to an explosion of face-to-face engagement, events and jobs.”

Cuban’s insight recognizes a fundamental paradox: As our digital tools become more lifelike, authenticity itself becomes the scarcest resource. This isn’t about abandoning remote work or rejecting AI. It’s about strategic discernment in how we deploy these tools.

Many financial services companies are at the forefront of AI adoption, leveraging it across their service, risk and revenue functions and deploying tools such as generative AI for personalized recommendations. In their physical workplaces, banks use AI primarily for security, through facial recognition systems, behavioral analytics and fraud detection at ATMs, while also enhancing the customer experience with digital assistants, queue management and personalized services. It’s imperative that these institutions weigh the implications and security challenges of AI in both their office and retail physical environments.

Beyond Technical Risks: The Erosion of Digital Trust

The deepfake threat extends far beyond immediate financial losses. Each successful attack erodes the foundation of digital communication itself. When employees can no longer trust that their CEO is real during a video call, the entire remote work infrastructure becomes suspect, particularly for financial institutions, which deal in the currency of trust.

AI models can reflect bias, make incorrect predictions, or create outcomes that lack transparency. But deepfakes attack something more fundamental: the assumption that digital communication represents authentic human intent.

For organizations like banks and credit unions, when trust extends to digital channels that can be convincingly spoofed, the implications reach far beyond cybersecurity into social capital and institutional credibility.

In addition, privacy and regulatory compliance concerns intensify with AI deployment. Financial institutions must navigate complex rules, like financial privacy laws, while ensuring AI systems don’t inadvertently expose customer data or create discriminatory outcomes that violate fair-lending practices. Financial services companies must balance innovation with compliance and security concerns that extend across the digital and physical workplace and platforms.

Read more: How to Stop Three AI Threats Changing the Face of Identity Fraud — Literally

The Greatest Danger of All, From Within: The Shadow AI Crisis

Perhaps the most insidious risk comes from “shadow AI,” unauthorized employee usage of AI tools that creates unexpected vulnerabilities. Shadow AI document uploads have increased by 156%, according to a recent study. Recent research from IBM and the Ponemon Institute indicates that data breaches from shadow AI use cost organizations an average of $4.63 million.

Employees increasingly turn to readily available AI tools for everything from drafting emails to analyzing data, often without understanding the security implications. Each interaction potentially exposes sensitive company information to third-party AI services, creating compliance nightmares that traditional cybersecurity frameworks weren’t designed to handle.
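One practical control is screening outbound uploads at the network egress layer before they reach third-party AI services. The sketch below illustrates the idea; the domain lists, size threshold and policy labels are illustrative assumptions, not a vetted inventory of AI endpoints or any institution’s actual rules.

```python
# Sketch: flagging "shadow AI" uploads at the network egress layer.
# Domain lists and thresholds below are illustrative assumptions.

from urllib.parse import urlparse

APPROVED_AI_DOMAINS = {"ai.internal.example.com"}  # hypothetical sanctioned tool
KNOWN_AI_DOMAINS = {"chat.openai.com", "gemini.google.com", "claude.ai"}


def classify_upload(url: str, payload_bytes: int) -> str:
    """Return a policy decision for an outbound upload."""
    host = urlparse(url).hostname or ""
    if host in APPROVED_AI_DOMAINS:
        return "allow"
    if host in KNOWN_AI_DOMAINS:
        # Unsanctioned AI endpoint: block and alert, regardless of size.
        return "block-and-alert"
    if payload_bytes > 10 * 1024 * 1024:
        # Large upload to an unknown host: route to human review.
        return "review"
    return "allow"


print(classify_upload("https://claude.ai/upload", 2048))               # block-and-alert
print(classify_upload("https://partner.example.com/api", 50_000_000))  # review
```

In practice this logic would sit inside a secure web gateway or data-loss-prevention platform rather than standalone code, but the policy shape — allowlist sanctioned tools, block known unsanctioned AI endpoints, review anomalous bulk transfers — is the same.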

The root cause here is lack of awareness, insufficient cybersecurity training, and a significant skill gap. And this has implications across the spectrum from clicks to bricks. The complexity of securing smart building systems mirrors the broader challenge of maintaining trust in digital communications as they become increasingly vulnerable to sophisticated manipulation.

Financial services companies must implement comprehensive AI governance frameworks, continuous monitoring systems, and robust incident response plans to address these evolving threats while maintaining operational efficiency and customer trust. These systems and protocols must extend not only within their front office but to their back office, including vendor management and third-party suppliers who manage their data.

Read more: The AI Advantage: How to Build a Future-Ready Workforce with Smarter Training

The Discernment Revolution: Not Going Back, Moving Forward

All this said, the response to deepfake proliferation isn’t a wholesale retreat to pre-digital operations. Instead, it requires “strategic discernment,” the ability to distinguish between interactions that can safely occur in digital spaces and those requiring physical verification.

Over the past few years, organizations optimized for distributed, remote and hybrid work. Now they’re optimizing for AI-augmented workflows while simultaneously developing new frameworks for authentication and trust.

Of course, not every conversation needs to happen in person. But some will, not because the technology isn’t sophisticated enough, but because trust, intuition and connection don’t always translate through potentially compromised digital channels. High-stakes decisions, sensitive negotiations, and critical financial transactions may increasingly require physical presence as the ultimate authentication method.

Major corporations are already adapting. Some financial institutions now require in-person verification for large transactions above certain thresholds. Consulting firms are developing protocols that mandate face-to-face meetings for critical client decisions.
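A tiered policy like the ones described above can be expressed as a simple mapping from transaction size to required authentication steps. The thresholds and step names below are hypothetical, chosen only to illustrate the escalation pattern, with physical presence as the final tier.

```python
# Sketch of a tiered verification policy for outgoing transfers.
# Thresholds and step names are illustrative assumptions, not any
# institution's actual policy.

def required_verification(amount_usd: float) -> list[str]:
    """Map a transfer amount to the authentication steps it requires."""
    steps = ["standard-login"]
    if amount_usd >= 10_000:
        steps.append("out-of-band-callback")    # call back on a known number
    if amount_usd >= 250_000:
        steps.append("dual-approval")           # second authorized officer
    if amount_usd >= 1_000_000:
        steps.append("in-person-verification")  # physical presence required
    return steps


print(required_verification(5_000))
print(required_verification(2_000_000))
```

The design point is that the steps are cumulative: a deepfaked video call can defeat any single digital channel, but it cannot simultaneously defeat an independent callback, a second approver and a face-to-face check.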


Building Resilience at the Intersection

No longer defined by the dominance of either digital or physical interactions, the future workplace will be defined by the intelligent integration of both. Success will depend on organizations’ ability to harness AI’s transformative potential while building comprehensive defenses against its misuse.

This includes investing in advanced authentication systems that can detect deepfakes, implementing robust data governance policies that address shadow AI usage, and maintaining human verification processes for critical transactions. But perhaps most importantly, it requires cultural adaptation, helping employees and customers understand when digital convenience must yield to physical authenticity.

JLL research shows that firms are employing digital techniques to build emotional connections, creating a sense of place and belonging. Consider a global bank using “digital projection mapping” to create “memorable shared moments” where “the workforce can physically gather,” or banks reinventing the retail customer experience around solutions for customers’ life events.

In a world where anything on a screen can be a deepfake, authenticity becomes the most valuable resource we have. This authenticity manifests in presence, in timing, and in intent. What organizations choose to value, and how they choose to show up in both digital and physical spaces, will define the next chapter of the future of work.

The issue is not a binary choice between technology and humanity. Rather, it demands a more sophisticated understanding of when each is most appropriate. Organizations that master this balance between AI and authentic human connection will thrive in tomorrow’s financial services workplace.

About the Author

Ram Srinivasan is a globally recognized AI strategist, MIT-trained technologist, and author of "The Conscious Machine: From Artificial to Enlightened Intelligence." As managing director and global AI adoption leader at JLL, he has shaped innovation and digital transformation strategies for global enterprises. Vijay Jesrani is a managing director and Americas financial services sector lead with JLL's consulting practice. He works with global, national and regional financial services companies to transform their workplace, drive cost savings, improve productivity and create high-performing corporate real estate teams.

The Financial Brand is your premier destination for comprehensive insights in the financial services sector. With our in-depth articles, webinars, reports and research, we keep banking executives up-to-date with the latest trends, growth strategies, and technological advancements that are transforming the industry today.

© 2026 The Financial Brand. All rights reserved. The material on this site may not be reproduced, distributed, transmitted, cached or otherwise used, except with the prior written permission of The Financial Brand.