Why the AI Revolution Is Being Led from Below

The thoughtful and measured approach that many banking leaders articulate in their corporate AI strategies, which emphasize "guardrails" and risk management, may soon be overthrown by frontline and back office employees whose use of AI tools is both proliferating and invisible.

LAS VEGAS - Revolutions are rarely led by the existing leadership class, and the AI revolution in banking is proving to be no different.

When GenAI first burst onto the scene and its earth-shattering implications became clear, many banks opened their longstanding corporate change management playbooks. They presumed that decisions about AI strategy would be made at the highest levels, as part of an enterprise-wide strategic technology plan. IT departments saw clear opportunities in terms of new money and new leverage. Risk teams focused immediately on the need for controls and rules. Consultants and researchers urged specialized teams and controlled experiments, cognizant of everything from customer wariness to the impact of biased datasets.

And at many institutions today, the official AI strategy work is proceeding this way, following detailed blueprints and defined guardrails.

But meanwhile, in back offices and on the front lines, a very different AI revolution is racing ahead, one that’s both widespread and largely invisible. The impact of this quiet revolution, and the ability of banks and other institutions to shape or control it, may be both profound and as yet unknown.

Call It ‘Shadow AI’

“Shadow AI” refers to the increasing prevalence of AI tools operating in the background, often without the knowledge or oversight of the organizations in which they are used. These AI tools come embedded in various applications, platforms, and services, and they can shape decision-making processes, recommendations, and outcomes in subtle but significant ways.

This “shadow AI” clearly mimics some aspects of the more familiar “shadow IT,” in which departments and teams deploy digital tools without the involvement, supervision or even approval of corporate IT. Marketing organizations were (and still are) rightly notorious as a main source of “shadow IT.”

But shadow AI differs from shadow IT in important ways. If shadow IT was largely defined by some teams’ use of unauthorized vendors and platforms, shadow AI is often driven by the use of AI tools like ChatGPT by individual employees, on their own and even surreptitiously. And because many of these tools are free and web-based, their use often leaves little or no trace. Meanwhile, remote work expands the opportunity for employees to use their own AI tools while reducing corporate visibility and oversight.

So why is that a problem? The proliferation of shadow AI can deliver many of the same benefits as officially sanctioned AI strategies: streamlining processes, automating repetitive tasks, and enhancing productivity. Employees are drawn to deploy their own AI tools for precisely these reasons: they can hand off chunks of taxing work to these invisible assistants.

Some industry observers see the plus side of all this and are actively encouraging the “democratization” of AI tools. At this week’s The Financial Brand Forum 2024, Cornerstone Advisors’ Ron Shevlin made it his top recommendation: “My #1 piece of advice is ‘drive bottom-up use.’ Encourage widespread AI experimentation by your team members. Then document and share the process and output improvements as widely as possible.”

In some ways, his advice is simply a recognition of a simple reality: “Face it: You probably have no clue what your people are already doing.”

Much of this shadow AI is driven by the proliferation of AI “co-pilots” that come embedded in the tools your people use every day: Microsoft Office, Microsoft and Google browsers, as well as extensions and web apps that can be adopted easily and quickly. This wave of co-pilots is transforming the way employees work in various ways:

Task automation: AI copilots can automate repetitive and mundane tasks, such as data entry, document formatting, scheduling appointments, and generating reports. This frees up employees’ time and resources.

Productivity enhancement: By providing instant access to relevant information, data analysis, and recommendations, AI copilots assist with research, summarize lengthy documents, generate drafts or outlines, and provide context-specific suggestions, reducing the time and effort required for various tasks.

Collaboration and knowledge sharing: AI copilots can facilitate collaboration and knowledge sharing within teams or organizations. They can assist in taking notes during meetings, summarizing discussions, and providing relevant information or context to team members, ensuring that everyone is on the same page and reducing the risk of information silos.

Personalized assistance: AI copilots can learn individual preferences, working styles, and domain-specific knowledge, enabling them to provide personalized assistance tailored to each employee’s needs, including customized recommendations, reminders, and task prioritization, enhancing overall efficiency and productivity.

Skill augmentation: AI copilots can augment employees’ skills and capabilities by providing on-the-job support, guidance, and training. They can suggest best practices, offer alternative approaches, and provide real-time feedback, enabling continuous learning and skill development.

Decision support: AI copilots can analyze large volumes of data, identify patterns, and provide insights to support decision-making processes. They can present relevant information, highlight potential risks or opportunities, and offer recommendations, helping employees make more informed and data-driven decisions.

Adaptive workflows: AI copilots can adapt to individual workflows and preferences, seamlessly integrating into existing tools and applications used by employees.

Sounds good. But at the same time, the proliferation of these tools multiplies questions and risks. For example: What data and inputs are your employees using? If their AI tools are tapping into external and public information sources, are those sources reliable?

The proliferation of shadow AI amps up the issues and risks associated with AI generally:

  • Lack of transparency and accountability: The hidden use of shadow AI tools undermines transparency and accountability, making it harder to trace errors in decision-making processes and to vouch for the quality of output.
  • Privacy and data protection: Shadow AI tools often rely on third-party data, raising additional privacy concerns and potential violations of data protection regulations.
  • Regulatory and governance challenges: The rapid development and deployment of shadow AI systems may outpace existing regulatory frameworks and slip past internal oversight.
  • Explainability and interpretability: Many AI systems, particularly deep learning models, operate as “black boxes,” making it challenging for employees to understand and later explain their inner workings, decisions, and outputs.

As They Embrace AI Tools, What Are Your Employees Now Doing All Day?

Most vexing for some managers is the question of time and productivity: If your people are using AI tools to do work faster (in the case of things like content creation and data analysis, orders of magnitude faster), what are they doing with the time and energy that is freed up? Are they becoming more productive and focusing on the “value add” activities best suited to humans, or are they just taking the rest of the day off?

This leads to one of the most important distinctions between shadow IT and shadow AI: Shadow IT was typically driven by teams’ (often laudable) desire to do their jobs better with tools or platforms they preferred over officially sanctioned options. In contrast, the use of shadow AI often benefits the employee personally, by allowing them to do their jobs faster and more easily.

Hence Shevlin’s second recommendation: “If you are allowing bottom-up experimentation, you need to manage risks from top down.”

How? His recommendations include:

  • Establish an AI risk management team.
  • Evaluate risks of AI models.
  • Provide fraud and cybersecurity education.
  • Assess impact of regulatory actions.
  • Develop a data governance policy.

All of which sounds good and right. But will these measures prove to be effective in addressing and controlling employees’ personal AI strategies, as they individually redesign and refine their own workflows and habits, openly or otherwise?

We will all find out soon enough.
