Financial Institutions’ Unexpected Advantage in the Race for AI Leverage
By Nicole Volpe, Contributor at The Financial Brand
Financial institutions’ workforces are already using generative AI — a lot of it. Scattered across ChatGPT sessions and Claude experiments, they’re generating many small wins, though with limited ROI and plenty of policy headaches. But reaching the next level, delivering the promise of enterprise AI transformation, requires deploying agentic AI: systems that can execute complex workflows, make decisions, and operate with some degree of autonomy.
Unfortunately, institutions’ progress toward scaled AI often stalls at this juncture, because neither of the agentic AI deployment options available to them is optimal.
The first option, deploying off-the-shelf AI tools, demands heavy change management as employees learn new interfaces, workflows, and ways of working; people are forced to conform to the technology rather than the other way around; adoption is slow, resistance is high, and the tools may sit unused despite significant investment. The second option, building custom agentic solutions, requires scarce engineering resources, extensive integrations with existing systems, and ongoing maintenance as those systems evolve.
But a third model of enterprise agentic AI adoption is emerging: one in which the AI learns from real workflows, surfaces useful patterns, and builds agents that codify, and improve, what’s already getting the job done. In the short term, such systems — known as behavioral agent automation platforms, or BAAPs — give employees relevant, ready-to-use capabilities. Over time, as BAAPs keep observing and learning, the larger enterprise becomes more efficient — arguably even “smarter.”
In this environment, heavily regulated financial institutions might at first blush seem uniquely disadvantaged, because extra layers of risk management and compliance sit atop any workflow they would seek to agentify. But in the context of the third deployment model, that burden also presents an opportunity: the discipline around clear rules, procedures, and audits provides a vantage from which the institution can observe how work — including all the genAI experiments happening in the shadows — actually gets done.
Self-Assembling Intelligence
“Organizations are hitting what we call ‘the agentic cliff,’” Liminal CEO Steven Walchek said. “They build these complex custom agents that require massive investment, break constantly when systems change, and don’t scale.”
According to Walchek, whose company enables secure AI deployment in regulated industries, understanding the third option starts with a mindset shift: breaking through the top-down versus bottom-up dichotomy and focusing on a middle-out approach.
Such an approach, one that observes and builds on real-world work patterns and behaviors, might look like this: A business loan salesperson logs into the system for the first time. She immediately sees a set of pre-built automations based on her role, including connections to a bank-defined application stack: a meeting prep assistant that pulls relevant context before every call; a pipeline analyzer that surfaces and prioritizes top prospects; an email writer that drafts follow-ups based on call notes.
“In effect, the house is already full of furniture when you arrive,” Walchek said. “With our clients, we’re showing potential time savings right away, often 15 to 20 hours per week just from an initial set of capabilities.”
But the real power emerges over time. As the loan rep uses a BAAP, it learns her patterns: she always requests pipeline analysis before Thursday sales meetings, she frequently asks for competitive intelligence on specific rivals, she needs contract review for deals over a certain size. A conversational analytics tool might slash the turnaround time for drafting a term sheet, then speed up the negotiations that follow. The new AI layer doesn’t need to wait for an individual to flag such applications as efficiency opportunities; it surfaces and builds them automatically.
More importantly, the system observes patterns across the entire organization. When dozens of people in different departments ask for the same thing, the system recognizes that and consolidates its learning in a single shared utility.
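The article doesn’t describe Liminal’s internals, but the consolidation logic it sketches — promote a pattern to a shared utility once it recurs across departments — can be illustrated with a minimal, hypothetical example. The threshold, the “intent” labels, and the `PatternConsolidator` name are assumptions for illustration only:

```python
from collections import Counter, defaultdict

# Hypothetical sketch: assume each observed employee request has been reduced
# to a normalized "intent" string tagged with the requester's department.
PROMOTION_THRESHOLD = 10  # illustrative cutoff, not a vendor-documented value

class PatternConsolidator:
    def __init__(self, threshold=PROMOTION_THRESHOLD):
        self.threshold = threshold
        self.counts = Counter()              # intent -> total occurrences
        self.departments = defaultdict(set)  # intent -> departments seen
        self.shared_utilities = set()        # intents promoted org-wide

    def observe(self, intent: str, department: str):
        """Record one request; promote the pattern once it is both
        frequent and cross-departmental."""
        self.counts[intent] += 1
        self.departments[intent].add(department)
        if (self.counts[intent] >= self.threshold
                and len(self.departments[intent]) >= 2):
            self.shared_utilities.add(intent)

consolidator = PatternConsolidator(threshold=3)
for dept in ["lending", "lending", "treasury", "retail"]:
    consolidator.observe("competitive_intel_report", dept)

print(consolidator.shared_utilities)  # {'competitive_intel_report'}
```

The design point this toy captures: promotion requires breadth (multiple departments) as well as frequency, so one power user’s habit doesn’t become an org-wide tool.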
It’s worth noting that when AI observes daily real-life workflows, it also observes — and learns from — negative experiences. Which tasks consume the most time? Where do employees get stuck? What causes applications to bounce back or fail compliance checks? A newly deployed off-the-shelf tool or custom application would have a harder time identifying failure patterns and hardening processes against common errors and friction points.
“We call this compounding intelligence,” Walchek said. “Every query makes the system smarter. Every pattern it identifies becomes a capability it can deploy. The more your people use it, the more it understands how your organization actually operates, and the better it gets at anticipating needs before they’re articulated.”
Why This Matters for Financial Institutions
In many industries, building this kind of comprehensive observability would be a heavy lift. For regulated financial institutions, it might be closer to table stakes.
Banks already must monitor AI usage to ensure compliance and data security. They track what data flows into which systems, which models employees access, and whether sensitive information — PII, PCI, proprietary data — ever reaches external models.
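The kind of gate this monitoring implies — scan an outbound prompt for sensitive markers before it reaches an external model, and keep a findings log — can be sketched in a few lines. This is a simplified illustration, not a production detector or any specific vendor’s implementation; the regex patterns are deliberately naive:

```python
import re

# Illustrative, simplified PII/PCI patterns -- real systems use far more
# robust detection than these toy regexes.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def screen_prompt(prompt: str):
    """Return (redacted_prompt, findings). The findings list doubles as an
    audit-log entry a compliance team could later mine for usage patterns."""
    findings = []
    redacted = prompt
    for label, pattern in PATTERNS.items():
        if pattern.search(redacted):
            findings.append(label)
            redacted = pattern.sub(f"[{label.upper()} REDACTED]", redacted)
    return redacted, findings

text = "Prep notes: borrower SSN 123-45-6789, contact jane@example.com"
clean, found = screen_prompt(text)
print(found)  # ['ssn', 'email']
```

The side effect Walchek describes follows naturally: the same log that proves nothing sensitive left the building also records what employees were trying to do.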
“What we realized is that the same infrastructure that lets you say yes to AI safely also gives you incredible insight into how your organization actually works,” Walchek said. “You’re already watching every query, every data request, every workflow trigger. That visibility becomes the foundation for self-improving automation.”
The idea is to reframe compliance from a constraint into a strategic advantage. And these advantages become difficult for competitors to replicate. A rival bank can buy the same AI models, deploy the same platforms, hire the same consultants. But it can’t duplicate months or years of organizational learning crystallized into custom automations that reflect exactly how your people work and, in effect, embody your value proposition and go-to-market strategy.
Moving Forward
The institutions best positioned to deploy BAAPs, and capitalize on this shift, may not be the ones with the biggest technology budgets. Mid-sized banks are small enough to move quickly but large enough to generate the pattern data that makes self-improving systems work. The same pattern data can also enable them to better articulate and validate ROI.
But capturing such advantages requires a fundamental shift in how leaders and strategists think about AI. More than rolling out a new enterprise technology layer, the institution is developing an organizational capability to learn from itself. The question leaders are answering is not “What AI tools should we buy?” or even “What agents should we build?” It’s “How do we become the kind of organization that gets smarter over time?”
The banks that answer that question well will develop something durable: the muscle memory of continuous improvement, encoded in systems that observe, learn, and adapt.
