The advent of applications like ChatGPT and Microsoft Copilot has brought generative AI to the mainstream — making it more accessible and user-friendly for everyday tasks. These tools are revolutionizing how individuals and businesses interact with AI, offering sophisticated capabilities that were once confined to specialized systems and experts.
For example, a recent McKinsey report on the economic potential of generative AI found that tools powered by this technology can provide immediate and personalized responses to complex customer inquiries. By improving the quality and effectiveness of interactions through automated channels, generative AI can handle a higher percentage of customer inquiries. This allows customer care teams to focus on more complex issues that require human intervention.
This trend applies to financial institutions as well: by providing immediate and personalized responses, generative AI dramatically improves the customer experience and creates new opportunities to support customers in ways that align with their expectations.
However, it’s imperative that highly regulated industries — like financial services, healthcare, energy, and transportation — recognize that not all generative AI is created equal. These applications can often be a short-term solution to a long-term issue. Banks and credit unions require industry-specific solutions built for their unique needs.
An AI tool that pulls in millions of data points across various industries won’t do the trick — and can result in serious data issues down the road.
Short-term Gains Could Mean Long-term Risks
Financial institutions may find quick wins by developing their own AI solutions in-house, or by building customer-facing tools on top of existing general-purpose models from Google, Microsoft, and others. But these short-term fixes create their own long-term problems. For example:
- High cost: Fine-tuning and maintaining an AI model is costly and time-consuming, requiring ongoing adjustment to keep answers accurate and the customer experience positive. Guardrails must also be put in place to keep certain information out of the model's outputs.
- Hallucination risk: When building on a generic large language model (LLM), the risk of hallucinations is high because the model draws on data that may be irrelevant to the industry, leading to confusing, inaccurate, or irrelevant outputs.
- Bias: General LLMs are known to carry biases that could be damaging to a bank if they slip past the guardrails.
- Data risks: Uploading proprietary information to a general-purpose tool could lead to data leakage that puts the institution at risk (see the guardrail sketch after this list).
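To make the guardrail and data-leakage points concrete, here is a minimal sketch of one common mitigation: redacting sensitive values before a customer message is ever included in a prompt sent to a hosted model. The names (`SENSITIVE_PATTERNS`, `redact`, `guarded_prompt`) and the patterns themselves are hypothetical illustrations, not a specific vendor's API, and a production guardrail layer would be far more extensive and policy-driven.

```python
import re

# Hypothetical patterns for data a bank would not want sent to a
# third-party model (account numbers, SSNs). Real guardrails would
# cover many more categories and be driven by institutional policy.
SENSITIVE_PATTERNS = {
    "ACCOUNT_NUMBER": re.compile(r"\b\d{10,12}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}


def redact(text: str) -> str:
    """Replace sensitive values with placeholder tokens before the text
    ever leaves the institution's environment."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text


def guarded_prompt(customer_message: str) -> str:
    """Build the prompt that would be sent to a hosted LLM.
    The actual API call is out of scope for this sketch."""
    safe_message = redact(customer_message)
    return (
        "You are a banking assistant. Answer only from approved policy "
        "documents; if unsure, escalate to a human agent.\n\n"
        f"Customer: {safe_message}"
    )


if __name__ == "__main__":
    print(guarded_prompt("My SSN is 123-45-6789 and account 1234567890 is locked."))
```

Even a simple filter like this illustrates the ongoing cost argument above: every new product, regulation, or data type means another rule to write, test, and maintain.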
Perhaps the biggest AI challenge for financial services will be "Shadow AI." As with its predecessor, Shadow IT, employees are already using generic AI tools to answer questions, create documents, and serve customers. Unfortunately, these tools are not built specifically for financial services and require significant fine-tuning and oversight to ensure that the information they deliver is accurate.
Dig deeper:
- How Banks Can Leverage AI at the Contact Center
- Mastering AI-Powered Personalization for Long-Term Growth
- How BofA Is Driving to be ‘Local’ in More Markets
The Power of Purpose-Built AI
The promise of generative AI is that it will be smart enough to let humans focus on human priorities. But if an off-the-shelf LLM requires a tremendous amount of effort before it understands our business, that promise comes at a significant cost in time and money.
When generative AI is wholly secure and informed by how a specific industry operates, it delivers the greatest value to the business at hand. LLMs grounded in a knowledge graph of operational context also stand to improve productivity, efficiency, and employee confidence.
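As a rough illustration of what "grounded in a knowledge graph of operational context" can mean in practice, the sketch below assembles facts about an institution's own products into the prompt so the model answers from that data rather than from its generic training set. The toy triples, `operational_context`, and `grounded_prompt` are hypothetical stand-ins under the assumption of a simple triple-store graph; they are not any particular vendor's implementation.

```python
# A toy "knowledge graph": (subject, relation, object) triples describing
# one institution's own products and rules. Values are invented examples.
KNOWLEDGE_GRAPH = [
    ("Everyday Checking", "has_fee", "$0 with direct deposit"),
    ("Everyday Checking", "overdraft_policy", "no fee on overdrafts under $50"),
    ("Wire Transfer", "cutoff_time", "4pm ET on business days"),
]


def operational_context(entity: str) -> str:
    """Collect the graph's facts about an entity as plain text."""
    facts = [
        f"{s} {r.replace('_', ' ')}: {o}"
        for s, r, o in KNOWLEDGE_GRAPH
        if s == entity
    ]
    return "\n".join(facts)


def grounded_prompt(entity: str, question: str) -> str:
    """Constrain the model to the institution's facts to reduce hallucination."""
    context = operational_context(entity)
    return (
        "Answer using only the facts below. If the facts do not cover the "
        "question, say so.\n\n"
        f"Facts:\n{context}\n\nQuestion: {question}"
    )


if __name__ == "__main__":
    print(grounded_prompt("Everyday Checking", "Is there an overdraft fee?"))
```

The design point is less about the code than the constraint: when the model is told to answer only from curated operational facts, accuracy and employee confidence rise because the source of every answer is known.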
Financial institutions should therefore align their AI strategy with their business goals to ensure that AI implementation delivers tangible value and competitive advantage. This will enable them to use AI successfully and responsibly.
Custom AI for True Innovation
Integrating generative AI into financial services marks a significant advancement, particularly in enhancing customer experiences and operational efficiencies. However, financial institutions must recognize that generic AI solutions aren’t capable of meeting their unique needs.
Banks and credit unions must consider their approach carefully, balancing innovation with security and strategic alignment to ensure regulatory compliance. While institutions may view generic LLMs as the answer to short-term challenges, they need to weigh the long-term issues at hand: protecting customer data and staying compliant with regulatory requirements.
The availability of these LLMs has created an opportunity never seen before. They will become — and are becoming — the backbone of a new wave of innovation. They are the building blocks and the foundation for what will become the promise of generative AI. But they are just a starting point. They require work and strategy to be effective.
In a world of "Haves" and "Have Nots," financial institutions that can't afford the expertise to optimize general LLMs cannot simply default to using the models as-is. Unfortunately, many are doing exactly that today because they haven't yet defined their use cases. The default response is to write a policy telling employees what counts as acceptable use of generative AI at their institution.
However, a policy is just a piece of paper. This is where “Shadow AI” becomes real.
The next course of action is for banks to block access to generic AI tools outright, because that feels safer than exposing their data to the unknown. But this means the "Have Nots" risk being left behind by yet another technological revolution that favors the well-resourced.
Tools built to meet everyone’s needs typically meet those of no one. They never go deep enough, they’re never precise enough, and they do just enough to get by. Purpose-built AI is the solution.