Risk Management, Not Regulation, Should Fuel AI Adoption in Banking
By Zor Gorelov, senior advisor, Klaros Group, LLC
Executive Summary
- Even as Washington eases regulatory pressures, it can’t remove the pressures of marketplace forces and their accompanying risk.
- However, new tools like GenAI and agentic AI, sometimes seen as causes of risk, can also help banks address risks.
- There are many ways of applying these technologies to risk management. Author Zor Gorelov reviews seven types of tools to consider adopting.
Recent shifts in the regulatory climate and diminished regulatory resources seem likely to reduce the pressure for banks to engage in “performative” risk management.
However, the laws of the marketplace remain fully in force.
Banks continue to face risks not only from macroeconomic events but also from operational errors that can harm or alienate consumers, exposing them to litigation and impairing their ability to grow.
Discussions of artificial intelligence in banking often focus on its risks, but there is another side. Today’s generative and emerging agentic AI systems can help regulators and banks alike provide robust oversight by automating compliance tasks, monitoring transactions in real time for suspicious activity, and continuously analyzing large datasets to detect risks that might otherwise go unnoticed.
These technologies also create the opportunity for banks (and fintechs) to move beyond a “check-the-regulatory-box” approach to a model that enables a more proactive and continuous focus on identifying, managing and mitigating the underlying risks to their organizations — while simplifying regulatory reporting.
Fitting AI to Risk Management and Compliance Roles
To adopt AI for such use cases, financial institutions need to establish AI-specific governance policies that align with their enterprise risk management and compliance frameworks. At the same time, the policies must address the unique risks of GenAI, such as hallucinations and data copyright concerns.
Senior executive oversight is essential to establish clear ownership of AI systems and appoint executive sponsors responsible for strategic accountability and execution. These sponsors should implement regular review cycles to evaluate AI model performance, fairness and, most importantly, the impact of AI on business decision-making.
Rigorous due diligence of AI vendors is also critical to ensure alignment with organizational standards and risk tolerance.
Working with GenAI Is a Balancing Act
Generative AI systems are trained on massive troves of data. To fulfill risk management tasks, they will require access to a wide array of corporate documents and content that often includes sensitive, proprietary and regulated information.
These systems support a growing number of enterprise use cases, such as Retrieval-Augmented Generation (RAG) — a technique that directs large language models to draw answers from a defined body of documentation, typically the enterprise’s own — along with intelligent document summarization, regulatory compliance monitoring, and internal knowledge management.
In all these scenarios, the effectiveness of the AI depends heavily on the quality, accuracy and contextual relevance of the data it can access.
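To make the RAG pattern concrete, here is a minimal sketch of its retrieval step: score each internal document against a question by keyword overlap, then build a prompt grounded in the best match. The documents and question are invented for illustration, and a production system would use vector embeddings and an actual LLM call rather than this toy overlap scoring.

```python
# Toy sketch of the retrieval step in Retrieval-Augmented Generation.
# Illustrative only: real systems use embedding-based search and an LLM.

def tokenize(text):
    """Lowercase and split text into a set of terms, stripping basic punctuation."""
    return set(text.lower().replace(".", " ").replace("?", " ").split())

def retrieve(question, documents):
    """Return the document sharing the most terms with the question."""
    q_terms = tokenize(question)
    return max(documents, key=lambda d: len(q_terms & tokenize(d)))

# Hypothetical internal policy snippets standing in for a bank's document store.
documents = [
    "Wire transfers above $10,000 require a currency transaction report.",
    "Customer complaints must be logged within two business days.",
    "Model validation reviews are performed quarterly by risk staff.",
]

question = "When must a currency transaction report be filed?"
context = retrieve(question, documents)

# The retrieved passage is injected into the prompt so the model answers
# from approved documentation rather than from its training data.
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
print(context)
```

Because the answer is grounded in a retrieved passage, the quality of that passage — and of the document store behind it — directly bounds the quality of the model’s output, which is the point made above.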
However, with this reliance on internal and often sensitive data comes the increased responsibility to ensure that data is properly managed, governed and protected. It is critical that banks establish robust data and document governance systems to prevent leakage of sensitive information, maintain compliance with data privacy regulations, and ensure data is used ethically and transparently.
These systems should address the following issues, at least:
- Copyright. Ensure the use of proprietary or licensed datasets that comply with intellectual property laws.
- Quality. Enforce data and document validation, deduplication, and cleansing processes.
- Versioning. Maintain clear version histories of documents, data and AI models in use.
- Retention. Apply consistent data retention policies for training, validation and live datasets.
- Tracking. Use audit logs to track data lineage and model decisions.
- Security. Protect training and inference pipelines from tampering.
- Anonymization and Protection of Personally Identifiable Information. Implement advanced data anonymization and masking techniques to safeguard sensitive information and ensure compliance with privacy regulations.
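The anonymization point above can be sketched in a few lines. This is a deliberately minimal masking pass run before documents reach an AI pipeline; the patterns (U.S. SSNs, email addresses, 10-to-16-digit account numbers) and the sample record are illustrative assumptions, and a real deployment would rely on a vetted PII-detection service covering many more identifier types.

```python
import re

# Minimal PII-masking sketch: replace matched identifiers with labeled
# placeholders before text is handed to a model. Patterns are illustrative.
PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ACCOUNT": re.compile(r"\b\d{10,16}\b"),
}

def mask_pii(text):
    """Apply each pattern in turn, substituting a labeled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

record = "Customer 4111111111111111 (jane.doe@example.com, SSN 123-45-6789) disputed a fee."
print(mask_pii(record))
# → Customer [ACCOUNT REDACTED] ([EMAIL REDACTED], SSN [SSN REDACTED]) disputed a fee.
```

Keeping the placeholders labeled (rather than deleting the text outright) preserves enough context for downstream summarization while keeping the identifiers themselves out of the pipeline.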
AI and Banking: Use Cases to Consider
Regulation and compliance have inspired dozens of startups and many millions of dollars in Silicon Valley VC investment, but figuring out who is handling which pieces can be a challenge. The list below is an attempt to identify areas of focus for planning.
1. Complaints Management and Oversight. Use natural language processing and sentiment analysis to track customer grievances and identify systemic risk signals.
2. Continuous Compliance Monitoring and Testing. Implement AI solutions that monitor transaction flows, customer behaviors and document updates in real time to maintain constant compliance assurance.
3. Asset Tracking. Monitor assets across marketing, contracts, policies and procedures for adherence to internal and external compliance requirements.
4. Call Analysis. Transcribe and analyze call center interactions to detect potential mis-selling, fraud or compliance issues.
5. Transaction Screening. Apply machine learning models to flag anomalous transactions and assist with anti-money laundering workflows.
6. Social Monitoring. Use AI to track social media and forums for reputational risk signals and potential market manipulation. It is important to note that while regulatory rollback may reduce formal oversight, it doesn’t eliminate public scrutiny, media attention or litigation risk.
7. Regulation Coverage Monitoring. Automate the scanning and analysis of new rules and regulations with AI-powered tools.
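The transaction-screening idea in item 5 can be sketched with the simplest possible statistical rule: flag amounts that sit far from a customer’s historical mean. Everything here is an assumption for illustration — the amounts, the two-standard-deviation threshold, and the single-feature design. Real AML programs combine many such features inside trained models and human review workflows.

```python
import statistics

# Toy transaction-screening rule: flag amounts more than `threshold`
# sample standard deviations from the mean of the customer's history.
# Threshold and data are invented; real AML screening uses trained models.

def flag_anomalies(amounts, threshold=2.0):
    """Return the amounts whose z-score exceeds the threshold."""
    mean = statistics.mean(amounts)
    stdev = statistics.stdev(amounts)
    return [a for a in amounts if abs(a - mean) / stdev > threshold]

# Hypothetical customer history: routine payments plus one outsized transfer.
history = [120.0, 95.0, 130.0, 110.0, 105.0, 125.0, 9_800.0]
print(flag_anomalies(history))
# → [9800.0]
```

Flagged transactions would then feed an analyst queue rather than trigger automatic action, which is the usual division of labor between the model and the compliance team.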
While banks and fintechs need to think about all the use cases listed above, it doesn’t necessarily make sense to try to add AI capabilities for all of them simultaneously. Given the likelihood and severity of risk in these areas, we believe the first three categories are the natural place to start.
The transition from “RegTech” to “RiskTech” marks a significant evolution and opportunity in the use of AI in financial services. With fewer resources, both banks and regulators must leverage AI not only to meet compliance requirements but to proactively identify, manage and mitigate risks. By adopting AI strategically — starting with continuous compliance, regulatory intelligence and consumer protection — financial institutions can stay ahead of threats and strengthen their operational resilience at lower costs. Strong governance frameworks, thoughtful vendor selection, and robust data practices are essential to realizing this vision.
