The Rise Of Machine Learning And The Risks Of AI-Powered Algorithms

Back in the Old Days, you had to hire a bunch of mathematicians to crunch numbers if you wanted to extract insights from your data. Not anymore. These days, computers are so smart they can figure things out for themselves. But the unchecked power of "self-driving" AI presents financial institutions with a whole new set of regulatory, compliance and privacy challenges.

More and more financial institutions are using algorithms to power their decisions, from detecting fraud and money laundering patterns to product and service recommendations for consumers. For the most part, banks and credit unions have a good handle on how these traditional algorithms function and can mitigate the risks in using them.

But new cognitive technologies and the accessibility of big data have led to a new breed of algorithms. Unlike traditional, static algorithms coded by programmers, these algorithms can learn without being explicitly programmed by a human being; they change and evolve based on the data fed into them. In other words, true artificial intelligence.
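
To make the distinction concrete, here is a minimal sketch in Python using a hypothetical fraud-flagging scenario (the amounts, labels and threshold are all illustrative): the static rule is frozen in code, while the learned model derives its decision boundary from whatever data it is trained on.

```python
# A hand-coded rule is fixed until a programmer changes it; a learned
# model derives its own decision boundary from training data.
from sklearn.tree import DecisionTreeClassifier

def static_rule(amount):
    # Threshold chosen by a programmer and frozen in code.
    return "flag" if amount > 10_000 else "ok"

# Hypothetical transaction history: amount -> was it actually fraud?
X = [[50], [9_000], [12_000], [300], [15_000], [11_500]]
y = [0, 0, 1, 0, 1, 1]

learned = DecisionTreeClassifier(random_state=0).fit(X, y)
print(static_rule(12_000))           # decided by the coded threshold
print(learned.predict([[12_000]]))   # decided by patterns in the data
```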

And this is one area where financial institutions plan on investing heavily. In 2016, almost $8 billion was spent on cognitive systems and artificial intelligence — led by the financial services industry — and that amount will explode to over $47 billion by 2020, a compound annual growth rate of more than 55%, according to IDC.

There are certainly many benefits to using these AI-powered, machine learning algorithms, particularly with respect to marketing strategy. That’s why money is pouring into data science. But there are also risks.

Dilip Krishna and Nancy Albinson, Managing Directors with Deloitte’s Risk and Financial Advisory, explain some of these risks and what financial institutions can do to manage them.

The Financial Brand (TFB): Can you give an example of how financial institutions can use machine learning algorithms?

Dilip Krishna, Managing Director with Deloitte’s Risk and Financial Advisory: One financial institution is using machine learning in the investment space. They collect data from multiple news and social media sources and mine that data. As soon as a news event occurs, they use machine learning to predict which stocks will be affected, positively or negatively, and then apply those insights in their sales and marketing process.
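
A minimal sketch of the kind of pipeline Krishna describes, assuming a hypothetical labeled history of headlines tagged with the direction of the subsequent price move (the company, headlines and labels are invented for illustration):

```python
# Sketch of a news-driven stock-impact classifier: learn from past
# headlines and their subsequent price moves, then score new events.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

headlines = [
    "Acme Corp beats quarterly earnings estimates",
    "Regulator opens probe into Acme Corp accounting",
    "Acme Corp announces major share buyback",
    "Acme Corp recalls flagship product after defects",
]
impact = ["up", "down", "up", "down"]  # direction of the subsequent move

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(headlines, impact)

# As a new event arrives, score it immediately.
print(model.predict(["Acme Corp wins large government contract"]))
```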

TFB: With AI and machine learning, algorithms can build themselves. But isn’t this dangerous?

Nancy Albinson, Managing Director with Deloitte’s Risk and Financial Advisory: Certainly the complexity of these AI-powered algorithms and how they are designed increases the risks. Sophisticated technology such as sensors and predictive analytics, combined with the volume of readily available data, makes the algorithms inherently more complex. What’s more, the design of the algorithms is not as transparent. They can be created “inside the black box,” and this can open the algorithm up to intentional or unintentional biases. If the design is not apparent, monitoring is more difficult.

And as machine learning algorithms become more powerful — and more pervasive — financial institutions will assign more and more responsibility to these algorithms, compounding the risks even further.

TFB: Are regulators aware of the risks AI and machine learning pose to financial institutions?

Dilip Krishna: Governance of these algorithms is not as strong as it needs to be. For example, while rules such as SR 11-7, the Guidance on Model Risk Management, describe how models should be validated, these rules do not cover machine learning algorithms. With traditional predictive models, you build the model, test it, and it’s done. You don’t test to see whether the algorithm changes based on the data you feed it. In machine learning, the algorithms change, evolve and grow; new biases could potentially be introduced.
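
A small illustration of that point, using synthetic data: the same fixed validation set can receive different decisions once the model keeps learning from new inputs, which is why a one-time validation of the SR 11-7 variety falls short.

```python
# Sketch: a one-time validation is insufficient for a model that keeps
# learning. The same validation inputs get different decisions after
# the model absorbs new (synthetic, shifted) data.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
X_val = rng.normal(size=(200, 5))        # fixed validation inputs

clf = SGDClassifier(random_state=0)
clf.partial_fit(rng.normal(size=(500, 5)),
                rng.integers(0, 2, 500), classes=[0, 1])
before = clf.predict(X_val)

# New, differently distributed data arrives and the model updates itself.
clf.partial_fit(rng.normal(loc=1.5, size=(500, 5)),
                rng.integers(0, 2, 500))
after = clf.predict(X_val)

print("share of validation decisions that changed:", (before != after).mean())
```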

We just don’t see regulators talking about the risks of machine learning models, and they really should be paying more attention. For example, in loan decisioning, the data could introduce an unconscious bias against minorities that would expose the bank to regulatory scrutiny.
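
One simple first check in that loan-decisioning scenario is a disparate-impact ratio on approval rates across groups. The four-fifths threshold below is a common rule of thumb borrowed from U.S. fair-lending and employment-discrimination practice, and the counts are made up for illustration:

```python
# Sketch of a disparate-impact check on loan decisions, using
# hypothetical per-group (approved, applications) counts.
approvals = {"group_a": (480, 1000), "group_b": (310, 1000)}

rates = {g: a / n for g, (a, n) in approvals.items()}
reference = max(rates.values())
for group, rate in rates.items():
    ratio = rate / reference
    flag = "REVIEW" if ratio < 0.8 else "ok"  # common four-fifths rule of thumb
    print(f"{group}: approval {rate:.0%}, impact ratio {ratio:.2f} -> {flag}")
```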

TFB: Do financial institutions really have the technological expertise to pull this off?

Dilip Krishna: Some of this technology — like deep learning algorithms using neural networks — is on the cutting edge of science. Even advanced technology companies struggle with understanding and explaining how these algorithms work. Neural networks can have thousands of nodes and many layers leading to billions of connections. Determining which connections actually have predictive value is difficult.
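
The scale Krishna describes is easy to confirm with back-of-the-envelope arithmetic: in a fully connected network, each adjacent pair of layers contributes inputs × outputs weights, so even the modest (and hypothetical) stack below runs to tens of millions of connections, and wider, deeper networks reach billions.

```python
# Count the connections (weights) in a fully connected network:
# each adjacent pair of layers contributes inputs * outputs weights
# (biases ignored). Layer widths here are hypothetical.
layers = [10_000, 4_096, 4_096, 4_096, 1_000]

connections = sum(a * b for a, b in zip(layers, layers[1:]))
print(f"{connections:,} weights")   # ~79 million for this modest stack
```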

At most financial institutions, the number of models to manage is still small enough that they can use ad hoc mechanisms or external parties to test their algorithms. The challenge is that machine learning is embedded in business processes, so institutions may not recognize that they need to address not just the models but the business processes as well.

TFB: What should financial institutions consider when developing a risk management program around AI and machine learning algorithms?

Dilip Krishna: Financial institutions need to respect algorithms from a risk perspective, and have functions responsible for addressing the risks. Risk management isn’t necessarily difficult, but it’s definitely different for machine learning algorithms. Rather than studying the actual programming code of the algorithm, you have to pay attention to the outcomes and actual data sets. Financial institutions do this a lot less than they should.
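
One outcome-focused monitor in that spirit is a population stability index (PSI) computed on the model’s score distribution, a standard drift measure in credit-risk work; the score distributions below are simulated for illustration:

```python
# Sketch: monitor the model's outputs rather than its code. A population
# stability index (PSI) compares today's score distribution against the
# one seen at validation; values above ~0.25 are commonly treated as alarms.
import numpy as np

def psi(expected, actual, bins=10):
    edges = np.linspace(0.0, 1.0, bins + 1)  # scores assumed in [0, 1]
    e = np.histogram(expected, bins=edges)[0] / len(expected) + 1e-6
    a = np.histogram(actual, bins=edges)[0] / len(actual) + 1e-6
    return float(np.sum((a - e) * np.log(a / e)))

rng = np.random.default_rng(1)
baseline = rng.beta(2, 5, 10_000)      # scores at validation time
production = rng.beta(3, 4, 10_000)    # scores observed today
print(f"PSI = {psi(baseline, production):.3f}")
```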

Nancy Albinson: Really understand the algorithms you rely on, especially those that would have a high impact or pose a high risk to your business if something goes awry. I agree that it’s about putting a program in place that monitors not just the design but also the data input. Is there a possibility that someone could manipulate the data along the way to make the results a bit different?
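
A minimal guardrail in the spirit of Albinson’s question, with hypothetical training statistics and an illustrative threshold: flag any incoming value that sits implausibly far from what the model saw during training.

```python
# Sketch of a guardrail on incoming data: flag feature values far
# outside the range seen at training time, which can signal
# manipulation or upstream breakage. Thresholds are illustrative.
train_stats = {"income": (58_000.0, 21_000.0)}  # hypothetical mean, std

def check_record(record, k=6.0):
    alerts = []
    for name, value in record.items():
        mean, std = train_stats[name]
        if abs(value - mean) > k * std:
            alerts.append(f"{name}={value} is more than {k} std from training mean")
    return alerts

print(check_record({"income": 1_450_000.0}))
```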

Recognize that risk management of these algorithms is a continuous process, and financial institutions need to be proactive. There is a huge competitive advantage in using these algorithms, and it’s tempting to entrust more and more decision-making to them. But we’ve seen things go wrong, so financial institutions need to be ready to manage the risk. Those that can do so while leveraging machine learning will have a competitive advantage in the market.

Calculating Your Algorithmic Risk

Deloitte recommends that financial institutions assess their maturity in managing the risk of machine learning algorithms by asking the following questions:

  • Do you have a good handle on where algorithms are deployed? (A minimal inventory sketch follows this list.)
  • Have you evaluated the potential impact should these algorithms function improperly?
  • Does senior management understand the need to manage algorithmic risks?
  • Do you have a clearly established governance structure for overseeing the risks emanating from algorithms?
  • Do you have a program in place to manage risks? If so, are you continuously enhancing the program over time as technologies and requirements evolve?
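
As a starting point for the first question above, an algorithm inventory can be as simple as a structured record per deployed model; the fields below are illustrative, not a standard:

```python
# Minimal sketch of an algorithm inventory entry. Fields and example
# records are hypothetical, chosen to mirror the checklist above.
from dataclasses import dataclass

@dataclass
class AlgorithmRecord:
    name: str
    business_process: str   # where it is deployed
    owner: str              # accountable function
    impact_if_wrong: str    # low / medium / high
    last_validated: str     # date of most recent outcome review

inventory = [
    AlgorithmRecord("fraud-scoring-v3", "card transactions", "Fraud Ops",
                    "high", "2024-01-15"),
    AlgorithmRecord("next-best-offer", "marketing", "CRM team",
                    "medium", "2023-11-02"),
]
for rec in inventory:
    print(rec)
```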

All content © 2024 by The Financial Brand and may not be reproduced by any means without permission.