The Hidden Risks of Artificial Intelligence in Bank Marketing

Think your bank isn't using artificial intelligence or machine learning? Think again. Some of the most common software programs financial marketers rely on are built around AI. And that's just the beginning of the places where weak spots can develop.

When most bankers think of artificial intelligence and machine learning, they likely think of underwriting models and chatbots. However, the potential uses of these technologies keep growing.

Those in the banking industry often don’t realize where such technology is already in use in their own organizations. For example, marketing and business development teams may not recognize the artificial intelligence and machine learning elements driving many of their own tools, and they likely do not fully understand the risks the technology could pose to the organization.

As artificial intelligence and machine learning proliferate, compliance considerations may not be consistently included in key decisions and processes that could have a big impact on the organization’s risk profile. Financial institutions of all shapes and sizes must understand exactly where AI and ML are being deployed throughout the entire life cycle of their products and services, and then take steps to mitigate the resulting risk.

Following are a few examples of where artificial intelligence and machine learning are hiding in plain sight.

Where Artificial Intelligence and Machine Learning Lurk in Banking

Artificial intelligence is a catch-all term that generally refers to technologies capable of analyzing data and identifying patterns to make decisions. Machine learning, in turn, allows systems to learn and improve as new information becomes available, without explicit programming instructions.

In financial services, artificial intelligence typically relies on advanced prediction technology built with machine learning techniques, and it is leveraged across a variety of use cases. The most obvious are the chatbots and virtual assistants offered by fintechs, credit unions and banks, but AI/ML is also used in designing the customer experience, in marketing and in customer service, to name a few. Some examples:

Marketing: Artificial intelligence and machine learning are being used for targeted marketing, so customers receive product recommendations and special pricing based on their personal data and search history. Marketing teams or agencies may be using tools such as “look-alike models,” which find common attributes between their customer bases and other users of similar products and services, or propensity models, which determine what characteristics make a consumer more likely to convert on an offer (a simple propensity-model sketch follows these examples).

Customer service: Today’s customers demand service that is highly personalized, easy to navigate, and effective in terms of problem solving, which is why AI is the new customer service agent. AI is being leveraged to automate tasks, make predictions, and get users quickly to the best source of help. This appears in chatbots, call routing, workflow and pattern recognition.

Customer experience: People expect the institutions they do business with to know more about them and to make their experiences faster, easier and more customized. Artificial intelligence and machine learning allow faster processing of large amounts of structured and unstructured data from multiple sources and help identify new and expanded options for personalization in customer interactions. This enables institutions to engage with customers in the channels they prefer and increases customer connectivity, enhancing the overall experience.
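For readers curious what a propensity model looks like in practice, here is a minimal sketch built on scikit-learn’s logistic regression. The customer attributes, data values and conversion labels are hypothetical placeholders, not a recommended feature set, and a real model would be trained on far more history.

```python
# Minimal propensity-model sketch. All column names and data values are
# hypothetical; a production model would use far richer data and controls.
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical past-campaign data: customer attributes plus whether each
# customer converted on a prior offer (1 = converted, 0 = did not).
history = pd.DataFrame({
    "age":           [34, 52, 41, 29, 63, 45],
    "avg_balance":   [2500, 18000, 7200, 900, 31000, 5400],
    "products_held": [1, 3, 2, 1, 4, 2],
    "converted":     [0, 1, 1, 0, 1, 0],
})

features = ["age", "avg_balance", "products_held"]
model = LogisticRegression(max_iter=1000)
model.fit(history[features], history["converted"])

# Score a new prospect: the predicted probability of the "converted" class
# is the propensity score used to decide who receives the offer.
prospect = pd.DataFrame({"age": [38], "avg_balance": [6100], "products_held": [2]})
print(model.predict_proba(prospect)[:, 1])
```

The same pattern underlies look-alike modeling: a classifier is trained to distinguish existing customers from a broader population, and the highest-scoring non-customers become the target audience.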

Read more: Consumers Expect Personalization at Every Banking Touchpoint

It’s Critical to Know Where Your Bank is Using AI Tech

You can’t manage risk that you don’t know exists. That’s why it’s critical to identify all of the use cases in your institution for artificial intelligence and engage with risk and compliance professionals as early as possible. Turning a blind eye may change the risk profile of your institution:

Regulatory risk: Several agencies, including the Consumer Financial Protection Bureau, have been evaluating how existing laws, regulations and guidance should be updated to reflect the increased use of artificial intelligence and machine learning in consumer finance.

In a January 2022 blog post, the CFPB drew a line in the sand:

“We plan to more closely monitor the use of algorithmic decision tools, given that they are often ‘black boxes’ with little transparency. Institutions will face consequences for this type of robo-discrimination.”

— CFPB in an official blog

The CFPB’s recent update to the UDAAP examination manual to include discrimination in non-lending products also points to increased scrutiny of AI/ML. Given the CFPB’s current active enforcement stance, it’s safe to say regulators will be looking more closely at all uses of AI/ML in your institution.

Reputational risk: A public enforcement action could inflict significant reputational damage on your bank, especially if the allegations involve possible discrimination in the use of artificial intelligence and machine learning.

Operational risk: When an automated process breaks, it almost always causes downstream impact. If your chatbot goes down, is your call center prepared for the additional contacts? If not, the result is longer wait times and a poor customer experience.

What to Do When You Find AI and ML in Marketing and Elsewhere

While AI/ML models clearly benefit banking, they also have the potential to increase risk, especially in light of increased regulatory scrutiny and the focus on fair lending and discrimination at the CFPB and other federal and state regulators. Because the technology is trained on historical data that may unknowingly and unintentionally reflect discriminatory patterns or biases, its outputs can perpetuate those same problems. This is precisely why model transparency is critical with any use of artificial intelligence and machine learning.

Inventory: Start by identifying all the areas where your institution relies on artificial intelligence and machine learning. These might include models built in-house as well as models embedded in functions provided by third parties at any point in your product life cycle (a simple inventory sketch appears after this list).

Vendor Management: You are responsible for the models used by your third-party service providers. Be sure model risk management is part of your initial and ongoing due diligence.

Set up guardrails: Once the artificial intelligence and machine learning uses in your institution are identified, establish a set of guardrails to regulate their use. Leadership should align on and clearly communicate which uses are acceptable, what controls should be in place to monitor those uses, and whether there are any bright lines the institution is not willing to cross with the two technologies. Be sure the compliance function has an early seat at the table when considering new tools, products or partnerships that may involve artificial intelligence and machine learning.

Model risk management: Develop and document a review and approval process for any new models or changes to existing models. For existing models, conduct periodic testing of outcomes to assess for possible discriminatory model outputs (a simple outcome test is sketched below). Finally, when models are updated, validate that the intended outcomes were achieved and that there were no unintended consequences.
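As a starting point for the inventory step above, the sketch below records each AI/ML use case as a structured entry covering business area, vendor and life-cycle stage. The field names and example entries are illustrative assumptions, not a prescribed taxonomy.

```python
# Illustrative AI/ML inventory record. Field names and example entries are
# hypothetical; adapt them to your institution's own taxonomy.
from dataclasses import dataclass

@dataclass
class ModelUseCase:
    name: str             # e.g. "look-alike audience builder"
    business_area: str    # marketing, customer service, underwriting, ...
    built_by: str         # "in-house" or the third-party vendor's name
    lifecycle_stage: str  # acquisition, onboarding, servicing, collections, ...
    uses_consumer_data: bool
    last_reviewed: str    # date of the last compliance / model-risk review

inventory = [
    ModelUseCase("look-alike audience builder", "marketing", "AdTech vendor",
                 "acquisition", True, "2023-11-01"),
    ModelUseCase("chatbot intent router", "customer service", "in-house",
                 "servicing", True, "2024-02-15"),
]

# Flag third-party models that touch consumer data, typically the first
# place compliance and vendor management will want to look.
for uc in inventory:
    if uc.built_by != "in-house" and uc.uses_consumer_data:
        print(f"{uc.name} ({uc.business_area}) - vendor: {uc.built_by}")
```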
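And for the periodic outcome testing mentioned under model risk management, a simple first check is to compare favorable-outcome rates across groups. The data below and the 0.8 threshold (the familiar "four-fifths" rule of thumb from adverse-impact analysis) are illustrative assumptions only; a real fair-lending review would go much deeper.

```python
# Sketch of a periodic outcome test: compare favorable-outcome rates across
# groups and flag large gaps for review. Data and the 0.8 threshold (the
# common "four-fifths" rule of thumb) are illustrative assumptions only.
import pandas as pd

outcomes = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   1,   0,   0,   1],
})

rates = outcomes.groupby("group")["approved"].mean()
impact_ratio = rates.min() / rates.max()

print(rates)
print(f"impact ratio: {impact_ratio:.2f}")
if impact_ratio < 0.8:
    print("Potential disparate impact: escalate for fair-lending review.")
```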

At Klaros Group, Lauren Sartwell is a director, Stephanie White Booker is a partner, and Poorani Jeyasekar is a director.
