Machine-Driven Marketing: The Future of Ethical AI and Digital Banking

Before financial marketers can extract real value from artificial intelligence and machine learning, they must address consumers' low satisfaction with mobile and online banking personalization and root out unconscious bias in existing data sets. Three specific practices point the way.

Ever since Mary Shelley gave us Frankenstein in 1818, science fiction writers have imagined a world where humankind’s creation turns on its creator. More contemporary cinematic depictions of this theme include “Maximum Overdrive,” “The Terminator” and “I, Robot.” All involve a rise of machines that attempt to destroy humanity — machines that are clearly visible and terrifying.

While this sci-fi existence has yet to materialize, the real-world threat is more insidious. The “robots” in our environment are rarely conspicuous, even as they pervade virtually every aspect of our lives. Talking assistants, smartphones, wearables and the artificial intelligence (AI) algorithms that guide just about every human pursuit hum constantly in the background, serving up “helpful” suggestions.

AI and machine learning (ML) have produced breakthroughs in nearly every aspect of civilization, which makes it striking that, in a survey from Pew Research Center, more than two-thirds of experts on the topic warned that most AI systems will not employ ethical principles focused primarily on the public good by 2030.

Digital banking is not immune from this potential reality, which is why it’s important to consider how ethical AI plays a pivotal role in the highest mission financial institutions collectively support: the financial health and wellness of the consumers, businesses and communities that depend on them. Such an AI analysis requires examining the problem from multiple angles, explored below.

Consumer Views on Data Sharing

When asked to choose between a digital banking app that understands their needs and shares relevant offers versus one that respects their privacy and does not push offers, nearly three-quarters (73%) of consumers in an Alkami study gravitated toward the former. Interestingly, Baby Boomers were more likely to prefer this option than their younger counterparts.

While consumers expect relevance, this is largely an unmet need in today's market. Only 35% of consumers are satisfied with their online or mobile banking app's track record in providing helpful offers. Fewer than 40% are satisfied with their financial institution's understanding of their financial needs, situation or goals.

Disconnect:

Most consumers will share data in return for value, but less than half feel their mobile banking app meets that requirement.

Clearly, AI has the potential to help fill the gap. That said, banks and credit unions should proceed with care. Consumers are wise to the problems inherent with bad data guiding AI models.

In the same study, 64% of consumers preferred that humans make product recommendations on their behalf, perceiving people as more accurate than machines that may be fed faulty data. One explanation: automated suggestions, whether AI-driven or not, may be trusted less and seen as benefiting the financial institution rather than as solutions tailored to the individual's needs.

Fraud and Anomalies Can Distort Results

We’ve seen flash crashes that arise partly from machine “glitches” responding to environmental aberrations. When compounded, these anomalies trigger a chain reaction of faulty responses dictated by how the algorithm is programmed.

In a similar way, bad actors can poison AI and ML algorithms, producing unwanted behaviors that run counter to the model's intent. Unlike the paradigm so many sci-fi authors envisioned, this isn't a case of the machine destroying the human; rather, it's the human contaminating the machine to intentionally inflict harm.
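Even simple screening before training can blunt this kind of contamination. The sketch below is illustrative only, not a method from the article: the function name, threshold and sample amounts are assumptions. It flags statistical outliers in a numeric training column, using a z-score test from Python's standard library, so suspect records can be held for human review rather than fed to a model.

```python
from statistics import mean, stdev

def screen_outliers(samples, z_threshold=3.0):
    """Separate samples into (clean, suspect) lists.

    A crude first line of defense against poisoned or anomalous
    records: any value more than `z_threshold` standard deviations
    from the mean is routed to human review instead of training.
    """
    mu = mean(samples)
    sigma = stdev(samples)
    clean, suspect = [], []
    for x in samples:
        # Guard against sigma == 0 (all values identical).
        if sigma and abs(x - mu) / sigma > z_threshold:
            suspect.append(x)
        else:
            clean.append(x)
    return clean, suspect

# Routine transaction amounts with one injected extreme value.
amounts = [42.0, 55.5, 61.2, 48.9, 53.3, 58.1, 9999.0]
clean, suspect = screen_outliers(amounts, z_threshold=2.0)
```

In practice a production pipeline would use more robust detectors (median-based statistics, isolation forests), but the principle is the same: quarantine anomalies before they shape the model.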

Two Challenges for Financial Institutions

There are two potential areas of introspection for financial institutions in this debate.

First, they should assess their own maturity in providing relevant offers to their consumers. The Alkami study reveals that while 74% of regional and community financial institution leaders believe their bank or credit union has become at least somewhat more accurate at serving consumers relevant recommendations, fewer than 30% of consumers agree.

Second, financial institutions must consider any unconscious biases inherent in past behaviors that provide the data sets to inform AI and ML models. This is perhaps the trickiest part of getting ethical AI right.

What’s Needed:

AI models require objective oversight to ensure prior data doesn’t perpetuate discrimination.

AI has the potential to reproduce past discrimination, precisely because models are trained based on prior data sets. This isn’t necessarily an indictment of the values or morals of a financial institution — but it requires objective criticism of the decisions leading to the data set to mitigate the potential risk of future unintended consequences.

3 Steps to Keep Marketing Use of AI Ethical

While this is a topic fraught with complexity, there are some steps that banks and credit unions can take to navigate largely uncharted waters:

1. Optimize the user experience. Consumers expect financial institutions to understand their financial situation, needs and goals. That said, financial marketers should avoid the temptation to overly gamify the user experience with persuasive technologies that encourage mindless financial behaviors.

While financial wellness is a habit, some applications, such as investing, risk encouraging counterproductive behaviors. For example, studies suggest that rewarding frequent trading works against the interests of long-term investors.

2. Interrogate the data. An AI algorithm is only as effective as the data on which it is trained. It’s essential that the data set being used is representative of the outcomes the institution seeks to accomplish. If there is any historical bias in the data, it will materialize in the results. Equipping data scientists to understand the moral obligation of their role is inherent to an ethical AI system.

3. Team humans with machines. Rather than machines overpowering human beings, this is an opportunity for financial institutions to delegate lower-level activities to machines at massive scale, reserving more complex or ethical problems for people. In all cases, regular human inspection of AI models is critical to ensure that desired outcomes are realized and unintended consequences are avoided as much as possible.
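The data interrogation in step 2 can begin with something as simple as comparing favorable-outcome rates across groups in the historical data. The sketch below is illustrative only: the function name, the hypothetical loan records and the use of the common "four-fifths" rule of thumb are assumptions, not prescriptions from the article.

```python
def disparate_impact_ratios(records, group_key, outcome_key, reference_group):
    """Compare each group's favorable-outcome rate to a reference group's.

    Under the widely used four-fifths rule of thumb, a ratio below 0.8
    suggests the historical data may encode bias worth investigating
    before it is used to train a model.
    """
    counts = {}
    for r in records:
        tot, fav = counts.get(r[group_key], (0, 0))
        counts[r[group_key]] = (tot + 1, fav + bool(r[outcome_key]))
    base_rate = counts[reference_group][1] / counts[reference_group][0]
    return {g: (fav / tot) / base_rate for g, (tot, fav) in counts.items()}

# Hypothetical historical loan decisions (not real data).
history = (
    [{"group": "A", "approved": True}] * 80
    + [{"group": "A", "approved": False}] * 20
    + [{"group": "B", "approved": True}] * 50
    + [{"group": "B", "approved": False}] * 50
)
ratios = disparate_impact_ratios(history, "group", "approved", reference_group="A")
# Group B's approval rate is 0.5 vs. A's 0.8, a ratio of 0.625, below the 0.8 flag.
```

A check like this doesn't prove or disprove bias on its own, but it gives data scientists a concrete starting point for the "objective criticism of the decisions leading to the data set" the article calls for.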

While this area will undoubtedly continue to evolve, being objective about AI, including its potential opportunities and pitfalls, provides banks and credit unions with a grounded perspective through which to explore this exciting realm — and hopefully find themselves among those using AI systems focused on the public good.

All content © 2021 by The Financial Brand and may not be reproduced by any means without permission.