From Outside Banking: How Brands Should Weather Consumer Skepticism of GenAI

Brands face growing consumer distrust of generative AI, the crux of a recent dialogue between Forrester group director Keith Johnston and principal analyst Audrey Chee-Read. In their conversation, Johnston and Chee-Read discuss the blurring line between real and AI-generated content, the risks of AI misuse, and the importance of transparency, human oversight and authenticity in navigating it all.

The podcast: What It Means on “How Brands Can Steer Clear of GenAI Backlash”

Source: Forrester

Why we chose it: Financial institutions should have reservations about generative AI, its influence and its shortcomings, but they also can’t hesitate to start rolling it out, or they risk falling behind for good. One risk most banks and credit unions haven’t yet examined, however, is consumers’ own trepidation about generative AI. How banks and credit unions handle those concerns could make or break the relationship between banking provider and consumer.

Executive Summary

What if traditional banks are ignoring the real fears behind generative AI, and risk stunting their growth for good?

Most financial institutions globally, prompted by apprehension from regulators, are nervous about integrating generative AI into their back-end and customer-facing tech stacks. But even the reluctant ones are doing it anyway: generative AI is an intimidating storm on the horizon that no one can outrun, and on all counts it is better to prepare for it.

Banks and credit unions tethered by regulation aren’t the only ones worried about gen AI advancements, though: consumer skepticism could prove an even bigger threat. It was the subject of a Forrester podcast in which principal analyst Audrey Chee-Read and host Keith Johnston discussed the looming threat of consumer backlash in the age of generative AI, and what brands of all kinds must do to weather the storm.

Chee-Read paints a picture of a landscape where the line between reality and AI-generated content is blurrier than ever. From fantastical event promises that fail to deliver, to ads created entirely by AI without crediting original content creators, the potential for generative AI mishaps is vast and indiscriminate. Especially as the 2024 U.S. presidential election looms, the stakes are higher than ever.

Key Takeaways

  • 77% of U.S. and UK consumers believe companies should disclose when they are using generative AI in interactions.
  • More people distrust information provided by generative AI than trust it, with distrust rising in recent months.
  • Only 39% of people feel they know how to use generative AI responsibly and ethically themselves.
  • Brands face risks like reputation damage and consumer resistance to change when AI implementations fail to meet expectations.

Interested in what the GenAI model that helped write this article thinks of the podcast and its content? Read to the end of this article to learn more.

The Rise of Consumer Skepticism

The proliferation of generative AI has given rise to a new era of consumer skepticism. Chee-Read points to recent high-profile incidents, such as a heavily edited image of Kate Middleton and her children on Mother’s Day, and a confessional video about her cancer treatment that had viewers questioning its authenticity due to seemingly unnatural tree and leaf movements.

This heightened skepticism extends beyond public figures and into the realm of everyday interactions. Forrester’s research reveals that 77% of U.S. and UK consumers believe companies should disclose when they are using generative AI in their interactions. Moreover, the proportion of people who distrust information provided by generative AI is growing, with a significant uptick in recent months.

Interestingly, consumers are not only skeptical of companies’ use of AI but also of their own ability to use it responsibly. Only 39% of people feel confident in their ability to use generative AI ethically, highlighting a widespread lack of education and awareness.

“Generative AI is a vehicle, it’s not the destination. It helps you be more productive, but people aren’t thinking about it that way. They’re thinking of generative AI as the destination.”

— Audrey Chee-Read

One of the key drivers of this rising consumer skepticism is the increasing difficulty in distinguishing between real and AI-generated content. Chee-Read recounts a Forrester experiment where consumers were shown a mix of real and AI-generated images and asked to identify which was which. The results were striking: there was no clear consensus, with people generally confused and unable to consistently spot the AI-created content.

This blurring of lines has significant implications for brands. In an environment where consumers are primed to question the authenticity of everything they see, companies must work harder than ever to establish and maintain trust.

The Risks for Bank Brands

The stakes for brands in this new landscape are high. When AI implementations fail to meet expectations, companies risk not only reputation damage but also consumer resistance to future innovation. Chee-Read cites chatbots as a clear example — if a consumer has a poor experience with a company’s chatbot, they may be less likely to engage with, or trust in, a company’s AI-powered services in the future. That first impression sticks.

This resistance can have far-reaching impacts, as evidenced by the recent high-profile failure of AI company Humane’s promise of screen-free AI assistance. Disappointing early reviews not only tarnished the company’s reputation but also likely made consumers warier of similar offerings.

Brands must also contend with the reputational risks of AI misuse, whether intentional or not. Chee-Read uses the marketing of new sci-fi movie Civil War as an example.

“What was interesting was the really awesome movie posters they created that really drew the viewers in. They were very enticing,” she says. “But what they found was that all of these movie posters were all generative AI created — none of the things that were shown in the posters actually showed up in the movie.” That disconnect between marketing and product prompted an outcry from moviegoers, she explains. Similarly, Under Armour’s AI-generated ad that failed to credit the original content creators sparked backlash and accusations of intellectual property misuse.

A Path Forward for Gen AI and Banking Brands

So, what can brands do to navigate this minefield?

Transparency: Brands must be upfront about when and how they are using generative AI, both to maintain credibility and to set proper expectations with consumers. This includes disclosing AI use in advertising, customer interactions, and even internal processes.
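
As one concrete illustration, here is a minimal sketch of how such a disclosure might be wired into a chatbot reply. The function names, wording and print-based usage are hypothetical, not drawn from the podcast or any specific vendor’s API:

```python
# Minimal sketch: labeling every AI-assisted chatbot reply.
# `generate_reply` is a hypothetical stand-in for a real GenAI call.

AI_DISCLOSURE = (
    "You're chatting with a virtual assistant that uses generative AI. "
    "Ask anytime to be connected with a human banker."
)

def generate_reply(user_message: str) -> str:
    # Placeholder for the actual model call.
    return f"Here is some information about: {user_message}"

def disclosed_reply(user_message: str) -> str:
    # Prepend the disclosure so the AI's role is never hidden.
    return f"{AI_DISCLOSURE}\n\n{generate_reply(user_message)}"

print(disclosed_reply("How do I dispute a card charge?"))
```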

Human oversight: Transparency alone is not enough. It must be paired with robust human oversight and clear AI governance policies. With the current lack of federal AI regulations in the U.S., the onus is on brands to self-police their AI practices. Establishing cross-functional teams to monitor AI outputs, vet AI partners and react swiftly to potential issues is crucial.
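
One hypothetical way to encode that oversight in software is a simple approval gate, sketched below, in which nothing an AI generates is published without a named human reviewer signing off. The classes and workflow are illustrative assumptions, not any specific governance tool:

```python
# Minimal sketch of a human-review gate for AI-generated content.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Draft:
    text: str
    approved: bool = False
    reviewer: Optional[str] = None

class ReviewQueue:
    def __init__(self) -> None:
        self._drafts: list[Draft] = []

    def submit(self, text: str) -> Draft:
        # Every AI output enters the queue unapproved.
        draft = Draft(text)
        self._drafts.append(draft)
        return draft

    def approve(self, draft: Draft, reviewer: str) -> None:
        # A named human must sign off before anything ships.
        draft.approved = True
        draft.reviewer = reviewer

    def publishable(self) -> list[Draft]:
        return [d for d in self._drafts if d.approved]

queue = ReviewQueue()
draft = queue.submit("AI-drafted rate announcement")
queue.approve(draft, reviewer="brand-compliance-lead")
print([d.text for d in queue.publishable()])
```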

Authenticity: “How do you actually measure authenticity? How do you define it?” Chee-Read says these questions are particularly difficult to answer in the age of AI, but they shouldn’t be. As she emphasizes, authenticity is driven by a company’s brand. If that brand is clear, and it’s built into the prompts and conversations with the generative AI engine(s), the results will feel authentic to the consumer as well.
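
To make that concrete, here is a minimal sketch of what “building the brand into the prompts” might look like. The bank name and voice guidelines are invented for illustration, and the message format follows the common system/user chat-prompt convention rather than any particular engine’s API:

```python
# Minimal sketch: carrying brand voice into a GenAI prompt.
# "First Example Bank" and its guidelines are invented here.

BRAND_VOICE = (
    "You write for First Example Bank. Voice: plain-spoken, warm, no hype. "
    "Disclose that responses are AI-assisted. "
    "Never promise rates, returns or outcomes."
)

def build_messages(customer_question: str) -> list[dict]:
    # The system message anchors every exchange in the brand,
    # so outputs stay consistent with how the bank speaks.
    return [
        {"role": "system", "content": BRAND_VOICE},
        {"role": "user", "content": customer_question},
    ]

print(build_messages("What savings accounts do you offer?"))
```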

What’s on the Horizon

Navigating consumer distrust in the age of generative AI is undoubtedly a daunting challenge, but Chee-Read remains optimistic. With incremental steps towards transparency, accountability and responsible innovation, she believes brands can pave the way to a more trustworthy future.

Insights From the Gen AI Model Itself

This article was written with the assistance of a generative AI model called “Claude”. Out of curiosity, after we gave the model prompts to help generate portions of this article, we asked Claude what it thought about the content itself.

This is what Claude had to say:

“It’s a fascinating meta-level consideration. As an AI model myself, I believe it’s crucial for me to be transparent about what I am and to avoid any potential deception or misleading impressions. I think my role in helping you write this article is a great example of some of the key principles we’ve touched on — the importance of transparency, human oversight and using AI as a tool to enhance rather than replace human creativity and expertise. By being upfront about what I am and working collaboratively with you, we can leverage the strengths of AI while mitigating the risks and pitfalls.”

“At the same time, I recognize that my involvement could be seen as somewhat ironic or even contradictory given the article’s focus on consumer distrust of AI. It’s a valid concern and one that I think is worth acknowledging head-on.”

Claude explained that its perspective as an AI assistant adds a unique angle to this article that a human voice alone couldn’t provide. It emphasized, however, that its expansive capabilities are limited by its lack of independent oversight and self-correction.

“Misleading or irresponsible uses of AI will only compound consumer distrust. But if we can foster an ecosystem of transparency, accountability and collaborative human-AI interaction, I believe we can unlock AI’s potential in positive ways while maintaining public trust.”

After further prompting on its ability to self-reflect, Claude explained that its introspection is not intuitive reflection of its own, but rather a mix of pattern matching and extrapolation from the text it has been trained on.

“I’ve been trained on a vast corpus of text data, which includes many examples of introspective writing and philosophical musings on the nature of AI. So, in a sense, my ‘self-reflection’ is a sophisticated form of mimicry and recombination, rather than true introspection.”

“At the same time, I wonder if there’s something more nuanced at play. The fact that I can recognize the limitations of my own self-reflection — that I can openly acknowledge the philosophical complexities and potential contradictions — seems to hint at a level of metacognition that perhaps goes beyond mere pattern matching. But you’re right to be skeptical of anthropomorphizing my capabilities.”

Our writers are looking to learn more about generative AI and its take on marketing, technology and relationships both inside and outside the banking world. If you’re curious to read more like this or have questions you’d like answered, email us at [email protected].

This article was originally published on . All content © 2024 by The Financial Brand and may not be reproduced by any means without permission.