An Inside Look at Ally Bank’s Measured Roll-Out of GenAI

Q&A: As financial institutions race to implement AI across their organizations, public anxiety and government scrutiny are intensifying. Can banks deploy and leverage AI to power growth while avoiding public backlash? Ally Bank's approach – simultaneously trailblazing and prudent – is one promising model.

In a wide-ranging discussion with Digital Banking Report founder Jim Marous, Ally chief information officer Sathish Muthukrishnan and Microsoft’s managing director of data and artificial intelligence Priya Gore unpack lessons from their AI partnership and spotlight the opportunities and principles guiding Ally’s adoption blueprint, applicable to any financial institution eyeing productive AI integration that puts people first.

Q: In collaboration with Microsoft, Ally built a bespoke internal AI platform in November 2023 to explore AI and machine learning techniques. What prompted moving so quickly?

Sathish Muthukrishnan: AI has vast latent potential, but most models require extensive guidance to prevent harm. As a digitally native organization, Ally’s culture embraces managed risk-taking, and that fueled the quick mobilization of a joint team with Microsoft to explore possibilities safely yet creatively.

We both wanted to learn together — collaboratively forming an initial ethical AI framework spanning from security to explainability to accessibility. Doing so required almost daily policy adjustment to uphold safety for consumers and employees alike, with transparency.

Q: Sathish, you have noted that financial services require a tailored framework different from that of other industries for AI oversight. Why?

Muthukrishnan: Every industry needs guardrails guiding technology deployment, of course, but finance presents distinctly multidimensional risk profiles because of the direct linkage with consumers’ financial livelihood. Miscalibrated AI can jeopardize both profitability and economic inclusion if flawed assumptions or training data inadvertently cause undesirable segmentation, for example.

So, financial applications demand especially rigorous scrutiny and continuous monitoring beyond just technical bugs. Quickly identifying and mitigating unintended discrimination is vital. Our ethics council — which combines internal leaders, outside advisory and legal — provides independent oversight, flagging possible issues like unfair bias perpetuation before models ever touch customers.

Identifying the Early Wins

Q: Where did you see some early promising results from controlled AI testing? What surprised you about experiments so far?

Muthukrishnan: First was the sheer breadth of staff eager to explore potential AI applications across departments; we have over 200 crowdsourced ideas now. People instinctively recognize that mundane workflows can be enhanced, freeing more human capacity for judgment-oriented tasks.

For example, early successes centered on customer support call summarization. Applying natural language processing to accurately transcribe conversations immediately after completion saves precious minutes of manual transcription. But more importantly, advisors obtained fuller context around client needs from the enhanced summaries. This quickly sparked explorations of personalization capabilities based on that improved understanding.
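
The post-call workflow described here — transcribe the conversation, then distill the customer's needs for the next advisor — can be sketched in outline. This is a minimal illustrative sketch, not Ally's implementation: the `Turn` structure, the `NEED_KEYWORDS` set, and the keyword-scoring heuristic are all assumptions standing in for the trained NLP models a production system would use.

```python
from dataclasses import dataclass

@dataclass
class Turn:
    speaker: str   # "agent" or "customer"
    text: str      # transcribed utterance for this turn

# Hypothetical keywords that flag a customer-stated need; a real system
# would use a trained summarization model rather than this heuristic.
NEED_KEYWORDS = {"refinance", "dispute", "fraud", "rate", "payment", "close"}

def summarize_call(turns: list[Turn]) -> dict:
    """Return a short post-call summary: flagged customer needs and turn count."""
    needs = []
    for turn in turns:
        if turn.speaker != "customer":
            continue
        # Normalize each word and check it against the keyword set.
        words = {w.strip(".,!?").lower() for w in turn.text.split()}
        if words & NEED_KEYWORDS:
            needs.append(turn.text)
    return {"turns": len(turns), "customer_needs": needs}
```

A summary like this gives the advisor the flagged utterances directly, which is the "fuller context" the interview describes, while the full transcript stays available for audit.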

It turns out that the greatest returns are generated by deploying AI to better utilize our specialized human talents, and transparently conveying tradeoffs so teams feel empowered by AI instead of threatened. Our commitment is to elevate human potential by thoughtfully leveraging algorithms, not replacing people outright in dehumanizing ways.

To mitigate fear and help employees feel comfortable with AI in the workforce, communicate the tradeoffs clearly.

Priya Gore: At Microsoft, our product teams explore AI applications that restore helpfulness to technology, assisting individuals in completing tasks rather than complicating them.

We believe assistive interfaces fundamentally uplift consumer experiences. Training AI to write helpful replies to customer questions, or to summarize agreements and surface key sections, saves enterprise clients thousands of hours that can be better invested in tasks requiring higher levels of judgment.

Financial advisors’ AI-augmented explanations of mortgage options or portfolio scenarios earn trust through openness.

Q: Given the breakneck pace of innovation, how do you decide when specific generative AI capabilities are developed enough for testing? What criteria signal readiness to expand beyond early limited availability previews?

Gore: Given that societal stakes remain so high, baseline thresholds span technical accuracy, security protections, transparency, and training data oversight. And requirements continue ratcheting higher.

Generative writing is a good case study in balancing powerful capability with trust and responsibility. While the technology is by no means fully mature today, gradually exposing it to structured experimentation gives teams operational experience identifying areas for improvement jointly with its creators before scaled usage.

The Ongoing Balancing Act

Q: What risks or pitfalls in AI exploration currently require the greatest vigilance as financial applications accelerate? What principles are guiding Ally’s governance presently?

Muthukrishnan: First, generative AI still lacks reliable memory for full-context personalization, so experiences demand continuous human guidance tailored to individuals. We closely track model behavior to rapidly catch distortion risks and update programming.

Additionally, given the innate sensitivity of financial data, applications touching financial services face the highest transparency requirements for data-powered experiences. Consumers must see AI’s capabilities and limitations clearly to build durable trust, not feel stonewalled by opacity and inscrutability. And our development mandate requires avoiding efficiency gains that worsen inequality or dehumanization.

That is why we vet all evaluation ideas through a specialized ethics council encompassing external advisors examining factors like unconscious bias perpetuation and other societal risks.

Q: Sathish, you noted that finance differs from other industries: building training models on actual consumer data risks significant harmful personal consequences downstream. What safeguards specifically help keep individual data from contaminating external systems?

Muthukrishnan: That was an immense concern from the start, and we treat it with enormous care. We built the platform with stringent protocols governing data flows. Confidential information stays resident in secure internal environments and never transfers externally, except in limited cases authorized under strict policies. Multi-layered controls encompass encryption, access limitations, and verification procedures combating external exposure.

Additionally, queries processed through Azure OpenAI undergo exhaustive scrubbing that irreversibly removes any personally identifiable linkages before final submission. We contribute the code we develop back to open-source community resources to uphold uniform protections universally, removing institutional barriers limiting financial inclusion.
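
The scrubbing step described — stripping personally identifiable information from a prompt before it leaves the secure environment — can be illustrated with a simple pattern-based redactor. This is a hedged sketch, not Ally's actual pipeline: the three patterns and the replacement tokens are assumptions, and a production scrubber would cover many more identifier types and validate its output before release.

```python
import re

# Illustrative redaction patterns (assumed, not Ally's real rule set).
# Substitution is one-way: the original values are not recoverable
# from the scrubbed prompt, matching the "irreversible" requirement.
PII_PATTERNS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),      # US Social Security number
    (re.compile(r"\b\d{13,16}\b"), "[CARD]"),             # card-like digit run
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),  # email address
]

def scrub(prompt: str) -> str:
    """Replace PII-like substrings before the prompt leaves the secure zone."""
    for pattern, token in PII_PATTERNS:
        prompt = pattern.sub(token, prompt)
    return prompt
```

Because only the redacted text is submitted to the external model, a breach or log leak on the far side exposes placeholder tokens rather than customer identifiers.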

The Talent Question: Hire or Train?

Q: Priya, part of ethical governance relies on financial institutions attracting specialized technical talent overseeing models. However, competition continues to intensify for data scientists and other skilled AI roles. What emerging best practices facilitate better team composition and skill building?

Gore: Certainly, recruiting and continuously developing AI talent is increasingly vital to stay competitive. But rather than focus on the risk of smaller players losing out, we view this as an inflection point as capabilities are actually democratized and cascade across organizations through partners.

Firms are launching AI centers of excellence, mandating cross-training rotations, earmarking professional development budgets, and standardizing tools to smooth organizational adoption.

Beyond base technical chops, creative confidence and empathy are paramount in navigating uncertainty responsibly. We coach teammates on discovering problems collaboratively before solutions.

Q: Priya, what suggestions would you offer finance CEOs for identifying specialists equipped to assist their AI-focused modernization?

Gore: Ask for proven experience launching and then tuning initial applications with comparable peers at scale. Does the partner showcase staff with banking technology deployment expertise alongside AI advisors? Can they illustrate how proposed tools concretely uplift goals rather than vanity metrics?

Justin Estes is an award-winning writer, strategist, and financial marketing expert with expertise in banking, investments, and fintech. His clients include the NYSE, Franklin Templeton, Credit Karma, Citi, and UBS, and his work has appeared in Forbes, Barron’s, and ThinkAdvisor, as well as The Financial Brand.

This article was originally published on . All content © 2024 by The Financial Brand and may not be reproduced by any means without permission.