“It’s like ‘Jaws’ but in space.”
That illustrative description — which a script screener used to get filmmaker Walter Hill to take a look — led to the making of the 1979 movie “Alien.”
“Genius, right?” said artificial intelligence expert Bjorn Austraat. “Says everything. Very short. Super effective.
“They didn’t come up with that on the fly. They thought about that for a long time.”
Austraat is senior vice president and head of artificial intelligence acceleration at the $574 billion-asset Truist. He brought up the story about the classic sci-fi horror film while discussing how challenging it can be for technology and banking executives to communicate with each other about enterprise AI.
But Austraat — who said he has worked on AI technology since before it was called AI — has some practical solutions to nurture effective communication, bridge conflicting expectations, and address other challenges. His tactics are meant to help keep such challenges from derailing enterprise AI projects that inevitably involve many people from across departments working together over many months.
Going from the Idea Stage to Enterprise AI in Action
Technology, in many ways, is as much about the people using it as about what it does.
Yet one aspect of managing tech projects that doesn’t get a lot of attention is navigating the expectations of the people involved.
Austraat said expectations are a factor not only with top executives, but everyone across the organization, even younger members of the tech team. It’s a factor that commonly crops up in relation to the timeline for a project, especially longer-term ones like those involving artificial intelligence.
“Enterprise AI is tricky, because it’s moving fast and slow at the same time, especially in highly regulated industries like banks, where it can take you six to 12 months to make the model and then another six months to validate it before it goes live,” said Austraat.
This often comes as a revelation, he said, to younger AI builders who grew accustomed in college to knocking out a complete model — albeit in a vacuum — in a single weekend. Devising AI technology that aligns with a long list of banking needs and requirements is much more complex, he said.
“The success of enterprise AI is about 5% having a great model and 95% everything else,” said Austraat.
“Everything else” covers a lot of ground. But Austraat tackled what he sees as some of the biggest obstacles to effective adoption of AI in the banking industry during a presentation at Re-Work’s AI in Finance Summit in New York this spring. Below are three key pieces of advice the Truist executive shared, along with some of his tactics for implementing each one.
1. Find Common Language for Talking About Artificial Intelligence
“Implementing AI takes a lot of translation,” said Austraat, who has been working in this area since the 1990s.
“It’s very hard to get people to understand each other because some people speak ‘data science-ese,’ some speak ‘executive-ese,’ and some speak ‘implementation-ese,’” Austraat said. “It’s hard to bridge those cultures and languages.”
His first job actually concerned communication, not computers. He served as an interpreter for conferences, even some sponsored by the United Nations.
“I was helping people understand each other across language boundaries,” he said, “and I have the same job now. It’s just between data scientists and executives.”
Austraat coaches his AI team at Truist extensively to apply a principle he calls “cognitive courtesy” — consciously working to communicate effectively with non-tech staff and leaders. Every discipline has its own jargon, he said, and what serves as efficient shorthand within one tribe can be unintelligible to another.
Another phrase he shares with his team, “unconscious incompetence,” is a reminder that people can get so caught up in their own perspective that they fail to recognize they don’t have all the information they need. Even worse, they can become so sure of themselves that they fail to take in important information being shared with them.
Beware ‘Unconscious Incompetence’:
Austraat said tech types need to watch out for “unconscious incompetence” — that is, forgetting there are things that you don’t know you don’t know.
From his side of the divide, Austraat pushes Truist team members out of their comfort zone, demanding that they learn to speak about artificial intelligence and its applications intelligibly. One method is picking data scientists and engineers at random in meetings to describe what they’ve been working on all week. He said they are expected to have their “Jaws in Space” pitches ready. If they don’t …
“After their first three or four red-faced moments, they spend a non-trivial amount of time getting ready for those meetings,” said Austraat. “And they will have it ready when they walk into a senior executive in the hallway — and that’s when it really matters.”
2. For AI Success, Work on Team Building & Manage Expectations
So many people are having fun with the latest iteration of generative AI — for example, asking it to create a portrait of their dog as a rock star — that getting results from this kind of technology can feel deceptively fast and simple.
Implementing enterprise-wide artificial intelligence is anything but.
One of the challenges of having AI experts and mainstream bankers working together is managing expectations, Austraat said. Developing heavy-duty AI takes time.
“A problem, especially in regulated industries, is that the span from idea to running model can be 18 months,” said Austraat. “And the average attention span of an executive is not 18 months.” It is critical that communication be realistic and that the value of what happens over those months becomes clear.
There Is No Instant Oatmeal Version of Enterprise AI:
Business unit leaders and newcomers to banking need to learn what a realistic timeframe for delivery is.
Austraat said he expects compliance will become more stringent as regulators get a better handle on how they want to deal with AI technology. This, in turn, will push AI development timelines out even further.
Understanding the range of attitudes towards AI is important for tech staff, Austraat said, because it can be a factor in moving projects along. Again using a sci-fi example, he said that for some people artificial intelligence conjures images of “The Terminator,” a threat. For others AI signifies friendly, helpful entities like Commander Data, the android on “Star Trek: The Next Generation.”
“Agile pods” have helped Truist with bridging these kinds of barriers to effective communication. Austraat said AI practitioners meet regularly with staff from other areas of the organization, including legal, risk management and compliance. Everybody learns something about speaking in each other’s “language.”
Key to making the group process work is talking about “value stories,” according to Austraat. Focusing on the reason AI tech is being applied, and the benefits to be realized, provides common ground. “What is the pitch, the hook that everybody agrees on?” said Austraat. Once that becomes the consensus, “everybody can understand and everyone can work in the same direction.”
3. ‘Kudos and Campfires’: Celebrate Wins & Learn from Mistakes
“Maybe you’ve heard it said, ‘Praise in public, criticize in private,’” said Austraat. But he thinks sharing what went wrong, and what can be learned from it, is just as important as celebrating the wins while a project is underway.
At Truist, he’s worked to create a culture where people share the things that didn’t go well. A presentation bombed. An AI model failed to do what it was designed to do.
“I don’t make people talk about them,” said Austraat. “I ask them to volunteer.”
This is the “campfire,” a time when people discuss the flops and then, conceptually, they are committed to the flames of the fire. “We all learn from it and that’s it,” said Austraat. “No big deal.”
As a result, he said, “you find out about things that didn’t go well much sooner than if you make failure unacceptable.”