Exploring GenAI Options Beyond OpenAI’s ChatGPT

Banks that want to investigate generative artificial intelligence tools for marketing and other tasks aren't limited to products from OpenAI and its ChatGPT family. Here's a sampling of what else is out there.

Mark Rober has made a name for himself on YouTube as one of a number of engineering types who turn physics and other science into fun and games. You may not remember his name, but if you remember the guy who foiled porch delivery thieves with glitter-bomb boxes, who set up sophisticated squirrel obstacle courses with a major dump of walnuts for the winners, and who frequently creates massive-scale science fair projects on YouTube, that’s Rober.

So he was a natural to tap when Google wanted to popularize its latest generative artificial intelligence release, Gemini 1.0, from within its Google Bard AI service in December 2023. In a video that Rober filmed for Google on his YouTube channel, he asked Bard for ideas for a video about trying out the new GenAI. Among the results: Use the technology to design a perfect paper plane. Along the way came Bard’s further recommendations that it be a giant paper plane that could be launched and flown through a ring of fire.

Not something your bank or credit union is likely to try, but Rober made it fun to see the process at work. His next step was asking the technology to give him an outline for the video — based on the typical structure of a Mark Rober video. Then he consulted with Bard on design. Bard offered three recommendations, along with predictions about how each would perform, and Rober’s tests with scale models confirmed what the tech had predicted: The most accurate design for passing through the ring of fire would be one emulating the lines of the old Concorde supersonic transport.

“That is a fine example of ‘aerogami,’” said Rober after a successful test flight, “and, yes, Bard taught me that word.” (It refers to the folding of paper to produce airplanes.)

Bard also suggested that accuracy could be improved by designing a much larger craft, making the folds sharper, and manufacturing the “paper” airplane from stiff foam core board. After some rough trials, Rober and Bard worked out the kinks and the six-foot-long plane sailed through the ring, unscorched.

“Normally it takes a year for me to go from an idea to a final build like this,” said Rober at the video’s conclusion. “This only took three weeks. Massive time saver, every step of the process.”

Given Rober’s example, it’s not hard to see bank personnel asking Bard/Gemini for suggestions on new ideas for annual report presentation, mobile banking button designs, maybe even floor plans for next-generation branches.

What was missing from the above? Not once did OpenAI or ChatGPT get mentioned. There’s much more going on that bank and credit union executives interested in GenAI techniques should explore, beyond just the much-publicized OpenAI family.


Understanding GenAI Tools for Today and Tomorrow

Google included the video in a blog post announcing the latest iteration of GenAI from its labs. (It’s being rolled out in two stages, the first in December 2023 and the second in early 2024.) In this and other blog posts and technical papers, Google illustrates how GenAI is moving beyond just text and “writing” to tasks that are multidimensional in nature. Google’s commentary describes how it has developed the Gemini family of GenAI for “multimodality” — to be able to “seamlessly understand, operate across and combine different types of information, including text, code, audio, image and video.”

Bard hasn’t been on the street all that long — it debuted in March 2023 — but Google has refreshed its push after it was widely seen as being caught flat-footed by OpenAI’s introduction of ChatGPT in late 2022. Google had elements of its basic GenAI technology in place as early as 2021 but had initially chosen not to make its work public. The initial reaction to Bard was somewhat lukewarm and that hurt the stock of parent Alphabet for a time.

Some basic GenAI definitions help to understand more about how this technology works.

One is prompt. This is the request put to the GenAI software, such as those that Mark Rober made to Google Bard for each step of his paper-plane project. Prompts are often questions, but they can also be in the form of examples, text or even computer code. (Deeper dive: Part of the learning curve that Ally Financial marketing staff went through in experimenting with GenAI was framing prompts.)

Another is tokenization. This refers to the way text is converted into a form that machine-learning models can process. As explained by Bhavya Singh on Medium:

“Machine-learning models are just big statistical calculators that deal with numbers, not words. So, before feeding them text, we need to turn words into numbers. This is called tokenization. It’s like giving each word a special number that the model understands from a big dictionary of words it knows.” Tokens may represent multiple words or they may represent parts of words.
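The word-to-number mapping Singh describes can be sketched in a few lines of Python. This is a deliberately simplified illustration — production LLM tokenizers use subword schemes such as byte-pair encoding, so a single word may map to several tokens — but the core idea of a "big dictionary" assigning IDs is the same.

```python
# Toy illustration of tokenization: turning words into numeric IDs.
# Real tokenizers split text into subword pieces; whole words are
# used here purely for clarity.

def build_vocab(corpus: str) -> dict[str, int]:
    """Assign each unique word in the corpus a numeric ID."""
    vocab: dict[str, int] = {}
    for word in corpus.lower().split():
        if word not in vocab:
            vocab[word] = len(vocab)
    return vocab

def tokenize(text: str, vocab: dict[str, int]) -> list[int]:
    """Convert text into the list of IDs the model actually sees."""
    return [vocab[word] for word in text.lower().split() if word in vocab]

corpus = "the model reads numbers not words"
vocab = build_vocab(corpus)
print(tokenize("words not numbers", vocab))  # → [5, 4, 3]
```

The model never sees "words not numbers" — it sees only the ID sequence, which is what makes token counts (rather than word counts) the natural unit for measuring model inputs.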

Tokenization is important to understand because it relates to a term frequently used to compare large language models. This is the context window. Context windows are measured in terms of tokens. The bigger the context window, the more text input a large language model can consider when framing a response to a query.

Bigger is generally seen as better, as explained in a backgrounder from Hopsworks, an AI consultancy: “Larger context window sizes increase the ability to perform in-context learning in prompts. That is, you can provide more examples and/or larger examples as prompt inputs, enabling the LLM to give you a better answer.” Part of the reason is that being able to handle a bigger “scoop” provides that much more context for the AI to use to understand what’s being referred to.
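One practical consequence of a fixed context window is that applications must decide what to keep when the input grows too large. A common pattern is to drop the oldest material first. The sketch below illustrates this, using the crude assumption that one word equals one token (real tokenizers typically produce more tokens than words).

```python
# Sketch: fitting a message history into a fixed context window.
# Assumes 1 token ≈ 1 word, a rough approximation for illustration.

def fit_to_window(messages: list[str], window_tokens: int) -> list[str]:
    """Keep the most recent messages that fit within the token budget."""
    kept: list[str] = []
    used = 0
    for msg in reversed(messages):   # walk from newest to oldest
        cost = len(msg.split())      # rough token count for this message
        if used + cost > window_tokens:
            break                    # oldest material gets dropped
        kept.append(msg)
        used += cost
    return list(reversed(kept))      # restore chronological order

history = ["first long message here", "short note", "latest question"]
print(fit_to_window(history, 5))  # → ['short note', 'latest question']
```

A 200,000-token window simply means far less of this trimming is needed — whole documents can ride along as context instead of being cut.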

(“Bigger” is a comparative word and requires scale to mean something. We’ll give an idea of scale in the next section.)

Here are two other GenAI entrants, with links for banking institutions that want to explore them further.


Anthropic’s ‘Claude AI’

Claude 2.1 is the latest version of this GenAI offering, from Anthropic. The company was started by former senior staffers at OpenAI after a rift developed over OpenAI’s direction. The founders were concerned about misuse of AI, and one of their responses to those concerns was “Constitutional AI,” a set of principles designed to create an AI system that is “helpful, honest, and harmless.”

The latest iteration of Claude, the firm’s GenAI program, doubled the maximum number of tokens that could be put into a prompt to 200,000 tokens. This equates to roughly 150,000 words, or over 500 pages of material. A company blog points out that that could take the form of corporate financial statements, entire databases of computer code, or even long literary works like “The Iliad” or “The Odyssey.”
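The conversions above follow from common rules of thumb — roughly 0.75 words per token and about 300 words per printed page (both approximations, not figures from Anthropic):

```python
# Rough sanity check of the token-to-words-to-pages conversion,
# assuming ~0.75 words per token and ~300 words per page
# (widely used approximations; actual ratios vary by text).
tokens = 200_000
words = int(tokens * 0.75)   # ≈ 150,000 words
pages = words / 300          # ≈ 500 pages
print(words, round(pages))   # → 150000 500
```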

The company notes that it was the first to be able to accommodate messages of that length. In addition, it claims to have achieved a major reduction in false statements from Claude 2.1 versus Claude 2.

Amazon Q

In late November Amazon announced its latest foray into GenAI, Amazon Q, which it describes as an AI-powered assistant “built for business.” It is designed to solve problems, perform research, generate content, and connect with a company’s databases and systems. The new program is offered through Amazon Web Services, which many banking companies already work with for cloud and other computing services. Amazon Q will come with connector programs for many enterprise software systems and data sources, such as Google Drive and Dropbox. In December 2023, it was in preview mode in certain U.S. regions.

A use case provided by Amazon: A marketing manager could ask Amazon Q to turn a press release into a blog post, consulting the company’s style guide as it performs the task. After the document was ready and posted, Amazon Q could also promote the blog post by creating social media posts. The software could then track the results of the effort and summarize what happened for management’s review.

A factor to consider is the training base that a particular program uses. Different versions of ChatGPT, for example, have different cutoff dates for the data used to train them. Other models are designed to access the internet in real time as well, pulling in information beyond their training data that is up to date.

This article was originally published on . All content © 2024 by The Financial Brand and may not be reproduced by any means without permission.