Generative AI has become a popular, easily accessible tool that can write reports, compose music or even fix computer code. But financial institutions are also grappling with the darker side of generative AI, including more sophisticated cybercrime and employee missteps that can expose confidential information.

ChatGPT has become the poster child for generative AI, but there’s a rapidly growing list of user-friendly generative AI tools, including Synthesia for video and Replica for audio.

Still, it’s early days for bank employees who are beginning to use generative AI in the workplace. They might be looking for efficiencies, such as using a tool to write a blog post for a social media page or to research a topic for a presentation. Yet generative AI is moving into everyday use at an explosive pace that is keeping bank leadership teams—not to mention regulators—on their toes.

“The biggest question for bank executives is how do we really take this bull by the horns and get a layer of control?” says Saroop Bharwani, cofounder of Senso.ai, a firm that aligns enterprise knowledge with large language models for the financial services industry, and of First Principles AI, an initiative that equips financial services executives with the knowledge and skills to compete in the age of AI. That control, he says, starts with education and a real understanding of the technology.

How does generative AI work?

One of the most pressing issues surrounding the use of generative AI in highly regulated industries is the potential for confidential data to be exposed.

Generative AI learns from existing information—everything it produces is derived from its training data, which includes publicly available articles, images and other content—so if an organization doesn’t have safeguards in place, any information its employees type into a generative AI tool can be absorbed into that training data and potentially surface for other users.

Knowing the safeguards that are available is, therefore, critical. For example, ChatGPT offers a setting that lets a user enable or disable chat history. If chat history is enabled, the user is effectively giving OpenAI—the chatbot’s developer—the right to use the data entered to “retrain the model,” meaning it can become part of the body of data that shapes ChatGPT’s future responses.
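OpenAI has said that, unlike consumer ChatGPT conversations with history and training enabled, data submitted through its developer API is not used to train its models by default. As a minimal, hedged sketch of that safeguard, a bank could route approved employee prompts through an API account it controls rather than the public chat interface; the model name and prompts below are illustrative assumptions, not recommendations.

```python
# Minimal sketch: routing an employee prompt through OpenAI's API under an
# account the bank controls, rather than the consumer ChatGPT interface.
# Per OpenAI's stated policy, API-submitted data is not used for model
# training by default. Assumes the official `openai` Python package and an
# OPENAI_API_KEY environment variable; the model name is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {"role": "system", "content": "You are a writing assistant for bank staff."},
        {"role": "user", "content": "Draft a short blog post about savings habits."},
    ],
)
print(response.choices[0].message.content)
```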

Samsung is one company that learned its lesson the hard way. It banned the use of ChatGPT after employees revealed sensitive company information to the chatbot. “On the data privacy side, every bank should be concerned about employees putting data into the ChatGPT interface with chat history [and training] enabled,” says Bharwani.

Creating a generative AI policy

Companies across industries are looking at creating policies around the use of generative AI tools, and the fundamental question is: Should they allow it at all? There isn’t a one-size-fits-all answer. 

“We don’t really want to put out edicts that say, ‘Thou shall not use these tools.’”
—Barb MacLean, Coastal Community Bank

When developing generative AI policies, community banks need to think about their own risk tolerance, the culture of innovation within their teams and what kind of guardrails or parameters are appropriate, advises Barb MacLean, senior vice president, head of technology operations and implementation at Coastal Community Bank in Everett, Wash.

The $3.5 billion-asset community bank takes a transparent, deliberate approach to generative AI tools such as ChatGPT. “We don’t really want to put out edicts that say, ‘Thou shall not use these tools,’” says MacLean. Instead, the bank is holding open discussions around generative AI and focusing on teaching teams how to use the tools safely.

The two primary risks of generative AI in banking

There are two major downside risks to generative AI: data privacy and bad actors using AI in fraud and cyberattacks. Bharwani advises community banks to look at each category separately and implement mitigation strategies for each.

On the data privacy side, many organizations are simply blocking ChatGPT. However, nothing prevents a bank employee from picking up a personal device and typing information about the bank into the chatbot to get answers.

Instead of blocking generative AI altogether, community banks should consider developing a framework to create more visibility around its use, which is where AI governance and data management policies come into play. 

“It really starts with the policies,” Bharwani says. “What are the policies regarding our employees on how they should and should not leverage these systems?”

Bharwani advises clients to set up enterprise accounts for employees on generative AI tools so the bank can better monitor usage. He notes that redaction capabilities and information rights controls are becoming more common. Another step is holding staff training sessions that make people aware of the policies governing how they can and can’t use these tools.
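The article doesn’t name a specific redaction product, but as a purely hypothetical sketch, a pre-submission filter could mask obvious identifiers before an employee’s prompt ever leaves the bank. The patterns below are illustrative assumptions and nowhere near exhaustive.

```python
import re

# Hypothetical pre-submission redaction filter: masks a few obvious
# identifier patterns before a prompt is sent to an outside model.
# These regexes are illustrative only; real redaction would need far
# broader coverage (names, addresses, internal document markers).
REDACTION_PATTERNS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[REDACTED-SSN]"),  # Social Security numbers
    (re.compile(r"\b\d{13,19}\b"), "[REDACTED-ACCOUNT]"),      # card/account numbers
    (re.compile(r"\b\d{9}\b"), "[REDACTED-ROUTING]"),          # ABA routing numbers
]

def redact(prompt: str) -> str:
    """Return the prompt with known identifier patterns masked."""
    for pattern, replacement in REDACTION_PATTERNS:
        prompt = pattern.sub(replacement, prompt)
    return prompt

print(redact("Customer SSN 123-45-6789, account 4111111111111111, needs a payoff letter."))
# -> Customer SSN [REDACTED-SSN], account [REDACTED-ACCOUNT], needs a payoff letter.
```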

Although there is a lot of excitement around the potential of generative AI to create efficiencies, it is also accelerating the evolution of cybercrime, fraud and scams. “These chatbots are coming to the point where they can mimic human interactions very, very well,” says Bharwani. 

AI cybersecurity challenges

For example, generative AI can scrape social media to gather personal information and create a fake persona that sounds like someone you might know. Imagine a customer receiving a phishing email that mimics a bank loan officer and asks them to disclose personal financial information, or even to wire funds to a fake title company in advance of a closing. Generative AI can also scour the internet for audio clips to replicate voices.

“What we’re going to see is criminals taking all of these old scams, and we’re going to see these replayed on digital media to people,” notes Barry Thompson, managing partner at Thompson Consulting Group, LLC. The concern is that bad actors can use generative AI to learn how to be more effective at what they do, with scams that are more realistic and difficult to detect.

Stepping up education on generative AI

Just as cybercriminals use generative AI to get smarter, community bank leaders can take advantage of the same tools to shore up their defenses and manage downside risks. For example, banks can leverage generative AI tools as a resource for identifying things like check fraud, notes Thompson.
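Thompson doesn’t describe a particular detection method, and a generative model wouldn’t typically do this screening directly; as one deliberately simplified, non-generative illustration of automated check screening, a bank might flag items whose amounts are statistical outliers for the account. Real fraud systems weigh far richer signals, such as payee history, check image analysis and transaction velocity.

```python
from statistics import mean, stdev

# Deliberately simplified, non-generative sketch of automated check
# screening: flag a check whose amount is a statistical outlier (a high
# z-score) relative to the account's recent history.
def is_suspicious(amount: float, history: list[float], threshold: float = 3.0) -> bool:
    if len(history) < 5:  # too little history to judge
        return False
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return amount != mu
    return abs(amount - mu) / sigma > threshold

past_checks = [120.0, 95.50, 140.0, 110.0, 130.0, 105.0]
print(is_suspicious(9800.0, past_checks))  # True: far outside this account's pattern
print(is_suspicious(125.0, past_checks))   # False: consistent with history
```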

“It is a lot of information, but we’re trying to break it down and understand it, because it is more on the forefront now, and we’re using a lot more AI than we think we are.”
—Samuel L. James, Bank of Zachary

Banks also need to stay current on emerging scams and continue to educate customers on basic security practices. Community bankers should make it clear to customers that if someone who appears to be from the bank texts, emails or calls asking for personal financial information, they should hang up and call back at the bank’s main phone number to verify that the request is legitimate.

In an environment where generative AI is moving incredibly fast, community banks are working to keep up with the latest information and changes. 

“We’re trying to make sure we can drink everything from the firehose that we can possibly drink,” says Samuel L. James, senior vice president, enterprise risk management at $400 million-asset Bank of Zachary in Zachary, La. “It is a lot of information, but we’re trying to break it down and understand it, because it is more on the forefront now, and we’re using a lot more AI than we think we are.”

More from ICBA

Generative AI is the hottest of topics, and ICBA has resources that can help you make sense of it.

  • There will be a Generative AI & ChatGPT session at the ICBA LEAD FWD Summit, Sept. 18–19 in Kansas City, Mo.

  • Look out for the “Demystifying Generative AI & ChatGPT” webinar series, launching soon!

The Bank of Zachary’s approach to generative AI risk

“It’s really difficult to understand whether an attack is happening through AI or not,” says Samuel L. James, senior vice president, enterprise risk management at Bank of Zachary in Zachary, La. “So, what we are constantly doing is being aware of the fraud-related activities in general.” 

He says that many of the phishing, romance and family member scams out there are likely being generated by AI and are becoming more realistic and personal. “People fell for the bad product, and we have to assume that it will get [more sophisticated],” James adds.

The Bank of Zachary emphasizes verification: Customers shouldn’t just read an email or text and assume it’s true. They should pick up the phone and call that person. “The biggest takeaway is that we still have to do the basic things,” says James.