
During our interview with Greg Ohlendorf, he was interrupted by revisions to the AI policy that his community bank first laid out in 2023.

“My CIO just literally put an updated copy of our artificial intelligence policy on my desk while we’re talking, with redline changes,” says Ohlendorf, president and CEO of $201 million-asset First Community Bank and Trust in Beecher, Illinois. “That’s how fast this is happening.”

Creating a robust AI policy is critical for community banks due to the sheer amount of sensitive information they handle. If vendors are injecting AI into their software, community banks need to make sure they have guardrails in place so that their customer data remains safe and the bank stays within the bounds of regulatory and legal compliance.


“That third-party risk becomes a big issue in terms of how these tools may or may not have some applicability inside of them, and how they interact with all data,” says Charles Potts, ICBA senior vice president and chief innovation officer.

Likewise, employees using AI on their own may be putting customer information into AI applications where it might not belong. Laying out guardrails for how this technology should and should not be used can save community banks a world of hurt if something goes wrong.

How to get started with a community bank AI policy

While there is no single standard yet for what an AI policy looks like, community banks can start by writing out where AI already is, where it can be used and where it can’t. The policy should also address how the bank will ensure that any AI it uses complies with fair lending laws and that biases are neither introduced into nor perpetuated by an AI model, even if that means documenting what a third-party vendor using AI is doing to ensure both of those things.

Another key element of a policy is deciding how employees can use AI and then training them on what they can and cannot do. Once that’s established, bank leadership needs to keep reinforcing and evolving the policy. 

Here’s what community banks can do now to understand where they’re already using AI and how they can set up policies around it.

1. Figure out where AI is already being used in your bank

AI’s evolution in the banking industry

1950s
The terms “artificial intelligence” (1955) and “machine learning” (1959) are introduced.

1960s–1970s
Early AI advancements include chatbot development and advances in speech recognition.

1980s
Banks adopt AI-driven “expert systems” to offer specialized tax and financial advice.

1990s
AI-powered rule-based systems emerge for fraud detection, like the FinCEN Artificial Intelligence System (FAIS).

2000s–2010s
AI virtual assistants provide tailored financial guidance via online or mobile bank platforms. Machine learning replaces rule-based systems for fraud prevention.

2020s
AI transforms banking, aiding in market prediction, customer support and fraud mitigation.

The first thing to know is that even if your community bank hasn’t created an AI strategy or signed contracts with AI-specific vendors, the technology is already in your bank and has been for some time.

“Whether you call it AI or not, your card network has been using a neural network to find fraud for the last 15 years, probably longer,” says Ohlendorf. Generative AI may be the newest application of the technology, but well-established techniques like neural networks and machine learning also fall under the AI umbrella.

So, community banks first need to figure out where the AI technology already is. That starts with talking to software vendors about whether and where AI has been deployed in any of their solutions, says Ohlendorf. “Is it on automatically? Or is it a feature that I have an off switch for?”

For example, in doing such an audit, First Community Bank and Trust leadership made sure AI stayed turned off in a portal they used for board meetings, because they didn’t want that information to make its way into a large language model’s pool of training data. But they also found that AI in other areas was useful to the bank. 

“If it’s on by default, and you’re comfortable with it and acknowledge it in some kind of governance policy,” Ohlendorf says, “then it’s good to go.” 
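For a rough sense of what such an audit might capture, here’s a minimal sketch in Python of a vendor AI inventory; the vendors, field names and settings are illustrative assumptions, not any bank’s actual records.

```python
# A hypothetical audit inventory. The vendors, features and settings
# below are illustrative, not drawn from any bank's real audit.
from dataclasses import dataclass

@dataclass
class VendorAIFeature:
    vendor: str                 # software vendor or product
    feature: str                # what the AI component does
    on_by_default: bool         # active out of the box?
    has_off_switch: bool        # can the bank disable it?
    enabled: bool               # the bank's chosen setting
    in_governance_policy: bool  # acknowledged in the AI policy?

inventory = [
    VendorAIFeature("Board portal", "meeting summarization",
                    on_by_default=True, has_off_switch=True,
                    enabled=False, in_governance_policy=True),
    VendorAIFeature("Card network", "neural-network fraud scoring",
                    on_by_default=True, has_off_switch=False,
                    enabled=True, in_governance_policy=True),
]

# Anything running without a policy acknowledgment needs review.
for item in inventory:
    if item.enabled and not item.in_governance_policy:
        print(f"Review needed: {item.vendor} ({item.feature})")
```

Even a simple inventory like this answers Ohlendorf’s two questions for every product: is the AI on, and can the bank turn it off?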

Once banks have a better idea of where AI is already being used, they can work beyond just an AI policy and look at the benefits and solutions that might be available to help the bank, while making sure that they align with their guidelines. 

2. Set guidelines for bank employee use of AI

The next step is for community banks to learn about how their employees are using AI, whether through third-party vendors the bank already uses or through outside platforms where there is little control over what happens to information once it’s put into a model. 

Employees may also be using AI tools on their own, possibly without realizing the broader implications of doing so, says Zach Duke, CEO of Finosec, a cybersecurity and governance platform provider.

For example, someone at the bank may use a spreadsheet with customer data to create a targeted list of customers to market a certain product to. To do so, they might put that data into a third-party AI platform like ChatGPT or Gemini.

“Now, all of a sudden, they’ve taken the spreadsheet that has customer information and account information, and it’s all out there,” says Duke. “How does the institution even know if that happened? How do they control that process? How do they make sure about having checks and balances in what is acceptable and what is not?”
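As one illustration of what such a check might look like, here’s a hedged Python sketch of a pre-send screen that scans text for patterns resembling customer data before it leaves the bank. The patterns and function name are hypothetical placeholders; a real control would use the bank’s own data classifications and a vetted data loss prevention product, not a homegrown regex filter.

```python
# A hypothetical pre-send screen for text bound for an external AI tool.
import re

SENSITIVE_PATTERNS = {
    "Social Security number": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "possible account number": re.compile(r"\b\d{8,17}\b"),
}

def screen_outbound_text(text: str) -> list:
    """Return the names of any sensitive patterns found in the text."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]

hits = screen_outbound_text("Customer 123-45-6789, account 0012345678")
if hits:
    print("Blocked before reaching the external AI tool:", ", ".join(hits))
```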

A data governance policy should say that “you don’t take customer data out into [external AI platforms], even if you think you’re just doing something for a client or for your boss,” says Devon Kinkead, executive chairman of Micronotes.ai, a financial technology company offering marketing automation solutions to depository institutions. 

But if a community bank doesn’t tell that to their employees, how would they know what they can and can’t do? 

“We’ve got to make them aware once we put up guardrails,” says Ohlendorf. “You can’t just bring in ChatGPT and play with it in the bank without express written permission.”

For example, First Community Bank and Trust talks about AI policy through everyday employee communications, so that it’s naturally integrated into the normal workflow of keeping workers informed.

3. Assess the customer impact of AI

Importantly, a bank AI policy should address customer trust, especially at a time when most people are generally wary of the technology. Only 26% of consumers say they trust organizations to use AI responsibly, according to a January 2025 report by XM Institute. 

This leads to a dilemma, says Potts. “I need to guard the hen house and protect this stuff,” he explains, “but what am I doing with it, and how am I communicating with my customers?”

The policy can also address whether the AI is for internal use only or whether it interacts with customer data, says Duke. “If it’s back office and just used by the team, it’s foundationally different,” he notes.

A policy should also spell out where humans are involved in the process to verify that AI results are accurate, especially if the AI interacts with customer data.
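A minimal sketch of that kind of human-in-the-loop gate, where generate_draft() is a hypothetical stand-in for whatever AI tool the bank has approved and nothing reaches a customer without sign-off:

```python
# Hypothetical review gate: AI output is held until a human approves it.
def generate_draft(prompt):
    return f"Draft reply for: {prompt}"   # stand-in for an AI call

def send_with_review(prompt, reviewer_approves):
    draft = generate_draft(prompt)
    if reviewer_approves(draft):          # a human checks accuracy first
        return draft                      # approved drafts go out
    return None                           # rejected drafts never do

# The reviewer here is a simple callback; in practice it would be a
# person working a review queue.
reply = send_with_review("fee waiver question",
                         reviewer_approves=lambda draft: len(draft) > 0)
```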

4. Ensure your plan evolves as AI does

Beyond the AI banks already have, software vendors are introducing AI components into their normal workflows. For example, Microsoft 365 introduced its AI Copilot in March 2023, and Google searches now come back with AI-generated answers.

“All software companies are becoming AI enabled,” says Todd P. Michaud, CEO at AI-based work intelligence and automation company HuLoop Automation Inc. “There’s no way for a community bank to not have AI [somewhere] in their business, no matter their best efforts.” 

That’s why it’s important to regularly review and update your bank’s AI policy as you adopt new technology or as existing technology is enhanced with AI. The good news, says Michaud, is that the AI evaluation process isn’t much different from how banks already vet any technology they want to buy: You evaluate both the potential security risks and what it can do for the bank. “That buying process still has to focus on what’s the value and what’s the outcome,” he adds.

First Community Bank and Trust has gone through five revisions of its AI policy so far, and the policy has expanded from half a page to three pages.

“We have other policies that haven’t been updated in 10 years and don’t need to be,” says Ohlendorf. “This isn’t one of those.” 

AI is here to stay, meaning a comprehensive and evolving AI policy is critical for community banks. Kinkead points out that his company can process hundreds of millions of data points and make sense of them in real time because of AI.

“This is very real,” says Kinkead. “This isn’t science fiction. We’re doing this today, and it’s all based on big data and AI.”