Evaluating the pros and cons of agentic AI, one of the hottest buzzwords in business, is challenging if for no other reason than that this branch of AI isn’t yet in daily use.

First, what is it? IBM defines agentic AI as “an artificial intelligence system that can accomplish a specific goal with limited supervision. It consists of AI agents—machine learning models that mimic human decision-making to solve problems in real time.”

Likewise, Amazon Web Services notes that “agentic” indicates agency—the ability of these systems to act proactively and independently to achieve pre-set goals without constant human oversight.

If unleashing a new technology like this sounds both exciting and terrifying, community bankers face still another hurdle: Agentic AI can be a bit of a “black box,” as it’s generally not something developed in-house.

“Most community banks are not using their own AI,” says Greyson Tuck, attorney and consultant at Gerrish Smith Tuck in Memphis. “They’re using AI through what they buy or lease from other providers.”  

5 use cases

Practical applications can make it easier to envision what agentic AI solutions might achieve:

  1. ACH return items. Processing ACH returns is a back-office task that makes sense for agentic AI because these items are handled according to a fairly fixed set of rules. In an agentic AI world, each morning, a piece of tech might evaluate, research and resolve any ACH items that have been returned, flagging those requiring human intervention. 

  2. Credit underwriting. Applying traditional, human-intensive underwriting practices to small-dollar consumer loans is too costly for most community banks, says Tuck. However, if loans meet certain criteria—say, the value is under a prescribed amount and the borrower has a credit score above a specific threshold—then AI agents could be a viable option for approving these loans.  

  3. Fraud prevention. Dylan Lerner, senior digital banking analyst at Javelin Strategy & Research, is convinced that “agentic AI can make fraud and compliance teams more efficient.” He maintains that agentic AI is well positioned as a technology that can be used to flag instances of potential fraud and, when warranted, stop transactions before they’re completed. 

    Lerner, who conceives of an AI agent as a sort of “copilot,” notes that AI agents could support banks in identifying activities that warrant suspicious activity reports (SARs) in real time.

    “One of the fun things about AI is how quickly it works,” he says. “Agentic AI doesn’t have to catch fraud on the back end. … Right then and there, the bank can raise a flag for the teller and even deny a transaction.”

  4. Orchestrating AI processes. Given its ability to analyze which actions are needed for each discrete step within a process, agentic AI has the potential to train various tech solutions to achieve goals set by a community banker, says Madeline Fredin, VP of partnership strategy at Alloy Labs. 

    Specifically, Fredin notes that an agentic AI solution might monitor the flow of AI solutions in, say, a bank’s back-office dispute process. Here, she says, “the agentic AI could automate the system and also be the one to run it.”  

  5. Superior call agents. “Think of [an AI agent] as you would an employee. You could give it a task, give it objectives, and then it will go off and find the best way to complete that task,” explains Mickey Marshall, vice president and regulatory counsel for ICBA. 

    An AI agent helping a customer can do far more than a chatbot, which typically answers a single question, says Marshall. “[Agentic AI] operates independently and progresses through reasoning in a way that chatbots can’t.”
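The rule-gated pattern behind the first two use cases can be sketched in a few lines of code. This is a minimal illustration, not a production underwriting system: the threshold values, the `LoanApplication` fields and the `triage` function are all hypothetical stand-ins for whatever criteria a bank would actually set, and a real agent would layer research and decisioning on top of a gate like this.

```python
from dataclasses import dataclass

# Hypothetical policy thresholds -- each bank would set its own.
MAX_AUTO_AMOUNT = 5_000   # dollars
MIN_CREDIT_SCORE = 700

@dataclass
class LoanApplication:
    amount: float       # requested loan amount in dollars
    credit_score: int   # applicant's credit score

def triage(app: LoanApplication) -> str:
    """Auto-approve only when the application clears every fixed rule;
    anything else is flagged for a human underwriter."""
    if app.amount <= MAX_AUTO_AMOUNT and app.credit_score >= MIN_CREDIT_SCORE:
        return "auto-approve"
    return "human-review"

# A small-dollar application from a strong borrower clears the gate;
# a larger request falls back to a person.
print(triage(LoanApplication(amount=3_000, credit_score=720)))   # auto-approve
print(triage(LoanApplication(amount=12_000, credit_score=720)))  # human-review
```

The key design point is that the agent only acts autonomously inside a box the bank has drawn in advance; everything outside the box is escalated, which is also how the ACH-return example above flags items requiring human intervention.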

Cultivating a community bank feel

Anticipating something like agentic AI can be a stretch for community bankers, who pride themselves on deep local knowledge and a personal touch.

“If your community bank is just an app on your phone and not a flesh-and-blood person down the street, that’s never been a community banker’s angle,” says Lerner.

For community bankers who someday employ AI agents, he says there are a number of ways to humanize the experience: “Give [your agentic AI solution] a name. Give it a personality. Give it something that reminds customers you’re talking to your bank and not just talking to a piece of tech.” He points out that when an AI virtual assistant has a distinctive voice (think “Siri” or “Erica”), some of the impersonality melts away.

Another key, says Lerner, is training AI to hand off customers to a live chat agent or bank employee if the customers indicate discomfort or annoyance. “It’s important to connect back to the hometown feel of a community bank, even with something as unruly and big as a piece of agentic AI technology,” he says.

Evaluating risks

Quick stat: 55% of U.S. adults want more regulation on artificial intelligence. (Source: Pew Research Center)

Even those community banks opposed to using agentic AI internally might have to grapple with this technology as their customers begin to employ it. If an AI agent searches out and buys plane tickets on behalf of an individual, for instance, the bank becomes involved via the payments piece.

Or take an example even closer to home: a customer tasks an AI agent with finding and switching to a credit card that meets their criteria. That raises a host of questions, given that third-party agents are generally prohibited from applying for a credit card on a user’s behalf.

When it comes to a bank’s customers using agentic AI, the financial institution must make sure precautions are in place to keep end customers’ data safe, says Marshall.

Because AI is still so novel, he strongly recommends crafting an official AI policy that takes into consideration agentic AI, as well as facets of the other technologies involved. He urges bankers to ask themselves, “Do I treat an AI as a customer? And do I have a formal written policy for how to approach all this?”

What lies ahead

With agentic AI pulling together multiple AI technologies, no one knows precisely when it will enter the mainstream, but Marshall suggests it will be sooner rather than later.

“[Not so long ago], generative AI was prone to giving incorrect answers or hallucinating. Now, though, if you use AI, you get pretty good results,” he concludes. “I don’t see the pace of the evolution of this technology slowing down.”


AI agents: A legal vacuum

One reason agentic AI can appear risky is that its evolution is outpacing the oversight functions. “You don’t have a well-settled body of law or history of precedent—or even community banks that have utilized AI and seen how the regulators respond,” says Tuck.

This may soon change, says Anjelica Dortch, vice president of operational risk and cybersecurity policy for ICBA. She notes that this summer, the Trump administration released its AI action plan. “[The plan] updates the IRS code to allow businesses like community banks to get a tax credit for upskilling their employees around AI,” she says.

Another wrinkle is that most community banks are using third-party vendors instead of developing agentic AI solutions themselves.

“We need an elevated level of due diligence around third-party AI vendors,” says Dortch. When implementing a new piece of tech, she advises asking: “Is this considered AI? Are there agentic capabilities within it? And can you describe those? It’s really breaking down what you’re getting inside these tech stacks and knowing the risks involved.”

Finally, she urges community bankers to be clear about what risk levels they can tolerate: “If this is a brand-new fintech capability, you could be the guinea pig for [agentic AI]. And you probably don’t want that for your institution.”