Generative AI, often abbreviated GenAI, has made its mark among emerging technologies. According to the McKinsey Global Survey on AI, 65% of respondents report that their organizations have adopted GenAI in at least one business function, a leap from 33% in 2023. McKinsey also found that more than three-quarters of respondents predict that GenAI will lead to significant or disruptive change in their industries.

The firm further estimates that the technology could add $200 billion to $340 billion in value across the banking industry if predicted use cases are fully realized. In banking, one of those use cases is cybersecurity. GenAI can help banks more quickly identify potentially fraudulent activity and security flaws that a human might not spot.

Despite all the hype, the technology is still new, which is why most community banks are talking about it but not necessarily implementing it just yet. “A lot of folks are in the discovery stage right now, and that’s prudent,” says Scott Anchin, vice president of operational risk and payment policy at ICBA. For now, a community bank might try a GenAI-powered chatbot that is clearly labeled as AI while only dipping its toe into potential security uses.

At the same time that this technology is being presented as a possible security bolster, the power of GenAI can be harnessed by bad actors too. It can help them to create more effective phishing attacks to try to break into banks, replicate customer voices to bypass voice recognition security measures, or even impersonate bankers in the C-suite to trick employees into sending money where it doesn’t belong.

Here’s what community banks need to know about GenAI in security right now.

The early days of GenAI


Artificial intelligence itself isn’t new—the first AI program was presented at a conference at Dartmouth in 1956. But GenAI, which uses algorithms to create new content, is. It exploded into wide use shortly after ChatGPT, a type of GenAI, was released to the public in November 2022.

Right now, a lot of the chatter about GenAI in business centers on how it could be used rather than on concrete, proven use cases. That’s especially true in highly regulated industries like banking. Dana Twomey, director and head of financial risk services and compliance at digital services firm West Monroe, says her clients are mostly excited about the potential of GenAI but are also wary of what they don’t know about the technology.

“It’s relatively early days across the industry, and it’s very early days in the community bank space,” she says. 

Most community banks are what Twomey calls “fast followers,” meaning they don’t want to be on the bleeding edge of technology, either because they can’t afford to develop it on their own or because they want companies capable of taking bigger risks to test it first. Community banks “don’t want to get sideways with [their] regulators,” she says. One misstep could prove costly, and they “don’t want to do something that would impact customers.”

But she uses “fast” in “fast followers” for a reason. As soon as the technology is proven and more commonly adopted by larger institutions, community banks will almost certainly get on board. “As that adoption turns out the ROI it claims, I think it will ramp up very, very quickly,” she says. 

Jon Sandoval, chief information officer for $2.3 billion-asset Sunrise Banks in St. Paul, Minn., says that there’s “a lot of hype right now, but there’s hype for a good reason. Everyone sees the potential of it.” 

But right now, it’s mostly potential. He continues, “As it matures, we will see a lot of beneficial use cases and embrace it.”

Can AI beef up cybersecurity?

When it comes to cybersecurity, financial institutions are increasingly under attack. GenAI could step up to help in a few ways.

First, it can be used to review code, says Karl Falk, founder and CEO of Botdoc. “In any large platform, you have millions and millions of lines of code,” he says. While humans can find errors, it’s impossible to spot everything. GenAI-powered scans can look for errors and fix them before they become problems for end users or security holes through which attackers could enter. 
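
To make the code-review idea concrete, here is a minimal sketch of how a development team might ask a hosted GenAI model to flag issues in a code snippet. It assumes the openai Python package and an API key are available; the model name and prompt are placeholders, not a description of any vendor’s scanning product.

```python
# Minimal sketch: asking a hosted LLM to flag potential security issues in a
# code snippet. Assumes the "openai" Python package and an API key configured
# in the environment; the model name is illustrative, not a recommendation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def review_snippet(code: str) -> str:
    """Ask the model to list likely bugs or security flaws in `code`."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any chat-capable model works
        messages=[
            {"role": "system",
             "content": "You are a code reviewer. List potential bugs and "
                        "security issues, each with the line it affects."},
            {"role": "user", "content": code},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(review_snippet("def check_pin(pin):\n    return pin == '1234'\n"))
```

In practice, a scan like this would run over many files, with findings routed to a human reviewer.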

GenAI chatbots can also be deployed to test community bank employees to make sure they’re not skirting security measures already in place, says Falk. For example, a chatbot can pose as a customer who wants to text sensitive information directly to a banker instead of using a secure file-sharing system. Through this kind of AI-generated testing, security teams can see whether the measures in place are being followed and which employees might need additional training to stick with the security program.

It can also be used for threat detection and analysis to look for patterns that might indicate various kinds of fraud, according to David Brauchler, principal security consultant for NCC Group, a cybersecurity consulting firm. It can do that by analyzing things like network traffic and user behavior to “help community banks recognize potential malware or irregularities,” he says.

One way GenAI could do that is by identifying synthetic IDs that are used to open bank accounts to perpetrate fraud, says Anchin. It could also help address fraud by detecting patterns associated with falsely seasoning one of these accounts to make it look legitimate.
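
To illustrate the pattern-spotting idea in the simplest possible terms, the sketch below flags accounts whose activity looks unlike the rest. It uses a classical anomaly-detection model, scikit-learn’s IsolationForest, rather than a generative model, and the account features and numbers are invented for illustration; real synthetic-ID or seasoning detection would draw on far richer data.

```python
# Illustrative only: flag accounts whose activity profile is an outlier,
# a rough stand-in for the "account seasoning" patterns described above.
# Classical anomaly detection (IsolationForest), not GenAI; data is made up.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-account features: [days_open, deposits_per_week, logins_per_week]
accounts = np.array([
    [400, 2, 3],
    [380, 1, 4],
    [350, 3, 2],
    [420, 2, 5],
    [30, 25, 40],   # new account with rapid, repetitive activity
])

model = IsolationForest(contamination=0.2, random_state=0).fit(accounts)
flags = model.predict(accounts)  # -1 marks outliers worth a human look

for features, flag in zip(accounts, flags):
    if flag == -1:
        print("Review account with features:", features.tolist())
```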

Sandoval sees the potential in using GenAI to “very quickly explore options of threat actors.” He notes that GenAI could generate various use cases or scenarios on “how bad actors could get into your systems and how [they could] breach your data.” That way, security teams can figure out how they would potentially respond to such threats and bulk up their security, so those cases stay only theoretical.

Most likely, community banks have already seen their core vendors offering GenAI enhancements, says Anchin, as some third-party vendors have recently gone to market with AI solutions. “Choice is always a good thing, but with great choice comes great responsibility,” he says.

That means community bankers must understand exactly what will happen with the data used by any GenAI platform. Is it going to be kept in the community bank’s environment, where it can be accessed only by the bank using it? Or is it going to be sent to a cloud environment, where the vendor could then use it to further train its AI models?

That latter scenario is fairly common, as most vendors will “want to consume that data back to train their models,” says Twomey. “From a banking perspective, is that something you’re willing to agree to? Not every bank is.” Sending such sensitive data out of a bank’s control can anger customers and present compliance problems. 

Planning is imperative


Community banks need to start thinking through these problems before deploying any GenAI, especially in a security capacity, Twomey says. Setting a solid AI strategy and foundation “not only helps internally in terms of defining strategy for AI, but it helps provide auditors and regulators with essentially a road map of how institutions are going to do this as securely as they possibly can.” 

It can also help address any customer concerns about these tools being used by their bank. 

That also means looking at how employees may be using GenAI already, with or without any official company policy. “Oftentimes companies are allowing their employees to use ChatGPT or other GenAI models. However, they could be inputting sensitive or confidential data for that organization,” Sandoval says. “Somebody could say, ‘Generate a report that has this data in it.’” 

But without guidelines or instructions on how these tools should and shouldn’t be used, employees could be inputting sensitive financial or customer-identifying information into the algorithm. “It’s critical for organizations to ensure they have a governance process in place to monitor and prevent that type of information from being input,” he says.
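
As one small, concrete piece of such a governance process, a bank could screen prompts for obvious sensitive identifiers before they ever leave its environment. The sketch below is a minimal illustration; the patterns and the blocking policy are assumptions, not a complete data loss prevention control.

```python
# Minimal sketch: block prompts containing obvious sensitive identifiers
# before they are sent to an external GenAI service. The patterns below are
# illustrative and far from exhaustive.
import re

BLOCKED_PATTERNS = {
    "Social Security number": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card or account number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive patterns found in the prompt."""
    return [name for name, pattern in BLOCKED_PATTERNS.items()
            if pattern.search(prompt)]

prompt = "Generate a report for customer 123-45-6789 covering last quarter."
violations = screen_prompt(prompt)
if violations:
    print("Blocked: prompt appears to contain:", ", ".join(violations))
else:
    print("OK to send")
```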

As part of that process, whoever is leading any AI-related strategy should “make best friends” with their legal, governance, privacy and security teams, says Brauchler. These officials can ensure that any proposed use, especially analyzing customer data with GenAI, meets the real-life requirements for that data, and they can nix potential vendors who don’t meet the bank’s standards.

Potential AI-fueled threats

Just as AI can be used by community banks to build better defenses, it can be deployed by malicious actors to build more powerful attacks. 

For example, GenAI is being used to sharpen spear phishing attacks, in which fraudsters pretend to be someone else to trick a target into handing over important information. Typically, says Sandoval, recipients can spot these emails because they are written by speakers of another language and the language doesn’t read right. However, GenAI is helping scammers write cleaner translations.

Brauchler says GenAI can also scrape someone’s social media accounts. That means bad actors don’t just know who someone’s boss is, but also where someone’s kids go to school or what they were doing last week—any detail that might make an attacker’s email seem more real. “You are more likely to respond to it because it recognizes aspects of your personal, individual life and collects these data points,” he says. “Human brains are susceptible to trusting people or, in this case, machines that have information that only trusted individuals should know.”

Threat actors can also use GenAI to create more realistic clones of someone, he adds. For example, earlier this year, hackers stole $25 million from a company by creating video deepfakes of employees, including the chief financial officer (see sidebar below). “They had [not verified] that these people weren’t who they said they were,” Brauchler says.

How have fraudsters used generative AI maliciously?

The prospect of using GenAI to pretend to be someone to steal money isn’t just a future problem.

In March 2024, cybercriminals used AI deepfakes to pose as the chief financial officer and other employees of Arup, a U.K.-based engineering group. During a video conference, the fraudsters prompted staff to transfer $25 million into bank accounts in Hong Kong, according to the Financial Times.

As of press time, it’s believed to be the world’s largest deepfake scam.

Brauchler notes that’s also a problem for community banks that use things like voice recognition to verify customer identity. “I could create an almost identical duplication of your voice,” he says.

Community banks can combat some of these increased threats by making sure they have good cyber hygiene, including ongoing employee training to spot phishing attacks and simulated phishing campaigns to see who falls for them. 

To ensure someone’s voice can’t be cloned and then used maliciously, Brauchler recommends that community banks always use an additional method, beyond voice recognition alone, to verify a customer’s identity.

With both cybersecurity professionals and fraudsters looking to tap into the potential of generative AI, it’s going to be a bit like a “spy vs. spy” situation, with both sides picking up new tools to advance their end of the cybersecurity fight—even if community banks are just in the evaluation period right now.

“When you start to think about cybersecurity and fraud protection and things like that, you’re bringing the need for more scrutiny over products that you use and policies and procedures you adapt around those things,” Anchin says. “It’s a scale, and you don’t see many institutions relying super heavily on AI yet.”

Word from the White House

In October 2023, the Biden administration released an executive order intended to address both the potential good and bad effects of AI. The order included a bevy of instructions, including that, in accordance with the Defense Production Act, developers of AI systems must share their safety test results and other critical information with the U.S. government. The order also directed the National Institute of Standards and Technology to develop standards, tools and tests to help ensure that AI systems are safe, secure and trustworthy.

Additionally, the order stated that the federal government will enforce existing consumer protection laws and enact “appropriate safeguards against fraud, unintended bias, discrimination, infringements on privacy and other harms from AI,” and that such protections are especially important in critical fields, including healthcare and financial services, “where mistakes by or misuse of AI could harm patients, cost consumers or small businesses, or jeopardize safety or rights.” To read the full executive order, go to whitehouse.gov.