Generative AI is kicking phishing attempts up a notch, as fraudsters now have more ways to easily impersonate real people by more closely mimicking their writing style, voice and even their personas via deepfake videos.

“Generative AI is a relatively new technology, and like any sort of new technology, it has beneficial uses, like making work easier and making employees more productive,” says Mickey Marshall, ICBA’s assistant vice president and regulatory counsel. “But it also has nefarious uses, and it’s being put to use by criminals to engage in various kinds of fraud.”

The types of generative AI fraud are similar to the phishing scams that have been on the scene for some time, Marshall says. In one common example, a fraudster impersonates a boss and demands that an employee immediately wire money to a "customer's" overseas account that actually belongs to the fraudster. In another, a "family member" calls, stranded on the highway with a broken-down car, and needs money for a tow truck.

However, these tech-savvy fraudsters have a leg up on traditional fraud methods. “Simulated voice and video add another emotional layer that’s going to make people more likely to act on that fraud,” Marshall says.

Spotting deepfakes

For now, the technology for generating voice and video doesn't perfectly reproduce how a real person looks, sounds and acts, so carefully scrutinizing emails, calls and videos can still detect and thwart generative AI phishing attempts.

“A big part of it is trusting your instincts, and if something seems off about the video or a call that you’re receiving, don’t ignore it,” Marshall says.

For example, a deepfake video may not perfectly match the person the fraudster is attempting to impersonate. There may be momentary glitches or freezes in parts of the face, or the body may not be anatomically correct.

“It sounds silly, but are there too many fingers? Does this person have too many teeth? Are the shadows matching up right? Is there too much glare on their glasses, or no glare on their glasses when there should be?” Marshall says. “If there are things that don’t look quite natural, that can be a warning sign that some AI generation is being used.”

Community banks should train their employees and customers that if they are in doubt about an email, phone call or video, they should ask the person something that would confirm their identity.

“Not something that’s publicly available information, like what high school they went to,” Marshall says. “But if you have some memory or shared experience with that person, ask them in detail about that, something that wouldn’t be easy to look up. That can be another way to confirm if they are the real person rather than a fraudster.”

The potential of generative AI

Even as concern surrounding fraud grows, generative AI's usefulness in the banking sector is apparent. McKinsey & Company Financial Services predicts generative AI could deliver between $200 billion and $340 billion in value to banks, with retail banking, corporate banking, and risk and legal expected to see the greatest benefit from the advancing technology.

Spotting written generative AI fraud

As generative AI grows more advanced, it eliminates the typical red flags of phishing attempts, such as odd punctuation, grammatical errors or words that simply don't make sense. Those mistakes often appear because the attempt is coming from someone who isn't fully fluent in English, says Lance Noggle, ICBA senior vice president, operations and senior regulatory counsel.

“Now, it’s going to come out as a good convincing email, and then the fraudster can insert their malicious link, attachment or what have you,” he says. “But there are other things that banks can look for and things they can still do to make sure that the emails or the phone calls they’re getting are legitimate.”

Noggle suggests that when scrutinizing emails, bank employees and customers should ask themselves whether the person in question would send them such an email. They should also examine the email address. In addition to the name, is everything else exactly the same as the typical email they would receive from this person?

Moreover, if the email is supposedly from their boss working in the same city but was sent in the middle of the night, would that make sense? More likely, it's coming from someone halfway across the world.
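Checks like these are simple enough to script. The sketch below, a minimal illustration in Python, encodes the two heuristics Noggle describes: an exact-match test on the sender's address and a sanity check on the send time. The trusted addresses, domain and message details are hypothetical placeholders, not any real bank's controls.

```python
from datetime import datetime

# Hypothetical known-good addresses; a real deployment would pull these
# from the bank's own directory rather than a hard-coded set.
TRUSTED_SENDERS = {"ceo@examplebank.com", "cfo@examplebank.com"}
BUSINESS_HOURS = range(7, 19)  # roughly 7 a.m. to 6 p.m. local time

def flag_suspicious_email(sender: str, sent_at: datetime) -> list[str]:
    """Return plain-language warnings for an incoming message."""
    warnings = []
    # First check: is the address an *exact* match for a known contact?
    # Lookalike domains such as "examp1ebank.com" fail this test.
    if sender.lower() not in TRUSTED_SENDERS:
        warnings.append(f"Sender {sender!r} is not an exact match for a known address.")
    # Second check: would this person plausibly be emailing at this hour?
    if sent_at.hour not in BUSINESS_HOURS:
        warnings.append(f"Sent at {sent_at:%H:%M}, outside normal business hours.")
    return warnings

# Example: a lookalike domain, sent at 3:12 a.m., trips both rules.
print(flag_suspicious_email("ceo@examp1ebank.com", datetime(2024, 5, 2, 3, 12)))
```

Automated rules like these are only a first filter; the decisive step is still verifying with the person through a channel you already trust.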

The same best practices for thwarting standard phishing attempts still come into play, like reaching out to the person in question in another way to ask them if they actually sent the email, Noggle says. If the person says yes, the bank employee or customer should then verify something about the person to make sure they are real before they take any action.

Voice impersonation scams

If it’s a phone call, unless the fraudster has somehow spoofed the person’s number, they are likely calling from a different number, Noggle says.

“After you listen to what they say, hang up and call the person being impersonated from a number that you know is actually theirs to find out if they are the one calling on the other number or whether they are, indeed, being impersonated by a fraudster,” he says.

The key to thwarting generative AI phishing attempts is to always scrutinize emails, phone calls and videos that seem “a little off,” and then find ways to verify identities before clicking a link, opening an attachment or following instructions to send money.

“Unfortunately,” says Noggle, “generative AI fraud is a new landscape. But fortunately, if you’re following the same kind of standard best practices, it shouldn’t, hopefully, have too much of an impact.”