Deepfake fraud is on the rise, and so are regulatory expectations that banks spot, mitigate and report attempts. While such fraud is becoming more sophisticated, there are subtle tells that indicate something is amiss, says Scott Anchin, ICBA’s senior vice president of strategic initiatives and policy. 

“For example, in a video, lighting or shadows might be inconsistent, or voices might not line up with lip movements,” Anchin says. “Over the phone, monotone speaking or a lack of breathing could indicate the speaker is not real.”

In November, the Financial Crimes Enforcement Network (FinCEN) issued an alert to help financial institutions identify fraud schemes associated with the use of deepfake media created with generative AI tools. In the alert, FinCEN also spotlights common red flags of possible deepfakes, details methods to detect and mitigate attempts—and reminds financial institutions of their reporting requirements under the Bank Secrecy Act (BSA).

Doing your part

Filing suspicious activity reports (SARs) on possible deepfakes allows banks to alert law enforcement and regulatory agencies about potential criminal activity, such as money laundering, terrorist financing and fraud, according to Rhonda Thomas-Whitley, senior vice president and senior regulatory counsel for ICBA.

“Generally, the report is designed to document suspicious transactions and behavioral patterns and is used by law enforcement in their investigation of illicit activities,” she says. “Some of the ways in which FinCEN uses SAR data is to help identify fraudulent schemes and trends, to help pinpoint areas that have high fraud activity, and to help identify and disrupt criminal networks.”

For possible deepfakes, financial institutions should include a detailed description of the known or suspected violation of law or suspicious activity, she says. (See sidebar.)

FinCEN’s deepfake red flags

  • A customer’s photo is internally inconsistent (such as if it shows visual tells of being altered) or inconsistent with their other identifying information (such as if a customer’s date of birth indicates they are much older or younger than the photo would suggest).

  • A customer presents multiple identity documents inconsistent with each other.

  • A customer uses a third-party webcam plug-in during a live verification check. Alternatively, a customer attempts to change communication methods during a live verification check due to excessive or suspicious technological glitches during remote verification of their identity.

  • A customer declines to use multifactor authentication to verify their identity.

  • A reverse image or open-source search of an identity photo matches an image in an online gallery of faces produced by generative AI (GenAI).

  • A customer’s photo or video is flagged by commercial or open‑source deepfake detection software.

  • GenAI-detection software flags the potential use of GenAI text in a customer’s profile or responses to prompts.

  • A customer’s geographic or device data is inconsistent with the customer’s identity documents.

  • A newly opened account or an account with little prior transaction history has a pattern of rapid transactions; high payment volumes to potentially risky payees, such as gambling websites or digital asset exchanges; or high volumes of chargebacks or rejected payments.

To learn more, visit fincen.gov.

FinCEN requests that financial institutions filing SARs on possible deepfakes reference its alert by including the key term “FIN-2024-DEEPFAKEFRAUD” in SAR field 2 (“Filing Institution Note to FinCEN”).

In the SAR’s narrative section, institutions should indicate a connection between the suspicious activity being reported and any information within FinCEN’s deepfake alert, referencing any applicable key terms indicating the underlying typology.

Taking extra precautions

When it suspects a fraudster may be using a deepfake to pose as a legitimate customer, Isabella Bank in Mt. Pleasant, Mich., uses several verification tools, including standard questions for the individual presenting the questionable information or document, says Jenn Brick, vice president and director of customer service operations at the $2 billion-asset community bank.

“However, that’s not always foolproof because, unfortunately, fraudsters are really good at obtaining information,” Brick says. “So, we will then ask some additional questions on top of our typical verification questions before acting on any requests to make changes to the account.”

For example, Brick and her team might ask questions about recent transactions the legitimate customer conducted, such as what a transaction was for and its total amount, “types of things that are very specific to that customer that’s not easy for fraudsters to obtain,” she says.

When educating customers on how to spot and thwart deepfake attempts aimed at them directly, Isabella Bank recommends that customers ask the individual whose voice may be faked questions that only the real person would know how to answer.

FinCEN’s alert also details some of the methods banks can use to detect and mitigate possible deepfake fraud attempts.

Taking up tech

Fortunately, technologies to detect deepfakes are keeping pace with fraudsters’ innovations, Anchin says. For example, new solutions use artificial intelligence and machine learning to determine whether an individual in a video call is truly live.

“Tools that analyze transactions to detect anomalies can also be used to identify activities that are out of the ordinary, regardless of whether a speaker is a real human or a deepfake,” he says. “Biometrics and multifactor authentication can help, too.”

Beyond demonstrating compliance with BSA regulations, filing SARs on possible deepfakes also helps FinCEN identify emerging patterns of fraud, enabling the agency to determine whether specific incidents are connected to larger schemes and to draw related conclusions about suspicious activity.

“Filing SARs enables community banks to play an important role in helping the ecosystem to prevent and detect fraud and scam activity,” Anchin says.