AI Is Coming for Your Wire Transfers: A Practical Defense Guide for Small Professional Services Firms (2026)


Published: May 5, 2026 | By: The Crossing Report


Summary

  • AI-enabled fraud surged 1,210% in 2025. Voice cloning fraud rose 680% year-over-year. (Brightside AI)
  • 800 accounting firms were targeted in a single 2026 campaign using AI-generated emails referencing firm-specific registration data. Click rate: 27% — ten times the industry average. (Brightside AI)
  • Average loss per incident exceeds $500,000 for small businesses. You do not need to be a Fortune 500 company to lose real money.
  • One rule stops most attacks: any wire transfer request — regardless of who it appears to come from — requires a callback on a known, pre-existing phone number before execution.

The Threat Is Real and It's Targeting Firms Your Size

In early 2026, Brightside AI documented a fraud campaign that targeted 800 accounting firms. Not large national firms with full IT departments. Small accounting firms — the kind with 8 to 20 staff and an office manager who handles the wire transfers.

The attackers used AI-generated emails referencing each firm's specific state registration number, recent filings, and in some cases, client names scraped from publicly available sources. The emails looked like they came from the IRS, a state licensing board, or a known vendor. The click rate was 27%. The industry average for phishing is 2 to 3%.

That gap — between what phishing used to look like and what AI-generated phishing looks like now — is what makes this moment different from every previous cybersecurity conversation you've had.

The Journal of Accountancy published "AI risks CPAs should know" in February 2026 and followed with "Elder fraud rises as scammers use AI" in April 2026. CPA Practice Advisor ran "AI Social Media Scams Are Coming for Your Accounting Firm" on March 25, 2026. The trade press is covering this actively. Your clients are reading about it. The question is no longer whether this is a real threat to firms your size. It is. The question is: what exactly do you do?

This post gives you the straight answer.


Three Attack Types Your Firm Will Actually See

1. Voice Cloning (Vishing)

Vishing — voice phishing — is a fraud technique in which an attacker calls a target while impersonating a trusted person. AI has made this dramatically more dangerous.

Voice cloning software can now replicate a specific person's voice from as little as 10 seconds of audio. That audio doesn't need to come from a private source. A partner's recorded biography on your firm's website, a presentation uploaded to YouTube, a voicemail you saved — all of it is source material.

The resulting call sounds identical to the real person. An employee who would hang up on a generic robocall will often comply with what sounds like their managing partner calling from a client meeting, saying there's an urgent wire transfer that needs to go out before 5 PM.

Voice cloning fraud rose 680% year-over-year in 2025 (Brightside AI).

2. AI-Generated Spear Phishing (BEC)

AI business email compromise (BEC) is the email version of the same attack. The difference from classic phishing is specificity. Old phishing emails were easy to spot: generic, often poorly written, from obvious fake addresses.

AI-generated spear phishing emails reference your firm's state registration number. They mention a recent filing or a client you've worked with. They use your firm's proper name, address, and the correct names of your partners. They contain no grammar errors. They look like a communication from your state licensing board, or from a client's CFO, or from a vendor you actually use.

The Brightside AI 2026 campaign that hit those 800 accounting firms used exactly this method.

3. Deepfake Video

Deepfake video impersonation is still emerging but documented. In real-time video calls, AI can now overlay a different face and voice onto the caller's image. A staff member on a video call with what appears to be a known contact — a client's CFO, a partner, a bank representative — may be looking at a real-time AI impersonation.

The most high-profile case: Arup, the engineering firm, lost $25 million in a single incident when a finance employee was instructed to transfer funds by what appeared to be the CFO during a video call. The call was entirely AI-generated.


What a Real Attack Looks Like

Scenario A: The Partner Call That Isn't the Partner

It's 4:15 PM on a Wednesday. Your office manager, Sarah, picks up her phone. She sees the managing partner's name on the caller ID.

The voice is his. The cadence is right — she's worked with him for six years. He's traveling, which is why he's calling instead of walking over. He sounds a little rushed.

"Sarah, I need you to process a wire transfer before close of business today. We have a settlement going through — the client needs the funds to land by 5 PM or the deal falls through. I'll explain everything when I'm back tomorrow. Here's the account number."

Sarah has done wire transfers before. She knows the managing partner. She trusts the voice. She writes down the account number.

The managing partner was in a deposition three floors down. He never made that call. The voice was assembled from the audio recording of his firm bio that's been on the website for four years.

By the time Sarah reaches him at 4:45 PM, the wire has gone out. There is no settlement. There is no client waiting for funds. There is no way to recover the transfer.

Scenario B: The Urgent Client Email That Knows Too Much

It's Monday morning. Your billing coordinator receives an email that appears to come from the CFO of one of your largest clients. The subject line references your firm's actual engagement number — a number that appears in a state-level filing that is publicly accessible if you know where to look.

The email explains that the client needs to redirect this month's retainer payment to a new account number. There's been a bank change, effective immediately. The email includes the client's correct firm name, your engagement details, and a plausible explanation about a bank merger. The sending address is off by one character — "cfoo" instead of "cfo" — but the display name shows correctly, and nobody inspects the underlying address in a busy window.

Your billing coordinator, processing end-of-month payments in a busy window, forwards the email to accounts payable with a note: "Updated banking per client CFO."

The retainer goes to the fraudster's account. The real client's CFO learns about it when their payment doesn't arrive. You are now in a conversation about liability, client trust, and your professional responsibility coverage.
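The one-character address trick in this scenario can be caught mechanically, before a human ever has to notice it. Below is a minimal sketch (the addresses and similarity threshold are hypothetical, and any real deployment would sit in your mail filter, not a standalone script) of flagging a sender address that nearly, but not exactly, matches a known contact, using only Python's standard library:

```python
import difflib

# Hypothetical list of known, verified sender addresses for one client.
KNOWN_SENDERS = {"cfo@clientfirm.com", "ap@clientfirm.com"}

def flag_lookalike(sender: str, known: set, threshold: float = 0.9) -> bool:
    """Return True if `sender` is suspiciously similar to, but not
    exactly, a known address (e.g. cfoo@ versus cfo@)."""
    if sender in known:
        return False  # exact match: nothing to flag
    for addr in known:
        ratio = difflib.SequenceMatcher(None, sender, addr).ratio()
        if ratio >= threshold:
            return True  # near-match: likely a lookalike
    return False

print(flag_lookalike("cfoo@clientfirm.com", KNOWN_SENDERS))  # True: near-miss
print(flag_lookalike("cfo@clientfirm.com", KNOWN_SENDERS))   # False: exact
```

The point is not that your firm should write this script. It's that "off by one character" is exactly the kind of check software does reliably and busy humans don't, which is why the DNS filtering and email security tools discussed below earn their cost.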


The Numbers You Should Know

  • AI-enabled fraud up 1,210% in 2025 (Brightside AI)
  • Voice cloning fraud up 680% year-over-year (Brightside AI)
  • Average deepfake fraud loss over $500K for small businesses; $680K average for large enterprises (Brightside AI)
  • Arup lost $25M in a single deepfake video incident (public record)
  • A UK energy firm transferred €220K after a CEO voice clone call (public record)
  • 27% click rate in the 2026 AI-targeted accounting firm campaign (Brightside AI)
  • MFA blocks 99.2% of account compromise attempts (Microsoft Research)

The Four-Step Defense Protocol for a 10-Person Firm

You don't need an IT department to implement this. You need one team meeting, one policy change, one software setting, and one conversation with your internet provider or IT vendor. Here's exactly what to do.

  1. Wire Transfer Verbal Confirmation Protocol. Any request to move money — wire transfer, ACH, change of banking information — requires a callback on a known, pre-existing phone number before execution. Not the number that called you. Not the number in the email. The number in your contacts, or on the organization's official website. One rule. Non-negotiable. This single protocol stops the majority of voice clone and BEC attacks before they succeed. A legitimate partner or client will understand. A fraudster will not be reachable at the real number.

  2. Multi-Factor Authentication on All Firm Email. MFA requires a second verification — a code texted to your phone, or a push notification to an authenticator app — before anyone can log into firm email accounts. Microsoft Research found MFA blocks 99.2% of account compromise attempts. If your firm email runs on Microsoft 365 or Google Workspace, MFA is a settings change, not a technology purchase. Turn it on. Make it mandatory. This is non-negotiable.

  3. Run a Voice Clone Demonstration with Your Team. This step is about making the threat real, not theoretical. Record a 30-second audio clip of a partner — a bio recording, a meeting intro, anything available. Use one of the free voice synthesis tools to generate a short phrase in their voice. Play it back in your next team meeting. Ask the room: "Would you have recognized that as fake?" That 60-second exercise permanently changes how your staff approaches unexpected requests from familiar voices. The goal is not to train them to detect fakes in real time — they can't. The goal is to make them understand why the callback protocol exists.

  4. DNS Filtering — Block Phishing Domains Before Staff Can Click. DNS filtering routes your office's internet traffic through a service that blocks known malicious domains at the network level. When an AI-generated phishing email links to a lookalike domain (your-firm-portal-login.com instead of your actual portal), DNS filtering stops the page from loading before the employee sees it. Services like Cisco Umbrella or Cloudflare Gateway offer small business plans starting under $5 per user per month. Your internet provider or IT vendor can configure this in under an hour. It runs invisibly once deployed.
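To make step 4 concrete, here is an illustrative sketch of the core idea behind a filtering resolver: every lookup is checked against a blocklist before it resolves. The domain names and blocklist below are made-up examples, not a real threat feed, and this is not how Cisco Umbrella or Cloudflare Gateway are implemented internally — commercial services maintain and update these lists for you at massive scale.

```python
# Hypothetical blocklist entry: a lookalike phishing domain.
BLOCKLIST = {"your-firm-portal-login.com"}

def resolve(domain: str) -> str:
    """Mimic a filtering DNS resolver: return a block marker for
    known-bad domains (and their subdomains), pass everything else."""
    if domain in BLOCKLIST or any(domain.endswith("." + d) for d in BLOCKLIST):
        return "BLOCKED"   # the page never loads; the employee never sees it
    return "RESOLVED"      # passed through to a normal DNS lookup

print(resolve("your-firm-portal-login.com"))  # BLOCKED
print(resolve("journalofaccountancy.com"))    # RESOLVED
```

The practical takeaway: the block happens at the network level, before the browser renders anything, so it protects even the employee who clicks.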


Frequently Asked Questions

How are AI social engineering attacks targeting law and accounting firms?

AI social engineering attacks on professional services firms use three main methods: voice cloning (an AI replicates a partner's or client's voice from publicly available audio), AI-generated spear phishing emails (using firm-specific data like state bar registration numbers, EINs, or recent filings to appear legitimate), and deepfake video impersonation. Brightside AI documented a 2026 campaign that targeted 800 accounting firms specifically, using AI-generated emails referencing state registration numbers — and achieved a 27% click rate, versus a 2–3% industry average.

How much money have professional services firms lost to AI fraud?

AI-enabled fraud surged 1,210% in 2025 (Brightside AI). Voice cloning fraud rose 680% year-over-year. Average deepfake fraud loss for small businesses exceeds $500,000; large enterprises average $680,000 per incident (Brightside AI). In the highest-profile documented cases, Arup lost $25 million in a single deepfake incident, and a UK energy firm transferred €220,000 after receiving a voice-cloned call from a fraudster impersonating the CEO.

What is vishing and how does AI make it more dangerous?

Vishing — voice phishing — is a fraud technique in which an attacker calls a target and impersonates a trusted person to extract money, credentials, or sensitive information. AI makes vishing dramatically more dangerous because voice cloning software can now replicate a specific person's voice from as little as 10 seconds of audio. The resulting call sounds identical to the real person. An employee who would ignore a generic robocall will often comply with what sounds like their managing partner calling from the road and asking for an urgent wire transfer.

What should a 10-person firm do to defend against AI social engineering?

A 10-person professional services firm without an IT department can implement four concrete defenses: (1) A wire transfer verbal confirmation protocol — any wire transfer request requires a callback on a known, pre-existing phone number before execution. (2) Multi-factor authentication (MFA) on all firm email — Microsoft Research found MFA blocks 99.2% of account compromise attempts. (3) A staff training exercise: record a 30-second audio clip of a partner and play it back in a team meeting to demonstrate that voice can be faked. (4) DNS filtering, which blocks AI-generated phishing domains at the network level before an employee can click.

How can I tell if a voice call is AI-generated?

Current AI voice clones are often indistinguishable from real voices on a phone call — that is the core of why this threat is serious. The correct defense is not trying to detect fakes in real time; it is process-based. Establish a callback protocol: any request involving money, credentials, or sensitive data requires you to hang up and call the person back on their known number. A genuine partner will understand. A fraudster will not be reachable at that number.

What is AI business email compromise (BEC)?

AI business email compromise (BEC) is a fraud technique in which attackers use AI to craft highly personalized phishing emails that impersonate a known person — a client, a partner, a vendor — and request urgent action such as a wire transfer or credential submission. Unlike old-style phishing, AI-generated BEC emails reference real firm-specific details and contain no obvious grammar errors. Brightside AI documented a 2026 campaign targeting 800 accounting firms that achieved a 27% click rate using exactly this method.


The One Thing to Do This Week

Before you read another article on AI, before you schedule a security audit, before you do anything else: this week, send an email to everyone on your team who processes wire transfers or handles banking change requests.

The email says one thing: starting immediately, any request to transfer funds or change banking information — regardless of who it appears to come from, regardless of how urgent it seems — requires a phone callback to that person on their known number before any action is taken.

That's it. One email. One policy. No technology required.

Every other step in this guide builds on top of that one. But that one alone, implemented today, puts in place the safeguard that the 800 accounting firms in Brightside AI's documented campaign did not have.
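If it helps to see how little judgment the policy actually requires, the callback rule reduces to a checklist that fits in a few lines. The field names below are illustrative, not from any real payments system; the logic is the whole policy:

```python
# The callback rule from this guide, expressed as a checklist.
# Field names are hypothetical examples for illustration only.

def may_execute_transfer(request: dict) -> bool:
    """A transfer may proceed only after a callback on a number that
    was already on file BEFORE the request arrived."""
    return (
        request.get("callback_completed") is True
        and request.get("callback_number_source") == "pre-existing contact"
    )

urgent_request = {
    "from": "managing partner (per caller ID)",
    "callback_completed": False,   # no callback yet
    "callback_number_source": None,
}
print(may_execute_transfer(urgent_request))  # False: do not send the wire

verified_request = {
    "from": "managing partner (callback done)",
    "callback_completed": True,
    "callback_number_source": "pre-existing contact",
}
print(may_execute_transfer(verified_request))  # True: cleared to proceed
```

Note what is absent: nothing in the checklist asks how convincing the voice sounded or how urgent the request was. That's the point.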


Sources

  • Brightside AI: 2026 data on AI-enabled fraud and accounting firm targeting campaign
  • Journal of Accountancy: "AI risks CPAs should know" (February 2026)
  • Journal of Accountancy: "Elder fraud rises as scammers use AI" (April 2026)
  • CPA Practice Advisor: "AI Social Media Scams Are Coming for Your Accounting Firm" (March 25, 2026)
  • Microsoft Research: MFA effectiveness data (99.2% account compromise block rate)
  • SecurityWeek: Cyber Insights 2026 — Social Engineering

For related guidance on protecting your firm's data and clients, see AI data security for professional services firms, building an AI policy for your firm, and how to get your team using AI safely.

