Grammarly Got Sued for Fake Expert Reviews — The AI Impersonation Risk Every Professional Services Firm Needs to Audit Right Now

Published: March 14, 2026 | By: The Crossing Report | 5 min read


Summary

Grammarly disabled its "Expert Review" feature this week after a class action lawsuit was filed by journalist Julia Angwin, who alleged that Grammarly attributed AI-generated feedback to her name — and the names of other journalists and authors including Stephen King — without their knowledge or consent. The lawsuit (No. 26 Civ. 02005-JGK, SDNY) is still active. For professional services firm owners: this isn't just a Grammarly story. The liability pattern at the center of this case — AI outputs attributed to named real professionals — is one that law firms, accounting firms, and consultancies need to audit in their own client-facing AI use right now.


What Grammarly Did

In August 2025, Grammarly launched an "Expert Review" add-on: for $12/month, users received AI-generated writing feedback attributed to real, named journalists and published authors. The feature presented those individuals as if they had personally reviewed the user's work. They hadn't — and they didn't know Grammarly was using their names.

When the class action was filed in March 2026, Grammarly disabled the feature and issued an apology. The apology did not fully acknowledge the lawsuit, a detail reported by Futurism.

The legal theory: misappropriation of name and likeness, false advertising, and consumer protection violations. The factual allegation: Grammarly created the false impression that named human experts were reviewing users' work, when AI was generating the output.


Why This Matters for Professional Services Firms

The Grammarly case is a high-profile version of a risk that exists in scaled-down form in professional services firms across the country.

Consider four scenarios:

Law firms: A firm uses Claude or ChatGPT to draft a client memo and sends it under a partner's name without disclosing the AI's role. If the memo contains a material error, the client relied on what they understood to be partner-level expert review that never happened.

Accounting firms: A CPA firm uses AI to generate a tax summary for a client, and the summary is formatted as if it reflects the accountant's professional analysis. If the underlying AI analysis was wrong, the firm has a professional liability problem — and potentially a disclosure violation.

Consulting firms: A consultant sends AI-generated strategic recommendations without disclosure, framed as the firm's expert assessment. If the advice proves harmful, the client's claim isn't just about the bad advice — it's about what the client was led to believe about how the advice was produced.

Marketing agencies: An agency uses AI to produce a creative brief or media analysis, presents it as the agency's proprietary research, and the client pays a premium for what they understand to be human creative judgment. The AI's role is invisible.

In each case, the core problem is the same: the AI output creates an impression of human expert judgment that didn't actually occur. That's the Grammarly fact pattern, at smaller scale.


Three Questions to Audit Your Firm Right Now

1. Does any tool your firm uses attribute AI output to a named person who didn't produce it?

This is the clearest version of the Grammarly risk. If your firm has deployed any AI assistant, chatbot, or communication tool that responds under a named individual's identity — "Hi, I'm [Partner Name]'s assistant" or a firm-branded AI that implies a specific human is behind the output — you have direct exposure.

Check: any client-facing chatbot, automated email responder, AI-generated report template that shows a named author, or "AI assistant" presented under a partner or employee's name.
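A quick way to surface the obvious cases, assuming your client-facing templates and autoresponder copy are stored somewhere you can search as plain text, is a short script along these lines. This is a minimal sketch, not a standard: the directory, staff roster, and phrase list below are placeholders you would replace with your own.

    # Minimal sketch: flag templates that pair a staff member's name with attribution language.
    # TEMPLATE_DIR, STAFF_NAMES, and ATTRIBUTION_PHRASES are placeholders -- substitute your own.
    from pathlib import Path

    TEMPLATE_DIR = Path("client_templates")
    STAFF_NAMES = ["Jane Partner", "John Associate"]
    ATTRIBUTION_PHRASES = ["reviewed by", "prepared by", "i'm", "i am", "your assistant"]

    for path in TEMPLATE_DIR.rglob("*"):
        if not path.is_file():
            continue
        text = path.read_text(errors="ignore").lower()
        for name in STAFF_NAMES:
            if name.lower() in text and any(p in text for p in ATTRIBUTION_PHRASES):
                print(f"{path}: pairs '{name}' with attribution language -- review manually")

Anything this flags still needs human judgment; the point is to produce a starting list, not a verdict.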

2. Do your AI-generated client deliverables create the impression of human expert review they're not receiving?

This is subtler. A client receives a 10-page AI-drafted analysis. It looks like a thorough document. Nothing in the document discloses it was AI-drafted. The client assumes, because that has always been the case, that the document represents the professional judgment of the person who sent it.

If the AI was the primary author and no substantive expert review occurred, there's a gap between client expectation and actual service delivery. That gap is where professional liability and consumer protection claims grow.

The fix: establish a clear internal policy that distinguishes AI-drafted work (AI as primary author, requiring explicit disclosure) from AI-assisted work (an AI-supported draft with substantive human review).
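If it helps to make the distinction concrete, here is a minimal sketch of how a firm might tag deliverables internally. The category names and the disclosure rule are illustrative assumptions, not legal or ethics guidance.

    # Minimal sketch of an internal tagging scheme for the AI-drafted vs. AI-assisted distinction.
    # The categories and the disclosure rule below are illustrative, not a compliance standard.
    from dataclasses import dataclass
    from enum import Enum
    from typing import Optional

    class AIRole(Enum):
        NONE = "no AI involvement"
        AI_ASSISTED = "AI-supported draft with substantive human review"
        AI_DRAFTED = "AI was the primary author"

    @dataclass
    class Deliverable:
        client: str
        description: str
        ai_role: AIRole
        reviewed_by: Optional[str] = None  # licensed professional who signed off, if any

        def requires_disclosure(self) -> bool:
            # Example policy: always disclose when AI was the primary author,
            # and treat AI-assisted work with no named reviewer as disclosable too.
            if self.ai_role is AIRole.AI_DRAFTED:
                return True
            return self.ai_role is AIRole.AI_ASSISTED and self.reviewed_by is None

    memo = Deliverable("Acme Co.", "Year-end tax memo", AIRole.AI_DRAFTED)
    print(memo.requires_disclosure())  # True

The value is less in the code than in forcing every outgoing deliverable into one of the three buckets.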

3. Have you updated your client communications and engagement letters for AI use disclosure?

ABA Formal Opinion 512 (2024) establishes that attorneys must disclose AI use when it materially affects how legal services are delivered. Most state bar ethics opinions issued in 2025 and 2026 follow similar logic. AICPA and state CPA boards have parallel professional responsibility frameworks.

The easiest audit: pull your standard engagement letter. Does it mention AI? If a client asked you directly how their work is being done, would your answer include your AI tools? If the answer to the first is no and the answer to the second is yes, your engagement letter needs updating.
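For a fast first pass, assuming your standard letter is saved somewhere as plain text, a few lines like this sketch will tell you whether AI is mentioned at all. The file path and term list are placeholders.

    # Minimal sketch: check whether the standard engagement letter mentions AI at all.
    # The file path and the term list are placeholders -- adjust to your own documents.
    from pathlib import Path

    letter = Path("engagement_letter.txt").read_text(errors="ignore").lower()
    ai_terms = ["artificial intelligence", " ai ", "generative ai",
                "machine learning", "large language model"]

    found = [t.strip() for t in ai_terms if t in letter]
    if found:
        print("AI language found:", ", ".join(found))
    else:
        print("No AI language found -- the letter likely needs an AI use clause.")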


The Legislative Backdrop

The Grammarly lawsuit lands in a specific regulatory moment. Three active legislative and regulatory efforts directly address the AI impersonation pattern:

  • New York A 6545/S 7263 — would prohibit AI systems from impersonating licensed professionals, including attorneys, without clear disclosure. Currently advancing in the 2026 legislative session.
  • Illinois SB 3601 (Professional AI Oversight Act) — mandatory consumer disclosure when AI is used in professional services delivery. Under consideration in Illinois.
  • FTC Policy Statement (March 2026) — the FTC issued guidance this month on when AI applications violate the FTC Act's prohibition on unfair or deceptive acts. AI-generated professional communications presented as human-crafted may constitute deception under existing consumer protection law.

The legislative and regulatory direction is consistent: regulators and courts are treating undisclosed AI impersonation of professionals as a consumer protection violation, not just a marketing ethics question.


What to Do This Week

If you're a law firm: Add an AI use clause to your standard engagement letter. The clause should state that your firm uses AI tools in service delivery, that all AI-assisted work is reviewed by licensed attorneys before delivery, and that you will notify clients of material AI use on their matters. If you're not yet ready to specify which tools, you don't have to — but the disclosure must exist.

If you're an accounting or consulting firm: Review any client-facing AI output from the past 90 days. Identify which deliverables were AI-assisted and whether clients received any disclosure. If not, consider a brief policy update you can include in future engagement letters or client onboarding documents. (A minimal sketch of this 90-day review follows these action items.)

If you're a staffing or marketing agency: Audit any automated client communication tools — proposal generators, client-facing AI assistants, AI-drafted reports. Verify none of them attribute output to named individuals who didn't produce it. If any do, disable that attribution or add explicit disclosure.
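For firms that keep even a simple log of outgoing work, the 90-day review described above can be mostly mechanical. Here is a minimal sketch, assuming a hypothetical CSV log with date, client, ai_assisted, and disclosed columns; your own records will differ.

    # Minimal sketch: flag AI-assisted deliverables from the past 90 days that lack disclosure.
    # Assumes a hypothetical CSV log with columns: date, client, ai_assisted, disclosed.
    import csv
    from datetime import date, timedelta

    cutoff = date.today() - timedelta(days=90)

    with open("deliverables_log.csv", newline="") as f:
        for row in csv.DictReader(f):
            sent = date.fromisoformat(row["date"])
            if sent >= cutoff and row["ai_assisted"].lower() == "yes" and row["disclosed"].lower() != "yes":
                print(f"{row['date']}  {row['client']}: AI-assisted, no disclosure on record")

If no such log exists, that absence is itself the first finding of the audit.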


The Broader Pattern

The Grammarly case will settle or litigate over the next 12-18 months. But the pattern it documents is already showing up in regulatory guidance (FTC, ABA), legislative proposals (New York, Illinois), and professional liability claims. The firms that move now — before this is a headline in their practice area — will have a cleaner record and a clearer client relationship when the regulatory moment arrives.

The question isn't whether AI will face disclosure requirements in professional services. It already does. The question is whether your firm's current practice matches those requirements.


Sources: The AI Insider, March 13, 2026 | PRF Law: Grammarly class action details | Futurism, March 2026


The Crossing Report helps professional services firm owners navigate AI adoption with specific, actionable intelligence. Subscribe here.

Frequently Asked Questions

What happened with the Grammarly Expert Review lawsuit?

Grammarly launched an "Expert Review" feature in August 2025 that attributed AI-generated writing feedback to named real journalists and authors — including Julia Angwin and Stephen King — without their knowledge or consent. A class action was filed in the Southern District of New York (No. 26 Civ. 02005-JGK) by Angwin on behalf of the impersonated experts. Grammarly disabled the feature and apologized in March 2026. The lawsuit remains active.

How does the Grammarly lawsuit affect professional services firms?

The AI impersonation liability pattern — attributing AI output to named real professionals without consent — directly parallels risks facing law firms, accounting firms, and consultancies that use AI in client-facing communications. If your firm's AI assistant appears to represent "your senior partner's review" or a named expert's opinion, you may be creating the same legal and professional responsibility exposure Grammarly now faces.

What AI disclosure is required for professional services firms?

ABA Formal Opinion 512 (2024) requires attorneys to disclose AI use to clients when it materially affects service delivery. Most state bars have parallel guidance. Accounting firms with AICPA members face similar professional responsibility considerations. Any AI-generated client deliverable — whether a summary, recommendation, or communication — that implies human expert review without that review actually occurring creates disclosure risk.

What New York laws address AI impersonation?

New York A 6545/S 7263, advancing in the 2026 legislative session, would make it illegal for AI systems to impersonate licensed professionals — including lawyers — without clear disclosure. Illinois SB 3601 (Professional AI Oversight Act) requires mandatory consumer disclosure when AI is used in professional services contexts. These bills are direct legislative responses to the same impersonation liability pattern the Grammarly case illustrates.

Can a professional services firm use Grammarly for client communications?

Yes — the Grammarly lawsuit is about the "Expert Review" feature, not Grammarly's core writing assistance. However, if your firm uses any AI writing or communication tool in client-facing work, you should verify the tool does not attribute output to named real people without consent, and ensure your client communication policy includes AI disclosure language where required by your professional licensing rules.
