Agent Washing: What Professional Services Firms Need to Know Before They Call Their AI 'Agentic'

April 29, 2026 · 11 min read · By The Crossing Report

Picture a consulting firm owner's proposal deck. One slide reads: "Our AI agent automatically analyzes your data and surfaces strategic insights — no human required." The client signs. Six months in, the client's general counsel reviews the engagement and asks for documentation of how the agent actually works. What the firm describes is a workflow where a team member runs a prompt, reviews the output, edits it, and packages it for the client. Every single time.

That's agent washing. And as of April 2026, it's the disclosure liability category that Harvard Law School, Debevoise & Plimpton, and Baker McKenzie are all pointing to as the next wave of AI enforcement.

This isn't about hallucination — AI generating incorrect information. That's a different risk. Agent washing is a marketing and disclosure failure: calling something "agentic" when it isn't, and making productivity or capability claims that your actual system can't back up. It's what happens when "AI-powered" became a selling point and "agentic" became the upgrade, before most firms understood the legal obligations those words create when they appear in a proposal or an engagement letter.

The enforcement timeline is specific. Here's what you need to know.


What Is Agent Washing?

The term emerged formally in two publications within weeks of each other.

On March 25, 2026, Debevoise & Plimpton published an analysis of AI-related disclosure risk in corporate governance contexts. On April 16, 2026, Harvard Law School's Forum on Corporate Governance published a more detailed framework. Both identified the same pattern: firms across industries are marketing AI tools as "agents" or "agentic AI" when the underlying systems are rule-based automations, standard AI assistants with human review at every step, or basic API integrations that trigger predefined responses.

The word "agent" is load-bearing in a way that "AI-assisted" is not. When a firm calls something an agent, it implies a system that can perceive its environment, make decisions, and take actions autonomously — a system that can handle a task from start to finish without continuous human intervention. That is the definition that clients, regulators, and courts are now starting to apply when the word appears in a contract or marketing document.

The problem is that "agentic" became a marketing term before it became a legally examined one. A lot of what got called "AI agents" in 2024 and 2025 was software that ran a prompt, returned an output, and required a human to do something with that output. That's AI-assisted work. It is not an agent in the sense the word now carries.

Agent washing is the gap between what the word claims and what the system delivers.

One important distinction: agent washing is not the same risk as AI hallucination. Hallucination is a technical failure — the AI produces incorrect information. Agent washing is a disclosure failure — the firm overstated what the AI can do independently. A system can generate accurate outputs and still be agent washing if the firm marketed it as autonomous when it required human review at every step. These are separate risks that require separate audits.


Why It Matters Now — the Enforcement Timeline

The pattern Harvard Law and Debevoise describe isn't new. It's the second wave.

The first wave was AI washing: companies claiming in investor communications and marketing materials that they used AI when they used little to none. The SEC brought enforcement actions in 2024 and 2025. Two investment advisers — Delphia and Global Predictions — were sanctioned specifically for false or misleading AI-related claims in their marketing. The enforcement message was clear: "We use AI" as an undifferentiated claim, without documentation of what that actually meant, created securities fraud exposure.

Agent washing is AI washing with a more specific vocabulary. The claim is no longer just "we use AI" — it's "our AI acts as an agent." The exposure mechanism is the same: a material claim about a capability that can't be documented when examined.

Baker McKenzie's April 23, 2026 analysis, covered by Fortune, projects enforcement acceleration specifically targeting agentic AI claims in Q3 and Q4 2026. Three parallel exposure tracks are in play:

  1. Securities disclosure — for any firm that has made agentic AI claims in investor materials, fund documents, or regulatory filings
  2. Client contract warranty claims — for any firm whose engagement letters or proposals include AI capability language that the actual system can't deliver
  3. Professional liability and E&O — for consulting firms, marketing agencies, and law firms whose client deliverables were premised on agentic AI performance that didn't materialize

Exposure tracks two and three are where most professional services firms sit. You may not have investors or securities filings. But if your proposals say your AI "handles" tasks that your team is actually reviewing, you have an exposure that belongs in this category.


How Agent Washing Happens in Professional Services

It rarely starts as a calculated deception. It starts as enthusiasm about a tool that genuinely saves time — and marketing language that doesn't keep pace with how the technology actually works in practice.

The most common patterns in professional services firms:

The "AI agent" in the proposal that's really a workflow with human review at every step. A marketing agency tells a prospective client that its AI agent "manages campaign optimization automatically." What actually happens: a team member runs an AI analysis of campaign data, reviews the recommendations, decides which to implement, and makes the changes. The AI saved four hours. It did not manage the campaign autonomously.

The document review tool marketed as "agentic AI" without qualification. A firm tells a client that its AI agent reviews contracts for risk exposure. The AI flags issues and drafts summaries. A team member reviews every flag and decides what's material. The AI is genuinely useful. It is also not autonomous — and "agentic" in an engagement letter without a description of the human review step creates a warranty the firm may not be able to meet.

AI productivity projections that assume autonomous operation the system can't deliver. A pitch deck includes a slide showing "40% reduction in delivery time" from AI agents. The projection was based on a vendor case study that assumed the AI operated without human review. The firm's actual workflow requires team review of AI output before delivery. The number in the deck is aspirational. In a proposal, it may be a warranty.

Claiming the AI "handles" a task when a human is in the loop at every decision point. The AI is doing meaningful work — faster, more consistent, more scalable than manual processing. But if you've told a client it handles the task, and the client later discovers a human reviews every output, you've created a credibility problem at minimum and a warranty claim at the top end.


The Harvard/Debevoise Framework — 3 Questions Before You Use the Word "Agentic"

Both the Harvard Law Forum and Debevoise analyses point to the same practical test. Before using the word "agentic" in any client-facing or investor-facing context, answer three questions:

1. What does the system actually do autonomously, without human review?

Not what it could theoretically do. Not what the vendor demos. What your system, in your specific deployment, actually does from start to finish without a human deciding the output is acceptable before it moves forward.

2. Can you demonstrate that the claimed capability is real in your specific deployment?

Vendor benchmarks are not your documentation. A demo environment is not your production environment. If you can't show the capability from your own operational data, you cannot make the claim in a client proposal without creating exposure.

3. Are the productivity or accuracy claims in your materials verifiable from your own data?

If a proposal says AI cuts processing time by 35%, that figure needs to come from your actual workflow — documented, with a methodology you could explain to a client's counsel. A vendor's published case study from a different firm in a different operational context is not your documentation.

If you cannot answer all three questions for a specific claim in a specific document, the word "agentic" in that document creates liability for a capability you haven't earned the right to claim.


What Firm Owners Should Audit Right Now

The audit is not complicated. It covers four places where agent washing language most commonly appears:

Your website and marketing copy. Anything that describes AI-assisted services as autonomous, self-managing, or agentic. The question isn't whether the language sounds impressive — it's whether it accurately describes what your system does without human intervention.

Client proposals and pitch decks. Any slide or section that includes an AI productivity projection, a claim that AI "handles" or "manages" a task, or the words "autonomous," "agentic," or "AI agent" in describing your service delivery. These are the first places opposing counsel will look if a client files a warranty claim.

Engagement letters and service agreements. If your engagement letter includes any representation about AI capabilities — accuracy rates, processing speed, autonomy levels — those representations may function as warranties. Review them against what your actual system delivers.

Client presentations that included AI capability projections. If you told a client in a 2024 or 2025 presentation that your AI would achieve certain outcomes based on agentic operation, review those presentations against actual performance. Warranty claims don't necessarily get filed at contract signing — they get filed when expectations aren't met and the client's counsel starts reviewing what was promised.

For consulting and marketing agency owners who help clients with their own marketing: review any AI capability language in their investor materials or product marketing before you help them renew or republish those documents. If you helped draft agent claims for a client that aren't accurate to their actual system, your firm may share the exposure.


The Line That Separates Defensible Claims from Exposure

The line is not between "uses AI" and "doesn't use AI." It's between claims that describe what actually happens in your workflow and claims that describe aspirational capability.

Defensible:

  • "We use AI tools in [specific workflow], with [named role] review of every output before delivery"
  • "Our AI-assisted process reduces [specific task] from [X hours] to [Y hours] — here is how it works"
  • "We use [named tool] to [specific function]; our team reviews every output and is responsible for the final deliverable"

Exposure:

  • "Our AI agent handles [task] autonomously" — unless you can document exactly what "autonomously" means in your deployment, with human review steps disclosed
  • "AI-powered services that reduce [time/cost] by [X%]" — unless that figure comes from your documented operational data, not a vendor case study
  • "Agentic AI" as a general descriptor of your firm's technology posture, without specifying what the agent actually does and does not do independently

The pattern from AI washing enforcement holds here: the issue is not whether you use AI. The issue is whether the claims you make about your AI are verifiable from your actual system. Enforcement is not targeting firms that use AI and describe its limitations honestly. It's targeting claims that can't be supported when examined.


Stop Trying to Win on the Word "Agentic"

The firms that come out of Q3 and Q4 of this year in the best position are not the ones with the most impressive AI vocabulary in their proposals. They're the ones whose language is accurate — whose proposals describe what actually happens in their workflow, who name the human review steps, and who can show a client documentation of what their AI does and does not do independently.

"AI-assisted, human-verified" is not a weak positioning. It's a defensible one. "Agentic AI that autonomously manages your workflow" is a strong positioning that creates obligations your system may not be able to meet.

As enforcement accelerates in the back half of 2026, firms whose marketing language is precise will be in a categorically different position from firms that marketed aggressively and now have to walk back claims to clients and their counsel.

Precision is not just a liability hedge. For most professional services firms, it's also the more accurate description of how their AI actually works — which means it's the more honest sale. That matters when the client's GC starts reviewing the engagement letter.


One Step to Take This Week

Open your last three client proposals. Find every instance of the words "autonomous," "agentic," "AI agent," "AI handles," or any AI productivity projection with a percentage attached.

For each one: can you document, from your own operational data, that the claim is accurate? Can you show what the system does without human intervention?

If you can: keep the language and add one sentence describing the workflow — what the AI does, and where your team reviews the output. That sentence makes the claim defensible.

If you cannot: revise the language before those proposals become engagement letters. "AI-assisted, with team review of every output" is the phrase that replaces what you can't document. It's accurate. And in Q3 and Q4 2026, accurate is what you want on record.
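The proposal scan described above is easy to automate. Below is a minimal sketch in Python, assuming you have plain-text exports of your proposals; the term list mirrors the words named in this section and is illustrative, not exhaustive — extend it for your own materials.

```python
import re

# High-risk terms named in the audit above (illustrative list, not exhaustive).
RISK_TERMS = [
    r"autonomous(?:ly)?",
    r"agentic",
    r"AI agents?",
    r"AI\s+(?:handles|manages)",
    r"\d+%",  # flags any percentage figure so you can check its documentation
]
PATTERN = re.compile("|".join(RISK_TERMS), re.IGNORECASE)

def audit(text: str) -> list[tuple[int, str, str]]:
    """Return (line number, matched term, line text) for every flagged term."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for match in PATTERN.finditer(line):
            hits.append((lineno, match.group(0), line.strip()))
    return hits

# Hypothetical proposal text for demonstration.
proposal = """Our AI agent handles campaign optimization autonomously.
Expect a 40% reduction in delivery time.
Our team reviews every deliverable before it ships."""

for lineno, term, line in audit(proposal):
    print(f"line {lineno}: flagged '{term}' in: {line}")
```

Every hit is a claim to check against the three-question test: either you can document it from your own operational data, or the language gets revised before the proposal becomes an engagement letter.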


The Crossing Report covers AI risk and strategy for professional services firm owners every week. For more on AI disclosure requirements and compliance frameworks, see our coverage on AI regulation and compliance for professional services firms and AI disclosure policy for professional services. For the parallel risk when AI output goes wrong in practice — including what happened when AI hallucinations reached OpenAI's own law firm — read the Sullivan & Cromwell case analysis. Subscribe to The Crossing Report for weekly intelligence on the AI transformation reshaping professional services.

Frequently Asked Questions

What is agent washing?

Agent washing is the practice of overstating the autonomy or capability of AI tools by calling them 'agents' when they function as rule-based automation, simple API calls, or standard AI assistants. The term was identified by Harvard Law School's Forum on Corporate Governance (April 16, 2026) and Debevoise & Plimpton (March 25, 2026) as the next category of AI-related disclosure liability following the AI washing enforcement wave of 2024-2025.

Does agent washing apply to small professional services firms, or only public companies?

Agent washing exposure applies to any firm making material claims about AI capabilities: in client proposals, engagement letters, website marketing, or client presentations. The securities enforcement track is primarily relevant to public companies and investment funds. But consulting firms, marketing agencies, and law firms face exposure through client warranty claims and professional liability if AI capabilities are materially overstated in service agreements.

What AI marketing language is considered agent washing?

High-risk language includes: claiming that AI 'autonomously handles' tasks that are actually reviewed by humans; including AI productivity projections without disclosing that estimates assume autonomous operation the system cannot deliver; using 'agentic' to describe rule-based automation or a standard AI assistant; tying revenue or efficiency claims to AI capability projections without documented evidence from your actual deployment.

What should professional services firms say instead of 'agentic AI'?

Accurate alternatives include: 'AI-assisted [task], with [role] review of every output'; 'We use [named tool] in [specific workflow] — [specific what it does] is automated, [specific what it does not do] is reviewed by our team'; or 'AI-powered, human-verified.' The key is describing the actual workflow, not the aspirational capability.

When is agent washing enforcement expected to accelerate?

Baker McKenzie's April 23, 2026 analysis (Fortune) projects enforcement acceleration in Q3-Q4 2026, targeting inflated AI agent claims in securities filings, investor communications, and client-facing materials. The professional services firms most at risk are consulting firms and marketing agencies that pitched agentic AI capabilities during 2024-2026 exceeding what their actual systems deliver.
