Washington HB 1170: The AI Disclosure Law That Will Change What You Send to Clients
Published: March 14, 2026 | By: The Crossing Report | 7 min read
Summary
Washington state's HB 1170, passed by the legislature on March 12, 2026, is an AI transparency law aimed at large generative AI providers — not at the small law firm, accounting practice, or staffing agency using their tools. That makes it easy for professional services firm owners to dismiss. Don't. When the ChatGPTs and Copilots of the world comply with HB 1170, the AI-generated content your firm sends to clients will carry embedded disclosure markers. Clients who want to know whether you used AI to produce a deliverable will have a free, legally-mandated tool to find out. The question is whether you told them first — or they found out on their own.
The Law That Just Changed the Transparency Math
Washington HB 1170 cleared the legislature on March 12, 2026, and now sits on Governor Bob Ferguson's desk. He has 20 days to act, and his signature is expected.
The law targets what the bill calls "covered providers" — large generative AI companies with annual revenues over $500 million that offer publicly accessible AI systems to consumers. That's OpenAI (ChatGPT), Anthropic (Claude), Microsoft (Copilot), Google (Gemini). Not your firm.
Covered providers under HB 1170 must do three things:
Detection tool: Offer a free, publicly accessible tool that allows anyone to check whether a piece of content was generated or materially altered by their AI system.
Manifest disclosure: Give users the option to add a visible, difficult-to-remove label identifying content as AI-generated — one that's hard to strip out before sharing.
Latent disclosure: Automatically embed metadata in AI-generated content — including the provider's name and AI system used — that isn't visible to the naked eye but is detectable by the compliant detection tool.
The bill's sponsor, Rep. Clyde Shavers, developed it in partnership with the Transparency Coalition (TCAI), which has been working this issue in multiple state legislatures. Washington is not a one-off.
The Part That Affects Your Firm
Here's where professional services firm owners need to slow down and read carefully.
HB 1170 does not create a compliance checklist for your firm. No registration, no audit requirement, no safe harbor to claim. The law puts obligations on the AI providers, not on you.
But consider what happens when those providers comply:
You use Claude to draft a client research memo. Under HB 1170, Anthropic must embed a latent disclosure in that output. Your client receives the memo, runs it through the free detection tool, and sees: AI-generated content. Provider: Anthropic. System: Claude.
You use Microsoft Copilot to prepare a business proposal. Copilot embeds the provenance data. Your client checks. They see it.
Your staffing firm uses an AI tool to generate a candidate summary for a client. The summary has embedded AI provenance. Your client — an HR director at a 200-person company — runs the detection tool out of curiosity.
In each scenario, the law didn't create your disclosure obligation. But it created the technical mechanism for your clients to discover what you didn't disclose.
That's a different kind of problem — and it's coming to every professional services firm that uses covered AI tools in client-facing work.
Three Sectors, Three Scenarios
Law Firms
The Washington State Bar Association already issued Advisory Opinion 2025-05 on AI use in legal practice. The WSBA's guidance points in the same direction as HB 1170: clients have a right to know when AI is used in their matters.
HB 1170 doesn't change that ethics analysis, but it changes the practical risk. A client who suspects their matter was handled with heavy AI assistance — and who has a free tool to check AI provenance in any document you sent them — now has a mechanism to confirm or deny that suspicion.
The riskiest scenario isn't the partner who uses AI transparently and discloses. It's the associate who used AI to draft a brief, the partner who didn't catch it and didn't disclose, and the client who found out through a detection tool scan two weeks after the matter closed.
What to do: Update engagement letters to disclose AI-assisted work product and clarify your review process. Not because HB 1170 requires you to — because your clients now have a tool to find what you didn't say.
Staffing Firms
Staffing agencies already face a layered AI disclosure environment. FCRA requires disclosure when consumer reports are used in adverse employment actions. New York City's Local Law 144 mandates bias audits and candidate notice when AI tools are used in hiring decisions. Illinois law creates similar requirements.
HB 1170 adds a new layer: AI-generated candidate summaries, screening reports, and placement recommendations will carry embedded provenance data. An employer-client who uses the detection tool on a candidate package your firm sent them can see whether that package was AI-generated.
For most staffing firms, this isn't a legal crisis — it's a trust conversation you haven't had yet. Clients who assumed your recruiters wrote those summaries may feel differently upon learning that AI produced them. Getting ahead of that conversation — "here's how we use AI in our process, here's how we ensure quality" — is far easier before the client discovers it on their own.
What to do: Audit which AI tools are used in candidate screening and deliverable preparation. Add a one-paragraph AI use disclosure to client service agreements. Brief your recruiters on how to talk about AI use with clients.
Accounting and Consulting Firms
AI-generated research summaries, financial model commentary, industry analysis memos, proposal documents — these are the deliverables where AI assistance is most prevalent in accounting and consulting firms right now.
HB 1170's latent disclosure requirement creates what amounts to an audit trail for AI use in your client work. You may not be required to maintain it. But your clients will be able to access it.
For firms serving Washington state clients — which, given the broad reach of online services, is most firms — this is a real near-term issue. A client CFO reviewing a memo your firm produced, who runs a detection tool on it as part of a broader AI audit their company is conducting, can surface your AI use in that document.
What to do: Develop a firm-wide policy on AI disclosure in deliverables. It doesn't need to be complicated: a standard line in your deliverables cover page stating that AI-assisted tools are used in research and drafting, subject to professional review, is sufficient. Consult your E&O insurer on recommended language.
The Disclosure Moment Is Coming
HB 1170 is one law in one state. But the pattern it represents is accelerating.
The Texas Responsible Artificial Intelligence Governance Act (TRAIGA) took effect January 1, 2026. Colorado's AI Act takes effect June 30, 2026. New York SB 7263 is advancing. New Hampshire SB 640 passed committee in March. Oregon HB 4154 creates a private right of action for AI chatbot misrepresentation.
These laws are not coordinated in the sense that they follow a single template. But they are coordinated in the sense that they all point in the same direction: AI use in client-facing professional services must be disclosed, documented, and supervised by a licensed professional.
HB 1170 is the first law that creates a technical mechanism — the mandatory free detection tool — that puts discovery power in the hands of clients and the public, not just regulators. That is a structural shift. Other states will replicate it.
The firms that will navigate this well are not the ones waiting for each new law to create a specific compliance obligation before acting. They're the firms that are building a minimal AI disclosure practice now:
- Know which AI tools your firm uses in client-facing work
- Have a written policy on AI disclosure to clients
- Update engagement agreements and deliverable cover pages
- Train staff on how to communicate about AI use
None of this requires a law degree specializing in AI regulation. It requires deciding to do it before a client or a regulator makes it uncomfortable.
The Broader Picture: Three-Law Cluster for Professional Services
Washington HB 1170 completes what you might call the first three-law cluster in state AI regulation for professional services:
- Texas TRAIGA (effective now): Know what AI tools you use, document that you've reviewed them, claim the NIST safe harbor
- New York SB 7263 (advancing): AI can't provide "substantive" professional advice without licensed human involvement
- Washington HB 1170 (pending governor signature): AI-generated content will carry detectable disclosure markers
Individually, each law requires different things from different actors. Together, they point to the same practice: document your AI use, disclose it to clients, and ensure professional review of AI output before it reaches a client.
A firm that builds that practice once — an AI tool inventory, a disclosure policy, an engagement letter clause, a designated person responsible — is covered across all three laws and every subsequent one that follows the same direction.
What to Do This Week
Inventory your AI-assisted deliverables. What does your firm send to clients that was drafted with AI assistance? Identify the three most common deliverable types. That's your disclosure priority list.
Add one sentence to your next engagement letter. Something like: "We use AI-assisted tools in research, drafting, and analysis. All AI output is reviewed and validated by a licensed [attorney/accountant/consultant] before delivery to you." Run it by your E&O carrier if you're uncertain about wording.
Set up a provenance test. Before HB 1170 detection tools are formally available, you can do a manual check: look at any document your firm sends and ask, "If a client knew AI generated this, would they be surprised?" If the answer is yes, you have a disclosure gap to close.
Washington HB 1170 may never directly create a compliance obligation for your firm. But it has already created the mechanism by which your clients can discover your AI use whether you disclosed it or not. The firms that are fine with that transparency are the ones who've already had the conversation.
Related Reading
- Texas TRAIGA: AI Compliance Checklist for Professional Services Firms
- New York's AI Bill Has a Fatal Flaw — And That's Bad News for Small Law Firms
- Oregon HB 4154: Your Clients Can Now Sue You Over Your AI Chatbot
- The UK AI Copyright Reports Drop March 18 — What US Firms Need to Know
- AI Regulation & Compliance for Professional Services Firms — The Crossing Report's hub for US and global AI regulatory coverage
The Crossing Report covers the transition to AI for professional services firm owners — accounting, law, consulting, staffing, and marketing agencies. Subscribe here for weekly insights on what's changing and exactly what to do next.
Frequently Asked Questions
What is Washington HB 1170 and does it apply to professional services firms directly?
Washington HB 1170 is an AI transparency and disclosure law passed by the Washington legislature in March 2026 and sent to Governor Bob Ferguson for signature. The law directly regulates large generative AI providers — companies with over $500 million in annual revenue that train AI models and offer publicly accessible AI systems. It does not directly impose obligations on professional services firms (law firms, accounting firms, consulting firms, staffing agencies). However, it changes what AI tools produce: those tools will soon be required to embed disclosure markers in AI-generated content, which affects every firm using them to create client deliverables.
What does Washington HB 1170 require AI providers to do?
HB 1170 requires covered AI providers (those with $500M+ revenue offering public generative AI) to: (1) Provide a free detection tool that allows anyone to check whether content was AI-generated; (2) Give users the option to add a 'manifest disclosure' — a visible, difficult-to-remove label identifying content as AI-generated; (3) Embed a 'latent disclosure' in AI-generated content — metadata including the provider's name and AI system used. These requirements affect the tools your firm uses, not your firm directly.
How does Washington HB 1170 affect AI-generated client deliverables?
When HB 1170's requirements take effect, AI tools like ChatGPT, Claude, and Microsoft Copilot will embed latent disclosures — metadata identifying AI provenance — into content they generate. Clients who run HB 1170-compliant detection tools on your deliverables can see whether a document was AI-generated. A research memo, a proposal, a staffing report, a due diligence summary — if it was drafted with a covered AI tool, that fact becomes technically detectable. If your firm hasn't disclosed AI use to clients, that detection gap becomes a trust problem.
What should staffing firms do about AI transparency in hiring given Washington HB 1170?
Staffing firms face a two-layer AI disclosure environment. Existing laws — FCRA, New York City Local Law 144, and Illinois's AI Video Interview Act — already require disclosure when AI tools are used in hiring decisions affecting candidates. HB 1170 adds another layer: AI-generated candidate assessments, screening summaries, and placement proposals will carry embedded provenance data. Staffing firms operating in Washington should (1) audit which AI tools are used in candidate screening and placement recommendations, (2) update client engagement agreements to disclose AI-assisted processes, and (3) ensure any AI screening tools used in hiring decisions have undergone bias audits as required under applicable law.
What is the difference between a manifest disclosure and a latent disclosure under HB 1170?
A manifest disclosure is visible: it's a label or notice that a consumer can see, identifying content as AI-generated. It's designed to be difficult to remove so users can't strip it out before sharing. A latent disclosure is embedded: it's metadata or a watermark within the file that isn't visible to the naked eye but can be detected by a compliant detection tool. Your firm's clients can use the free detection tool HB 1170 requires providers to offer and surface the latent disclosure in anything you send them.
Which AI state laws create a 'three-law cluster' for professional services firms in 2026?
Three state AI laws passed between January and March 2026 create overlapping compliance signals for professional services firms: Texas TRAIGA (effective January 1, 2026) — prohibits specific harmful AI uses, requires firms to document their AI tool review, offers NIST AI RMF safe harbor; New York SB 7263 (advancing) — targets AI 'substantive advice' in client-facing contexts; Washington HB 1170 (passed March 2026, pending governor signature) — mandates AI content disclosure by providers, with downstream effects on professional service deliverables. Firms that build one AI governance practice — inventory, vendor review, disclosure policy, documentation — will be positioned for all three.