The States Just Drafted the AI Regulation Template — And It's Coming for Law and Accounting Next
Published March 12, 2026 · By The Crossing Report
In the week of March 9, 2026, three states advanced bills targeting AI that claims to perform licensed mental health services. Tennessee HB 1470 passed its House Health Committee 20-0. New Hampshire HB 1406 and Maine LD 2082 each received committee recommendations of "ought to pass as amended." All three bills share an architecture: prohibit AI from representing that it can act as a licensed mental health professional without appropriate human oversight.
If you run a law firm, accounting practice, or consulting firm, none of those bills apply to you directly.
But you should pay close attention, because those bills are the test case for a regulatory template that is already being pointed at your profession.
The Template States Are Using
State AI legislation has converged on a framework that works politically across party lines.
The structure: identify a licensed profession where the consumer harm from unlicensed AI is vivid and sympathetic, then pass a bill requiring that AI performing that work be supervised by a licensed professional. Mental health AI is the test case because the harm — vulnerable patients receiving AI-generated therapy without knowing it — is easy to explain to a legislator and a jury.
But the same template is already being applied beyond mental health.
New Hampshire SB 640 — which passed its Senate committee the same week — targets AI performing "licensed professional services" broadly. The bill prohibits AI systems from providing services that require a professional license without a licensed professional's "meaningful oversight." That covers law, accounting, tax preparation, financial advice, and management consulting.
Oregon HB 4154 requires any consumer-facing AI interaction — including chatbot intake systems — to disclose that the client is talking to AI, not a human. It adds a private right of action with statutory damages. That covers every professional services firm with an AI-powered website chat, intake form, or automated email response.
The pattern: mental health → professional services → every licensed profession.
The question for a law firm or accounting practice owner is not whether this regulatory wave will reach your state. It is whether you are building the practices now that will satisfy regulators when it does.
What the Mental Health Bills Signal for Your Firm
Three specific signals worth tracking:
The "represents" standard. The mental health bills prohibit AI that represents or advertises that it can perform licensed work. This is a broader standard than "performs licensed work." It means your AI-powered client intake tool that uses language like "our AI legal assistant can help you with your situation" could be read as representing that it performs licensed legal services. Audit how your client-facing AI tools are described — on your website, in your intake forms, and in any marketing language that describes what your AI can do.
The bipartisan consensus. Tennessee HB 1470 passed its committee 20-0. Not 18-2. Not 15-5. 20-0. This is not a partisan issue. Legislators on both sides of the aisle are comfortable restricting AI that performs licensed professional work without oversight. Bills in this category are not going to get stuck in committee in most states.
The "meaningful oversight" standard is undefined — by design. New Hampshire SB 640's "meaningful oversight" standard is not defined in the bill text. This is a legislative technique that gives regulators and courts maximum flexibility to set the standard through enforcement. For professional services firms, that means the safest compliance posture is not to ask "what's the minimum we can do?" but to ask "what would a regulator consider meaningful?" A licensed professional reviewing AI output before it reaches a client, with that review documented, is the answer in every scenario.
What the Regulatory Calendar Looks Like
The compliance pressure is compressing.
Enacted or awaiting signature:
- Oregon HB 4154 (chatbot disclosure, private right of action): passed legislature, awaiting Governor's signature as of March 2026
- Washington HB 1170 (AI transparency): signed March 13, 2026
- Texas TRAIGA: effective January 1, 2026
Passed committee, floor vote pending:
- New Hampshire SB 640 (professional services AI oversight)
- New Hampshire HB 1406 (mental health AI)
- Maine LD 2082 (mental health AI)
Coming deadlines and active bills:
- Colorado CPAIA (Consumer Protections for Artificial Intelligence Act): June 30, 2026
- Illinois HB 3773 (professional AI oversight): advancing
If you have clients in any of these states — or if your firm is headquartered there — the regulatory environment is not theoretical. The question is whether your current practice meets the minimum standard these bills are moving toward.
Three Actions to Take Before This Reaches Your State
1. Audit your client-facing AI language. Walk through every place your firm uses AI in a client interaction: website chat, intake forms, automated email responses, AI-generated meeting summaries sent to clients. For each one, ask: does this language describe what the AI does in a way that could be read as "representing" it performs licensed professional work? If yes, update the language to be descriptive ("we use AI to draft preliminary responses, which are reviewed by a licensed [attorney/CPA/consultant] before being sent to you") rather than promotional.
2. Document your oversight process for AI outputs. Before the regulations require it, build the documentation habit. Create a simple log — even a shared spreadsheet — where the person who reviews AI output for a client matter records the date, the matter, what was reviewed, and what changes they made. If you are ever subject to a regulatory inquiry or malpractice claim involving AI, this log is the difference between "we have no idea whether anyone reviewed this" and "our documentation shows attorney Sarah reviewed this analysis on March 10 and made three revisions before it was sent."
3. Add an AI disclosure clause to your engagement letter. Two sentences: "We use AI tools in our work, supervised by licensed professionals. All AI-generated content is reviewed and approved by [name/title] before delivery to clients." This addresses the disclosure requirements in Oregon HB 4154 and similar bills. It also sets client expectations and positions your AI use as a transparency asset rather than something you're hiding.
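The review log in step 2 doesn't need special software. As a minimal sketch, here is one way to keep it as an append-only CSV file; the file name, field names, and example entry are all hypothetical, not drawn from any bill's requirements:

```python
import csv
from datetime import date
from pathlib import Path

LOG_PATH = Path("ai_review_log.csv")  # hypothetical location for the firm's log
FIELDS = ["date", "matter", "reviewer", "item_reviewed", "changes_made"]

def log_review(matter: str, reviewer: str, item: str, changes: str) -> None:
    """Append one review entry; write a header row if the log is new."""
    is_new = not LOG_PATH.exists()
    with LOG_PATH.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if is_new:
            writer.writeheader()
        writer.writerow({
            "date": date.today().isoformat(),
            "matter": matter,
            "reviewer": reviewer,
            "item_reviewed": item,
            "changes_made": changes,
        })

# Hypothetical example entry: a licensed reviewer records what was checked
# and what changed before the AI-drafted analysis went to the client.
log_review("Acme Corp contract review", "Sarah (attorney)",
           "AI-drafted indemnification analysis",
           "Three revisions before sending")
```

The point is the fields, not the format: date, matter, reviewer, what was reviewed, and what changed is the minimum a regulator or malpractice defense would want to see. A shared spreadsheet with the same columns works just as well.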
The mental health AI bills are not about your firm. The professional services AI bills are. And the same legislators who passed those mental health bills 20-0 are already drafting the next version.
The time to build the oversight practices that will satisfy those regulations is now — when it's a competitive differentiator, not a compliance requirement.
Frequently Asked Questions
What are the mental health AI bills and why do they matter for professional services firms?
In March 2026, Tennessee, New Hampshire, and Maine advanced bills prohibiting AI from representing that it can perform the work of a licensed mental health professional. Tennessee HB 1470 passed the House Health Committee 20-0; New Hampshire HB 1406 and Maine LD 2082 each received committee recommendations of 'ought to pass as amended.' The bills aren't primarily important because they cover mental health — they matter because they reveal the legislative template states are using. The pattern: identify a licensed profession, find a consumer harm, target AI that 'represents' or 'advertises' its ability to perform that work without a licensed professional's oversight. That same template is already being applied to law (NH SB 640), accounting, and professional services broadly.
Does my law firm or accounting practice need to worry about mental health AI laws?
Not directly — but the pattern does. State legislators have discovered a bipartisan framework that works: protect consumers from unlicensed AI. The bills that have advanced in 2026 — NH SB 640 (professional services), NH HB 1406 (mental health), TN HB 1470 (mental health), OR HB 4154 (chatbot disclosure) — all use variations of the same architecture. If you practice law, accounting, or consulting in any state where these bills are advancing, the regulatory environment is moving toward requiring that a licensed professional stand behind every AI output delivered to a client. The time to build that practice is now, before the requirement is codified.
What does 'meaningful oversight' mean in the context of these AI bills?
'Meaningful oversight' is the phrase used in NH SB 640, which prohibits AI from providing licensed professional services without a licensed professional's meaningful oversight. The phrase is deliberately not defined — which means its meaning will be set by enforcement actions and litigation, not by the statute. The conservative interpretation (and the safest compliance posture) is: a licensed professional must review AI output before it reaches a client, must be able to explain how they evaluated it, and must document that review. Firms that treat AI output as a first draft requiring human judgment before delivery are almost certainly in the safe zone. Firms that pass AI output directly to clients with no review are not.
Which states are moving fastest on AI regulation for professional services?
As of March 2026, the most active states are New Hampshire (SB 640, professional services AI oversight — passed committee), Oregon (HB 4154, chatbot private right of action — passed legislature), Washington (HB 1170, AI transparency law — passed March 13, 2026), and Illinois (HB 3773, professional AI oversight — advancing). Colorado's Consumer Protections for Artificial Intelligence Act (CPAIA) takes effect June 30, 2026. California's AI legislation continues to advance in multiple tracks. For a professional services firm with multi-state clients: the safest approach is to comply with the strictest current requirement in any state where you have clients, because the direction of travel is clear even if the specific rules differ.
What's the minimum compliance posture for a professional services firm in 2026?
Three requirements that satisfy the emerging regulatory template across states: (1) A licensed professional reviews all AI-generated work product before it reaches a client. No exceptions. (2) Your engagement letter or service agreement discloses that AI tools are used in your work, and describes the oversight process. (3) You can produce a log showing that a licensed professional reviewed and approved AI output for any client matter — in the event of a malpractice claim, licensing board complaint, or regulatory inquiry. These three requirements don't depend on any specific state's law being in force. They reflect the minimum that regulators across every state are signaling they expect.