Tennessee Just Defined AI's Legal Status — And It Has a Message for Your Firm

April 30, 2026 · 8 min read · By The Crossing Report



In April 2026, Tennessee enacted SB 837 — a law that explicitly states AI, computer algorithms, software programs, and machines cannot be a "person" under Tennessee state law. It passed alongside SB 1580 (banning AI from posing as mental health professionals) as part of a broader AI governance package moving through state legislatures this year.

The direct legal consequence in Tennessee: AI cannot be sued. It cannot hold a professional license. It cannot sign a contract. It cannot bear legal liability. If AI prepares a tax return and it's wrong, the CPA is responsible. If AI drafts a contract and it misses a clause, the lawyer is responsible. AI is a tool — and the firm owns everything the tool produces.

For professional services firm owners in every state, not just Tennessee, this is the legal frame being established across the country. The message is simple: you own everything the AI does.

This is not a Tennessee compliance issue. It's a national reality check.


What Tennessee SB 837 Actually Says

SB 837 is a definitional law, not a compliance mandate. It doesn't require firms to take any specific action. What it does is remove any ambiguity about AI's legal standing under Tennessee's legal framework.

The law explicitly excludes AI, algorithms, and software programs from the definition of "person" under Tennessee Code Annotated. That exclusion has one practical consequence that matters to every professional services firm: there is no "the AI hallucinated" defense in a professional liability claim.

When your client suffers damages because an AI-assisted deliverable was wrong — a tax filing, a contract, a strategic recommendation, a job placement — there is no entity called "the AI" to absorb the liability. The liability lives where it always has: with the professional who delivered the work.

Courts in every state are applying this frame, whether or not their legislatures have passed explicit statutes. Tennessee just put it in writing.


The Vendor Indemnification Misconception

Here's the belief that keeps a lot of firm owners from taking AI liability seriously: "If AI makes an error, my vendor's terms cover my liability."

It's understandable. AI vendors have legal teams. Their contracts are dense. Somewhere in there, there's indemnification language. That has to mean something, right?

It does — but not what you think.

Vendor indemnification clauses typically protect the vendor's customers from claims related to the vendor's product itself. If the AI tool infringes on a copyright, leaks your data, or malfunctions in a way that damages your firm's operations, vendor indemnification may be relevant.

What it doesn't cover: professional malpractice claims.

When your client sues you for delivering negligent work — work that happened to be AI-assisted — they're not suing the AI company for building a bad tool. They're suing you for delivering work below professional standards. That's a malpractice claim. It runs against your license, your firm, and your professional liability policy — not against the AI vendor's product liability coverage.

Three things no vendor indemnification clause protects you from:

  • Malpractice claims from clients who received deficient work product
  • Bar complaints or license actions from your professional regulatory body
  • Client fee disputes where the client refuses to pay for work they claim was inadequate

Your malpractice insurer doesn't care whether you used ChatGPT, Harvey, or a $5 spreadsheet to produce the work. They care who signed off.


What Your Engagement Letter Needs to Say Now

The single most underused risk management tool in professional services is the engagement letter — and most firms haven't updated theirs to reflect AI use.

Here's why this matters: an engagement letter with explicit AI disclosure changes the informed consent frame. A client who understands that AI tools are used in their work has different expectations than a client who assumes all work is purely manual and human-reviewed. When something goes wrong with the former, you're in a very different position than with the latter.

Your engagement letter should now include four elements:

1. AI use disclosure. State that the firm uses AI tools in the delivery of services. You don't need to name specific tools (they change too frequently; instead, reference a separate AI Use Policy you can update without re-executing letters).

2. Human review protocol. State explicitly that all AI-assisted work is reviewed and validated by a licensed professional before delivery to the client. This is your professional judgment assertion — the most important phrase in a malpractice defense.

3. Data protection commitment. Clients worry about where their financial records, contracts, and business information go when they flow through an AI tool. Your letter should state that client data is processed only through AI tools with appropriate data processing agreements, and that client-identifiable information is never uploaded to public AI tools.

4. Scope of AI use. If you want to limit AI use to specific tasks (research, drafting, summarization) while reserving others for purely human judgment (final review, client recommendations, professional sign-off), state that distinction. It becomes your standard of care on paper.

This is not legal advice — have your attorney review language before you use it. But firms that haven't revisited engagement letters since 2023 are operating with documentation that doesn't reflect how their work is actually done.


The Insurance Question You Need to Ask This Week

Not all malpractice policies automatically cover AI-assisted work product. This is not theoretical risk — it is a gap that exists in many standard professional liability policies right now, because those policies were written before AI-assisted deliverables became standard practice.

Some standard policies have "inherent limitations" clauses or exclusions tied to technology errors that may apply when AI is involved. Others simply haven't been updated to address AI as a category. Until you ask, you don't know where you stand.

The exact question to ask your malpractice insurer:

"Does my current policy cover professional liability claims arising from AI-assisted client deliverables? Please confirm in writing."

Two responses to watch for:

"Yes, our policy covers AI-assisted work like any other work product" — Get this in writing. Document the conversation. This is what you want.

"We haven't addressed that yet" or "we're reviewing our position" — This is not a reassurance. This is a signal that you may have uninsured exposure right now. Push for a written coverage position before you deliver another AI-assisted client file.

If your insurer's response makes you uncertain, talk to a broker who specializes in professional liability. The market for AI-specific coverage riders is developing, and firms that have already had the conversation are in a better position than those who discover the gap after a claim.


What Firms in Every Sector Need to Do Before the Next Client Engagement

Tennessee SB 837 is a law about one state's definition of "person." The actions it points to apply everywhere.

Law firms: Review your engagement letters for AI disclosure language. If AI is used in research, drafting, or document review, that should be stated. Confirm your malpractice insurer's written position on AI-assisted work product. And audit which AI tools have access to client matter information — that data governance gap is where the next disciplinary cases will come from.

Accounting firms: Your documentation of the review protocol for AI-assisted returns is your defense in an IRS dispute or client malpractice claim. If your staff uses AI in tax prep, your engagement letter and your file notes should reflect the human review step explicitly. Ask your E&O insurer the same coverage question.

Consulting firms: Any deliverable that includes AI-generated analysis — market research, financial modeling, strategic recommendations — should carry documentation of the professional's review and explicit judgment. The deliverable is still yours. The AI didn't produce it; you did, with an AI tool.

Staffing firms: AI-assisted candidate assessment creates a different but equally serious liability exposure. If your AI screening tools produce outcomes that adversely affect protected classes, federal anti-discrimination law applies — EEOC enforcement under Title VII and the ADA — and the liability runs against your firm, not your screening vendor. This is an area where vendor indemnification is especially likely to be misunderstood as protective.


The Clear Next Step

Before your next client engagement, do two things this week:

  1. Pull up your standard engagement letter. Does it mention AI at any point? If not, schedule 30 minutes with your attorney to draft a single AI disclosure paragraph. It doesn't need to be elaborate — it needs to exist.

  2. Call your malpractice insurer. Ask the specific question above. Get the answer in writing. Put it in your files.

These two actions take less than two hours combined. They represent the difference between a firm that has documented its AI governance and one that hasn't — which is the difference that matters when a claim is filed.

Tennessee SB 837 didn't create this liability. It just clarified what was already true: the AI isn't responsible for what it produces. You are.


The Crossing Report tracks the AI laws, tools, and decisions that professional services firm owners need to know about before their clients ask — one issue every Monday. Subscribe free →

Frequently Asked Questions

Who is legally responsible when AI makes an error in a professional services engagement?

The licensed professional and their firm are responsible. Tennessee SB 837 explicitly states AI cannot be a "person" under Tennessee law — meaning AI cannot bear legal liability. In every state, malpractice and professional liability claims run against the professional who signed off on the work, not the AI vendor who built the tool.

Does my AI vendor's indemnification clause protect me from malpractice claims?

No — vendor indemnification typically covers claims arising from the vendor's product itself (copyright infringement, data leaks, malfunctions), not professional malpractice claims against your firm. A client who suffers damages from your AI-assisted work product has a malpractice claim against you, not a product liability claim against your AI vendor.

Do I need to disclose AI use to my clients in my engagement letter?

Increasingly yes. Bar associations in multiple states (including those following ABA Formal Opinion 512) and CPA professional standards are moving toward requiring or strongly encouraging disclosure of AI use in client engagements. Beyond professional standards, disclosure in engagement letters shifts the informed consent frame — clients who understand AI is used in their work have different expectations than those who assume entirely human work product.

Does my malpractice insurance cover AI-assisted work?

It depends on your policy and your insurer. Standard professional liability policies do not universally address AI-assisted work product. Firms using AI in client deliverables should explicitly ask their malpractice insurer: "Does my current policy cover liability claims arising from AI-assisted work?" If the insurer has not addressed this, request a written coverage confirmation before scaling AI use.

Is Tennessee SB 837 a compliance requirement for my firm?

No — Tennessee SB 837 is not a mandate that requires firms to take any action. It is a definitional law that clarifies AI's legal status in Tennessee. Its significance is not compliance but framing: it confirms the judicial consensus that AI is a tool, not a responsible party. The firm that deployed the tool is the responsible party. This is the frame courts in every state are using to evaluate professional liability claims involving AI.
