AI Liability Is Now an Insurance Question — Here's What Your Carrier Is About to Start Asking

Published March 15, 2026 · By The Crossing Report · 6 min read


Summary

For two years, the AI liability conversation in professional services has been framed as a court compliance issue — citation verification, ABA Opinion 512, state court standing orders. That framing missed a second exposure channel that is now opening: professional liability insurance. As state courts sanction attorneys for AI hallucinations and state legislatures create private rights of action for AI-caused harm, E&O and professional liability carriers are beginning to ask whether your firm has the governance infrastructure to manage AI risk. Here's what they're looking for — and what to have ready at your next renewal.


The Three-Source Convergence

Three developments are converging to push AI governance onto professional liability insurers' radar:

1. Court sanctions are creating claims precedent. Federal circuit courts and now state appellate courts are sanctioning attorneys for AI-generated filing failures (see: In re Nwaubani, 4th Circuit, March 2026; the March 2026 California state appellate sanction). Each sanctions opinion creates case law that defines what the duty of care looks like in AI-assisted professional work. That case law becomes the standard insurers apply when evaluating whether a covered professional met their obligations.

2. State liability bills are expanding private rights of action. Oregon HB 4154 (passed March 2026) gives individual clients the right to sue firms whose AI systems caused harm through deception — without requiring proof of specific financial damage. Illinois SB 3601 (the Professional AI Oversight Act, pending) would require disclosure of AI use in professional service delivery and creates liability for non-disclosure. These bills create client-initiated claims that go directly to E&O coverage.

3. Wiley Law's March 2026 analysis of state AI bills specifically flagged insurance implications. The analysis noted that state AI liability bills "expand insurance risk exposure for professional services firms beyond traditional E&O coverage" — meaning the coverage questions are not just about whether a policy responds to a claim, but whether firms have the governance controls that keep them within the scope of their coverage.


What Carriers Are Beginning to Ask

The insurance industry's response to AI risk is at an early stage, but the direction is clear from how carriers have handled analogous emerging risks (data breach, cybersecurity, FCPA compliance). The pattern: insurers add governance questions to renewal applications, use the answers to underwrite risk, and eventually exclude or surcharge firms that cannot demonstrate adequate controls.

For AI, the early version of those questions looks like this:

Question 1: Do you have a written AI use policy? An undocumented AI practice is an underwriting red flag for the same reason an undocumented data security practice is: if something goes wrong, there is no documented standard against which to evaluate whether the firm met its duty of care. A written AI use policy — even a one-page document — establishes that the firm made a deliberate choice about how to use AI, rather than allowing an ad hoc free-for-all.

Question 2: Do you verify AI-generated outputs before delivering them to clients? This is the core professional duty question. The legal standard — whether for bar compliance, malpractice, or insurance purposes — is whether a licensed professional exercised professional judgment over the AI's work before it reached the client. An automated AI output that goes directly to a client without human review is the scenario that produces both sanctions and claims. The question insurers are asking is: does your firm have a documented checkpoint that prevents this?

Question 3: Have you trained staff on AI risk? If an employee uses an AI tool in a way that creates a client harm — and there was no training, no policy, no documented expectation — that creates a coverage argument that the firm's supervision was inadequate. Carriers that cover professional services firms are accustomed to asking about supervision and training (for associates, for paralegals, for staff with delegated professional tasks). AI is the new category being added to that inquiry.


The Shadow AI Problem

Many law firm and accounting firm owners will answer Question 1 ("Do you have a written AI policy?") with "no, because we don't use AI tools in client work."

Before relying on that answer, audit it.

Research on professional services firms consistently finds a gap between personal AI use and firm policy coverage: the majority of staff at firms without a formal AI policy are using personal AI tools (ChatGPT, Claude, Gemini, Copilot) on firm work anyway. If a paralegal is using ChatGPT to research case facts, if an associate is using Claude to draft contract provisions, if an accountant is using Copilot to generate financial analysis — those uses are happening whether or not the firm has a policy.

The shadow AI liability risk is not theoretical. When a client claims harm from an AI-assisted output, the discovery question will be: what AI tools were used, by whom, and what oversight existed? "We don't have an AI policy because we don't use AI" is not a defensible answer if staff were using it anyway.


The Minimum Viable AI Governance Package

The goal here is not ISO 42001 certification (appropriate for large firms; cost-prohibitive for a 10-person practice). The goal is documentation that demonstrates reasonable professional controls — enough to satisfy a renewal application question and enough to defend a coverage dispute.

Four components, achievable in a weekend:

1. Written AI use policy (1-2 pages). State clearly which AI tools the firm uses, for which tasks (document drafting, research, client communication, internal administration), and the human review requirement for each category. Be specific — "we use CoCounsel for legal research with citation verification before any citation is used in a filing" is more useful than "we review AI outputs."

2. Output verification protocol. A documented workflow — even a checklist — that captures what happens between AI output and client delivery. For law firms: who reviews it, what they check (citations, factual accuracy, applicable law, professional judgment), and what they sign off on. For accounting firms: who reviews AI-generated analysis, what the formula and source verification steps are, and what the professional review sign-off looks like.

3. Staff training record. Documentation showing that all staff who use AI tools in client-related work have been briefed on the firm's AI policy and its verification requirements. This doesn't require an LMS or a formal training program — a documented team meeting where the policy was reviewed, with attendees listed, is sufficient.

4. Client disclosure language. Standardized language for engagement letters that discloses your firm's use of AI tools where required by bar rules, state law, or court standing orders. ABA Opinion 512 requires informed consent for the use of AI with client confidential information, and several states require disclosure in client communications. Put template language in your standard engagement letter now, before a client or a court asks.


The Timeline

The insurance market's incorporation of AI governance questions will not happen uniformly or immediately. Some carriers will move in 2026 renewal cycles; others will take 18-24 months. But the underlying dynamic — AI claims accumulating, courts defining the standard of care, state laws expanding liability exposure — is already in motion.

The firm owners who will be best positioned at their 2027 renewal are the ones who built the governance documentation in 2026, not the ones who read about it in their renewal packet.

The same four-document package that satisfies an underwriter also satisfies bar compliance, state AI disclosure requirements, and court standing orders. Building it once covers all four obligations.


Sources: Wiley Law: 2026 State AI Bills That Could Expand Liability, Insurance Risk (March 2026) | Greenberg Traurig eDiscovery Watch — court sanctions pattern (March 2026) | In re Nwaubani, 2026 WL 687194 (4th Cir. Mar. 11, 2026)

Frequently Asked Questions

Are professional liability insurers really asking about AI governance now?

The pattern is emerging, not universal yet. Wiley Law's March 2026 analysis of state AI bills specifically identified the insurance risk dimension: state laws that create private rights of action for AI-caused harm (Oregon HB 4154) and mandate professional oversight of AI (NH SB 640) expand professional liability exposure beyond traditional E&O coverage. Insurers are responding. Early-adopting carriers are already incorporating AI governance questions into renewal applications for law firms and accounting firms. The remainder of the market will follow as AI-related claims begin to accumulate.

What three questions is my professional liability carrier likely to start asking about AI?

Based on the emerging pattern from Wiley Law and related legal analysis: (1) Do you have a written AI use policy? Carriers are beginning to treat an undocumented AI practice the same way they treat an undocumented data security practice — as an underwriting risk. (2) Do you verify AI-generated outputs before delivering them to clients? This is the core professional duty question: did a licensed professional review and approve the AI output, or did it go to the client directly? (3) Have you trained staff on AI risk? Specifically: do employees know what AI hallucination is, what the firm policy is on AI use in client-facing work, and what the verification requirement is?

How does AI liability connect to existing E&O coverage?

Traditional E&O coverage responds to professional errors and omissions — situations where a professional's advice or work product was wrong, causing client harm. AI doesn't create new categories of liability; it creates new pathways to existing professional liability. If an attorney files a brief with AI-generated citations that don't exist, that's a professional failure within the scope of traditional E&O coverage — but the AI involvement may raise underwriting questions about whether your policy covers AI-assisted work, what your review procedures were, and whether you had adequate controls. The risk is not just whether your carrier pays; it's whether they pay without a coverage dispute.

What does a minimum viable AI governance policy look like for a small professional services firm?

Four components: (1) AI use policy — a written document stating which AI tools the firm uses, for which tasks, and with what human review requirements before output reaches a client. This can be one page. (2) Output verification protocol — a documented workflow showing that AI-generated work product is reviewed by a licensed professional before delivery. For law firms: citation verification, factual accuracy check, and application of professional judgment to AI conclusions. For accounting firms: formula verification, source confirmation, and professional review of AI-generated analysis. (3) Staff training record — documentation that all staff who use AI tools have been briefed on firm policy and AI risk. (4) Client disclosure language — standardized language for engagement letters, service agreements, and matter-specific communications that discloses AI use where required by bar rules, state law, or court standing orders.

What if I don't currently use AI tools in client work? Do I still need an AI governance policy?

You need to know the answer to that question. If you've never assessed whether your staff uses AI tools in their work, you likely have a shadow AI problem: employees using personal AI tools (ChatGPT, Claude, Copilot) on firm work without a policy in place. Compliance Week's AI governance research found that 83% of professional services organizations report governance gaps, and the gap between personal AI use (92%) and firm AI policy coverage (34%) is the most common pattern. Your carrier is not only interested in what you officially use — they will ask whether you have controls in place for what staff might use.
