Your AI Tools Are Not Covered. Here's What Your Professional Liability Policy Actually Says.

Published April 14, 2026 · By Martin Adey · The Crossing Report · 14 min read


Here's the scenario. Your firm has been using AI tools on client work for the past year — drafting correspondence, reviewing documents, preparing analyses. A partner runs a tax return through an AI-assisted tool. The output contains an error. It reaches the client. A complaint is filed.

You call your insurance broker. And you hear something you didn't expect: "That claim may not be covered under your current policy."


That scenario is not hypothetical. It's what ABA Journal and ALPS Insurance have been documenting since 2025: most professional liability policies written before 2024 have an unaddressed gap for AI-assisted service delivery. The policy language defines "professional services" in terms of licensed-professional judgment — and if AI output wasn't documented as reviewed by a licensed professional before it reached the client, insurers are beginning to argue that covered professional judgment was never exercised.

This guide covers what the gap is, what 2026 insurers are doing about it, how new state AI laws are adding to your exposure, and the three documents you need before your next renewal — whether or not your carrier has started asking for them yet.


The Coverage Gap: What Your Policy Actually Says

Most professional liability and E&O policies define "professional services" in terms of acts, errors, or omissions committed in the performance of licensed-professional judgment. That definition was written in a world where a licensed professional was the only one producing work product.

AI changes the equation. When an AI tool produces output that goes to a client, the insurer's central question is: was there a licensed professional exercising judgment over that output, or did AI output go directly to the client?

If your documentation can't answer that question — if there's no record that a CPA reviewed the AI-assisted return, no attorney sign-off on the AI-drafted motion — then the insurer's position is that professional judgment may not have been exercised. And if professional judgment wasn't exercised, the policy's coverage condition may not have been met.

The practical test: Pull out your current professional liability or E&O policy. Find the definition of "professional services." Does the language contemplate AI-assisted work? If it doesn't mention AI at all, you have the gap. The policy was written as if every piece of work product came directly from a licensed professional's hands — because when it was written, that was true.

The insurance market has already begun responding with formal exclusion language. ISO Forms CG 40 47 and CG 40 48 — the template endorsements used across most commercial general liability policies — now explicitly exclude claims arising from generative AI outputs. E&O and professional liability carriers are following the same direction, adding AI exclusion language as a standard clause at renewal. The ABA Journal flagged it in 2025: these exclusions often appear without the policyholder being explicitly notified. You may have a coverage gap in your current policy right now and not know it.


What the 2026 State AI Bills Are Adding to Your Exposure

Even if your professional liability policy holds up, two 2026 state AI bills are creating new liability paths that sit entirely outside traditional E&O scope.

Oregon HB 4154 creates a private right of action for AI-caused harm. That means a client harmed by your firm's AI-assisted work can bring a direct statutory claim against you — without having to prove traditional professional negligence. The exposure path is new and sits outside the coverage your E&O policy was written to address.

New Hampshire SB 640 mandates documented professional oversight of AI in certain decision-making contexts. The law creates specific duty-of-care requirements for professionals using AI tools that touch decisions affecting clients. If your oversight practice doesn't meet the standard, you have documented regulatory exposure — which may also not be covered by your current E&O policy.

Wiley Law's March 2026 analysis of these bills describes them as creating "expanding liability exposures beyond what current policies were written to cover." The mechanism is the same in both cases: the bills create statutory duties or causes of action that are newer than your policy language, which means the match between what you're exposed to and what you're covered for has drifted.

Map to firm type: Law firms in states with attorney-supervision AI rules face the sharpest intersection — the state bar obligations, the ABA Opinion 512 competency standard, and the new statutory exposure are all pulling in the same direction at once. Accounting firms face a parallel but distinct exposure: the IRS has no formal AI guidance for return preparation as of April 2026, creating a regulatory gap that compounds the E&O gap. Firms in Oregon and New Hampshire should move first. Firms in other states should watch the legislative calendar for 2026–2027 — similar bills are moving in at least six other states.


What Insurers Are Doing in 2026

The professional liability market is not moving as one. Carriers are at three distinct stages, and where your carrier sits determines what you're dealing with at renewal.

Early adopters (a minority of carriers, but the direction the market is moving): These carriers have already added AI governance questionnaires to renewal applications. They're offering AI policy riders — endorsements that explicitly extend E&O coverage to AI-assisted service delivery, conditioned on documented governance. Some are providing premium incentives for firms aligned with the NIST AI Risk Management Framework. ALPS Insurance — the specialist carrier for law firms — has been among the most explicit, publishing guidance on exactly what governance documentation is required for affirmative coverage. Munich Re is among the first reinsurers offering an AI coverage rider for law firms.

The majority of carriers (silent): Policy language hasn't been updated. AI use is neither explicitly covered nor excluded. This feels like the safe middle — but silence in an insurance contract is not coverage. A claim involving AI will trigger a coverage question, and the outcome will depend on how the existing policy language applies to a situation it was never written to address.

Late adopters (emerging): These carriers are beginning to raise premiums at renewal for firms with significant AI use and no documented governance infrastructure. They're not yet asking detailed questions, but they're flagging risk through pricing. Premium increases without explicit conversation about AI coverage are the early signal that the gap is being priced in, not closed.

The direction of travel is clear: documented governance is the path to affirmative coverage; undocumented AI use is the path to exclusions, disputes, and premium uncertainty.


The Three Questions Your Carrier Will Start Asking

Whether your carrier is in the early-adopter or silent category today, these three questions are the pattern emerging among the carriers that have started asking. The firms with answers ready will have the shortest, least expensive renewal conversations.

One: Do you have a written AI use policy?

Carriers are beginning to treat an undocumented AI practice the same way they treated an undocumented cybersecurity practice a decade ago: as an underwriting risk that requires either documentation, a premium adjustment, or an exclusion. A written policy that names your approved AI tools, defines which tasks they can be used for, and sets human review requirements before AI output reaches a client is the threshold document. Without it, you have nothing to put in front of your carrier.

Two: Do you verify AI-generated outputs before client delivery?

This is the core professional duty question — and it's the same question a court or bar association would ask. Did a licensed professional review and approve the AI output before it reached the client? For law firms, ABA Opinion 512 places the competence obligation on the supervising attorney. For accounting firms, AICPA AI guidance creates data confidentiality obligations for CPAs using AI with client data. The carrier's version of this question is simple: can you show that a human with a license signed off on every AI-assisted deliverable?

Three: Have you trained staff on AI risk?

Shadow AI — staff using personal AI subscriptions (ChatGPT, Claude, Microsoft Copilot) on client matters without a firm policy — is the exposure neither the firm owner nor the carrier can see. Compliance Week research found that 83% of organizations use AI but only 25% have strong governance frameworks. If an incident traces back to an undisclosed personal AI tool and you told your carrier your firm doesn't use AI, the coverage dispute is compounded by a potential misrepresentation question. Training records — who was briefed, on what, when — are how you demonstrate you addressed this.


The AI Policy Rider: What It Is and How to Get One

An AI policy rider (sometimes called an AI governance endorsement) is an amendment added to your E&O or malpractice policy that explicitly extends coverage to AI-assisted service delivery. Think of it as the insurer formally agreeing, in writing, that your use of AI in client work is covered — rather than leaving that question to be resolved at claim time.

As of April 2026, a minority of carriers offer them. That number is growing, but most firm owners won't hear about riders unless they ask. Your broker may not have a product to offer yet — but that conversation itself is informative.

Typical conditions for coverage under an AI rider:

  • A written AI use policy (approved tool list, permitted tasks, human review requirements)
  • A documented output verification protocol (evidence that a licensed professional reviewed AI work before client delivery)
  • Staff training records (who was briefed, on what AI tools and risks, when)

How to request one: Call your current broker and ask: "Do you have an AI rider or AI governance endorsement available for my professional liability policy?" If the answer is no, ask when they expect to offer one. If they don't know, that's a signal to evaluate whether your current broker is tracking the market. Independent brokers who specialize in professional services coverage are ahead of generalist brokers on this.

Cost context: Where riders are available, they typically add 5–15% to the base premium. That's a relatively small number compared to the exposure it closes — and compared to the cost of defending an uncovered claim.


What to Do Before Your Next Renewal: The 3-Document Protocol

These three documents take roughly half a day to create. They reduce your exposure whether or not your carrier is asking for them yet. When carriers do ask, you're ready. When a claim is filed, they're the documentation that shows professional oversight was in place.

Document 1: Written AI Use Policy

One to two pages. It should cover: which AI tools are approved for firm use, for which specific tasks, what the human review requirement is before AI output reaches a client, what's prohibited (client data in non-enterprise tools, AI output delivered without review), and who owns compliance. This document is the foundation for every carrier conversation and every coverage argument. The ABA Opinion 512 framework and the AICPA AI guidance both assume this kind of written standard exists.

For a template you can adapt for your firm type, see: How to Write an AI Policy for a Professional Services Firm (2026).

Document 2: AI Output Review Checklist

Task-specific. What does a licensed CPA review before signing an AI-assisted return? What does an attorney review before sending AI-drafted correspondence to a client? One page per task type is sufficient. The checklist doesn't need to be elaborate — it needs to exist and be followed. This is the documentation that answers the carrier's output-verification question and, if a claim is ever filed, demonstrates that professional judgment was exercised.

Document 3: Staff Training Record

A log of who was trained, on what, and when. A signed acknowledgment form confirming that each staff member received firm AI policy training is sufficient as a starting point. What matters is that it exists and that it's updated when new staff join or when tools change. This document closes the shadow AI exposure by establishing a documented standard of care.

Together, these three documents define the governance floor for professional services firms using AI in 2026. They took a decade to become standard for cybersecurity. The AI equivalent is arriving faster.


By Firm Type: Where the Coverage Gap Is Sharpest

The gap is real across professional services. But it concentrates differently depending on your firm type.

Law firms

Attorney supervision ethics rules intersect directly with E&O coverage in a way they don't for other firm types. ABA Opinion 512 establishes that attorneys have a competency obligation to supervise AI-generated work product. State bar equivalents (COPRAC in California, others following) are making that obligation explicit. The convergence is dangerous: if AI-generated work product wasn't reviewed by a licensed attorney under a documented oversight protocol, you may have ethics exposure, malpractice exposure, and an E&O coverage gap simultaneously.

Federal courts have begun sanctioning attorneys for AI citation errors (2025–2026). Those sanctions create a documented record that AI use without adequate oversight caused harm — which is exactly the fact pattern that triggers malpractice claims.

CPA and accounting firms

The IRS has not issued formal guidance on AI-assisted return preparation as of April 2026. That regulatory silence creates a gap that compounds the E&O gap: there's no federal standard you can point to demonstrating your AI-assisted practice met the applicable standard of care. Basis AI and similar tools are already handling complex return preparation with minimal human oversight — but the insurance industry has not yet aligned with that practice.

The AICPA AI guidance places data confidentiality obligations on CPAs using AI with client data. If your AI tool is a non-enterprise consumer product and you're running client financial data through it, you may have an AICPA compliance issue alongside your E&O exposure.

Consulting and staffing firms

E&O coverage for consulting and staffing firms is typically broader than for law or accounting — but the governance infrastructure is often less structured. The exposure is AI-generated deliverables that clients rely on for consequential decisions: market analyses, strategic forecasts, candidate recommendations, workforce plans. When a client acts on an AI-generated recommendation and suffers a harm, the liability question is: was there a professional exercising judgment over that recommendation, or did AI output go directly to the client?

Staffing firms face an additional layer: AI-assisted screening and matching decisions that may intersect with employment law. If your AI-assisted hiring recommendation is later found to have produced a discriminatory outcome, the exposure may sit outside your current E&O policy entirely.


Frequently Asked Questions

Does professional liability insurance automatically cover AI mistakes?

No — not automatically, and increasingly not at all for undocumented AI use. Most professional liability and E&O policies define covered "professional services" in terms of licensed-professional judgment. If AI-generated output was not documented as reviewed by a licensed professional before reaching the client, the insurer's position may be that covered professional judgment was never exercised. ISO Forms CG 40 47 and CG 40 48, now appearing as standard clauses in commercial general liability renewals, explicitly exclude claims arising from generative AI outputs. E&O and professional liability carriers are following the same direction. If your policy was renewed in the past 18 months without a specific conversation about AI coverage, you likely have a gap you don't know about. The ABA Journal documented this pattern in 2025: exclusions appear in renewal documents without explicit notification to policyholders.

What is an AI policy rider for professional liability insurance?

An AI policy rider (or AI governance endorsement) is an amendment added to your E&O or malpractice policy that explicitly extends coverage to AI-assisted service delivery. It's the insurer formally agreeing in writing that AI-assisted professional work is covered — rather than leaving that question to be resolved at claim time. As of April 2026, a minority of carriers offer them, but that number is growing. Typical conditions: a written AI use policy naming approved tools and tasks, a documented human review protocol, and staff training records. Cost where available: 5–15% added to base premium. How to request: ask your broker directly whether they offer an AI rider or AI governance endorsement. If they don't have a product yet, ask when — and consider whether your broker is tracking this market.

What documentation should I have before my next insurance renewal?

Three documents. First: a written AI use policy — one to two pages naming which tools are approved, for which tasks, and what human review is required before AI output reaches a client. Second: an AI output review checklist — task-specific documentation showing that a licensed professional signed off on AI-assisted work product before client delivery. One page per task type is sufficient. Third: a staff training record — a log of who was trained on firm AI policy and AI risk, what tools were covered, and when. A signed acknowledgment form works. These three documents take half a day to create and represent the governance documentation carriers are moving toward requiring. They also protect you if a claim is filed, regardless of what your policy says.

How are 2026 state AI laws affecting my professional liability exposure?

Two bills in particular are creating new liability paths outside traditional E&O scope. Oregon HB 4154 creates a private right of action for AI-caused harm — clients harmed by AI-assisted professional work can bring a direct statutory claim without the usual malpractice proof standard. New Hampshire SB 640 mandates documented professional oversight of AI in certain decision contexts, creating explicit duty-of-care requirements that may not align with what your current policy was written to cover. Wiley Law's March 2026 analysis describes these as creating "expanding liability exposures beyond what current policies were written to cover." Law firms in attorney-supervision rule states face the sharpest intersection. Firms in Oregon and New Hampshire should move now. Firms elsewhere should watch the 2026–2027 legislative calendar — similar bills are moving in at least six other states.

What should I ask my insurance broker about AI coverage?

Three specific questions. One: does my current professional liability or E&O policy explicitly cover or exclude claims arising from AI-assisted professional work or AI-generated outputs? Two: is there a rider or endorsement available that extends affirmative coverage to AI-assisted service delivery, and what governance documentation does it require? Three: what documentation — written AI policy, output review protocol, staff training records — would reduce my premium, confirm coverage, or prevent an exclusion at renewal? Get the answers in writing. If your broker doesn't know, that's a signal to find a broker who specializes in professional services coverage and has been tracking the 2025–2026 market shift. The brokers who are ahead of this are already advising their firm clients to build governance documentation before the next renewal cycle.


What to Do This Week

The professional liability insurance market is moving faster than most firm owners realize. ISO exclusion forms are already standard. Forward-thinking carriers are already asking about governance in renewal questionnaires. Oregon and New Hampshire have already changed the statutory landscape.

Your action this week: one conversation with your broker before your next renewal. Three questions — does my policy cover AI-assisted work, is there a rider available, what documentation do I need to have? Write down the answers.

If the gap exists, the 3-document protocol is where you start closing it. Three documents, half a day, before your next renewal conversation. That's the crossing.


The rules around AI and professional liability are being rewritten in real time. The Crossing Report tracks every insurer move, state AI bill, and governance standard that affects 5–50 person professional services firms — before your renewal conversation. Subscribe to The Crossing Report →
