The AI Tool Contract Your Firm Just Signed Probably Has a Gap — Here's What to Fix Before Something Goes Wrong
Published March 17, 2026 · By The Crossing Report
You signed the terms of service. You checked the data processing agreement. You noted the uptime SLA.
What you probably didn't check: what happens when the AI takes an action you didn't authorize, produces work that harms a client, or handles privileged information in a way that voids your confidentiality protections.
Standard SaaS contracts were written for tools that do exactly what you tell them to do. Agentic AI tools don't work that way — and the contracts haven't caught up.
The Problem With "Standard" AI Contracts
The legal profession noticed this first. Above the Law ran a piece in March 2026 titled "AI Contracts Are Moving Faster Than The Laws. In-House Counsel Can't Wait." The core finding: AI vendors are moving to agentic deployment (tools that plan, retrieve information, and take action autonomously) while contract terms still describe passive software.
For a large company with in-house legal, that's a problem for the GC to solve. If you run a 10-person accounting or law firm, you probably signed the vendor's standard terms and moved on.
Here's what that standard contract typically covers:
- Service uptime and availability
- Data retention and deletion timelines
- A liability cap (usually limited to fees paid in the prior 12 months, which is often very low)
- Confidentiality obligations for data you share with the vendor
Here's what it almost certainly does not cover:
- What the AI can and cannot do without your explicit approval
- Who is responsible when AI-generated work product causes client harm
- How the tool handles privileged communications or confidential client information
- Whether you can access an audit trail of what the AI did, in what order, and why
For passive tools — AI that suggests language you then accept or reject — this gap is manageable. For agentic tools that act on your behalf, it's not.
What "Agentic" Actually Means for Contracts
Harvey, Clio's AI suite, CoCounsel, and newer tools like August Live Assist are all moving toward agentic deployment. In practice:
- Harvey for M&A diligence: The AI retrieves documents, identifies issues, drafts a summary, and flags items for attorney review. At what point does "retrieval" become an action taken on your client's behalf?
- CoCounsel for case research: The AI searches, synthesizes, and delivers findings. If those findings contain an error and you file relying on them, who bears the cost?
- Clio Duo for client intake: The AI interacts with potential clients, gathers facts, and pre-populates matter data. If a client's privileged communication enters that workflow, what protection governs it?
The product capabilities are evolving faster than the terms. That's the gap.
Four Clauses to Add Before You Sign
You won't negotiate a custom contract with Harvey or Clio. But you can — and should — check for these four provisions before signing, and escalate to the vendor's enterprise team if they're missing.
1. Delegation scope limits
What actions can the AI take without requiring your confirmation?
A usable clause: "Vendor agrees that agentic features of the service will provide explicit disclosure to the subscribing professional before taking any action that sends, files, or submits materials to a third party on behalf of the professional or their clients."
If the contract is silent on this, the AI can take actions you didn't explicitly approve. That's the default in most current terms.
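What does a delegation-scope limit look like in operational terms? Here is a minimal sketch of the action-gating logic the sample clause above asks a vendor to implement. To be clear, this is hypothetical: none of the vendors named in this piece publish a control surface like this, and every action name below is invented for illustration.

```python
# Hypothetical sketch of a delegation-scope policy. No vendor's actual
# API or configuration is being quoted; all names are illustrative.

REQUIRES_CONFIRMATION = {            # actions that reach a third party
    "send_email_to_opposing_counsel",
    "file_with_court_or_agency",
    "submit_to_client_portal",
}

ALLOWED_AUTONOMOUSLY = {             # internal work a professional will review
    "retrieve_documents",
    "draft_summary",
    "flag_issue_for_review",
}

def gate(action: str) -> str:
    """Decide how a proposed agent action should be handled."""
    if action in REQUIRES_CONFIRMATION:
        return "pause_and_disclose"  # the disclosure the sample clause requires
    if action in ALLOWED_AUTONOMOUSLY:
        return "proceed_and_log"     # permitted, but recorded (see clause 4)
    return "block"                   # anything undefined defaults to denied

print(gate("file_with_court_or_agency"))  # -> pause_and_disclose
```

The design point is the last branch: when the policy is silent about an action, the safe default is denial. A contract that is silent gives you the opposite default.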
2. Failure mode liability
If AI-generated work product harms a client, standard liability caps are almost certainly insufficient.
Suppose your firm files a brief that relies on CoCounsel research containing a hallucinated citation, and the client suffers a consequence. The vendor's liability cap (12 months of fees, perhaps $3,600 on a base plan) is not a meaningful recovery mechanism.
Look for whether the contract distinguishes between tool failure (the product didn't work as described) and AI reasoning error (the product worked as described but the AI was wrong). Most contracts don't make this distinction. That matters.
3. Privileged information handling
Standard data processing agreements address confidentiality: the vendor won't share your data with others. That's not the same as privilege protection.
Attorney-client privilege requires the communication to have been made in confidence for the purpose of obtaining legal advice, and not disclosed to third parties outside the privilege. When an AI tool processes privileged communications as training data, context, or workflow input, whether privilege has been waived is an open legal question.
For law firms specifically: before deploying any AI tool that touches client communications or matter files, the DPA needs explicit language about whether client data is used for model training, whether there is a data isolation option, and what happens to client data if you cancel the subscription.
For accounting firms: the same analysis applies to confidential financial information, tax data, and anything that would be covered by Section 7216 or state CPA confidentiality statutes.
4. Audit trail access
If something goes wrong — a client dispute, a bar complaint, a malpractice claim — you need to be able to show what the AI did, in what sequence, and based on what input.
Many AI tools provide usage logs. Most don't provide a step-by-step reasoning trail of what the AI considered and why it produced a given output.
Before you rely on an AI tool for consequential work: verify that you can retrieve an artifact that shows the AI's work product and the inputs that produced it. If you can't explain the AI's process in a client dispute, you're explaining your own.
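What counts as a usable artifact? A minimal sketch, assuming nothing about any vendor's actual export format; every field name below is invented. The point is the elements a defensible record has to capture.

```python
# Hypothetical audit-trail record. No vendor schema is being quoted;
# the fields are invented to show what a defensible export should capture.

audit_record = {
    "matter_id": "2026-0142",
    "step": 3,                                # position in the agent's sequence
    "timestamp": "2026-03-12T14:07:31Z",
    "action": "draft_summary",
    "inputs": [                               # what the AI actually consulted
        "dms://diligence/acme/msa_v3.pdf",
        "dms://diligence/acme/disclosure_schedule.xlsx",
    ],
    "output_ref": "dms://workproduct/summary_draft_7.docx",
    "model_version": "vendor-model-2026-02",  # ties the output to a specific model
    "reviewed_by": None,                      # filled in when a professional signs off
}
```

A plain usage log ("user ran a research query at 14:07") gives you a timestamp and nothing else. A record like the one above is what lets you reconstruct sequence and inputs when a dispute requires it.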
One Clause to Add to Your Own Engagement Letters
You are on both sides of this problem. As a firm using AI, the contract gaps above expose you to vendor-side liability. As a professional services provider, AI use exposes you to client-side liability if the AI produces harmful work.
The standard engagement letter doesn't address AI. One sentence closes the most obvious gap:
"[Firm name] uses AI-assisted tools in service delivery. All AI-generated work product is reviewed by a licensed [attorney/CPA/consultant] before delivery to client."
That sentence does three things: discloses AI use (increasingly required by bar rules and regulatory guidance), establishes the human review standard, and creates a record that you are not using AI as a substitute for professional judgment.
It won't prevent a malpractice claim. But it establishes the standard of care you are operating under — which matters when the claim is evaluated.
The Question Your Clients Will Start Asking
Above the Law's March 2026 piece noted that in-house legal teams at corporations are now auditing their law firms' AI governance before issuing engagement letters. The specific question: "What are your firm's policies for AI use, how is work supervised, and what are your contract terms with your AI vendors?"
This hasn't hit small professional services firms in volume yet. It will.
A written AI use policy and a documented vendor contract review are the two items that answer that question. Neither requires outside counsel. Both can be completed in a day.
The firms that do this in Q2 2026 will have it ready when the question arrives. The firms that don't will be assembling it under pressure.
Where to Start This Week
- Pull your current AI vendor contracts (Harvey, Clio, CoCounsel, Lawmatics, or whatever tools you use). Check for the four clauses above. Note the gaps.
- Add the one-sentence AI disclosure to your standard engagement letter.
- If your primary AI vendor lacks a DPA that addresses privileged information, request their enterprise DPA. Most vendors have one — it's just not in the default click-through terms.
You don't need a lawyer to do this. You need 90 minutes and the vendor's documentation.