After Heppner: The 3-Step Checklist to Keep AI Work Product Protected
Published March 16, 2026 · By The Crossing Report
For law firm owners and accounting firm owners
A partner at a 12-person law firm called me last week with the version of the question I've heard a dozen times since the Heppner ruling came out in February. "Okay, I read the ruling. My team is freaked out. So what do we actually do?"
The Heppner ruling itself — a federal judge finding that a defendant's AI-generated documents weren't protected by attorney-client privilege — got significant coverage. What got less attention was what came out in the weeks after: BigLaw firms and legal publishers released detailed guidance on what a privilege protection roadmap actually looks like when AI is part of your workflow.
Bloomberg Law, Venable LLP, and Morgan Lewis all published guidance in late February and March 2026. The through-line in all three is the same framework — and it gives law and accounting firms an actionable checklist, not just a warning.
What the Heppner Ruling Actually Said
The ruling was narrow but significant. In United States v. Heppner (SDNY, February 17, 2026), the court found that documents created using a consumer AI platform — in this case, Anthropic's personal-tier Claude.ai — weren't protected by attorney-client privilege, because submitting material to a consumer platform voluntarily shares it with a third party, destroying the confidentiality that privilege requires.
The defendant created the documents himself, on a personal AI account, to prepare for attorney meetings. The court found that the consumer AI platform's terms of service — which permit the company to use inputs for model improvement — meant there was no reasonable expectation of confidentiality.
The exposure the ruling creates is clear. The roadmap out of it is what most firms haven't read yet.
The Privilege Roadmap BigLaw Published
Bloomberg Law's March 2026 analysis is titled "Groundbreaking AI Privilege Opinion Offers Roadmap for Counsel." Venable and Morgan Lewis published similar analyses in the same window. The framework they converge on is what lawyers call the Kovel doctrine.
The Kovel doctrine (from United States v. Kovel, 1961) extends attorney-client privilege to third parties that act as agents of counsel under a confidentiality agreement. It was originally applied to accountants helping attorneys understand financial information. The BigLaw guidance in March 2026 extends the logic to AI vendors.
The argument: if an AI tool operates under a data processing agreement that establishes it as a confidential agent assisting legal work — and the attorney directs the AI use, not the client — the Kovel framework provides the best available privilege protection for AI-assisted work product.
This is not settled law. It is the strongest available argument. But it requires all three elements to be present.
The 3-Step Checklist
Step 1: The attorney or professional — not the client — directs the AI use.
In Heppner, the client created the documents using AI, then shared them with counsel. That's the exposure scenario. The protected version runs the other way: the attorney uses AI as part of their own work process, under their professional supervision.
For law firms: any AI-assisted research, drafting, or analysis should be initiated and directed by the attorney working on the matter — not delegated to the client, and not performed by the client before engaging counsel.
For accounting firms: the CPA or tax attorney who owns the matter should be the one directing AI-assisted analysis. Client-facing AI tools (where clients use AI to prepare materials, then submit them) create the same exposure Heppner identified.
Step 2: Sign a data processing agreement with your AI vendor that establishes it as a confidential agent.
The consumer AI account exposure is contractual: the platform's terms permit use of your inputs. An enterprise-tier account with a negotiated DPA reverses that: the vendor contractually commits that your data stays yours, is not used for training, and is handled under confidentiality obligations.
This is also the foundation of the Kovel argument: if your AI vendor has signed a DPA that establishes the vendor as a confidential agent assisting your professional work, you have a contractual basis to argue Kovel protection.
What satisfies this: Microsoft Copilot (commercial M365 subscriptions), Claude for Enterprise, ChatGPT Enterprise, Harvey, Thomson Reuters CoCounsel, Clio Draft, Spellbook. All have enterprise DPAs.
What doesn't: ChatGPT Free, ChatGPT Plus, Claude.ai personal accounts, Google Gemini consumer tier. These platforms retain rights to use your inputs.
The practical entry point for small firms already on Microsoft 365: adding Copilot at $21/user/month gives you a DPA-protected AI environment for Word, Outlook, Excel, and Teams — the tools your team is already using.
Step 3: Document that a professional supervised and reviewed AI outputs before anything left the firm.
Morgan Lewis's guidance on this is direct: the attorney must direct every step of AI use, and the AI must operate as an agent of counsel. Documentation that this happened — even brief matter-file entries — creates the evidentiary foundation if a privilege question ever arises.
For each matter where AI is used in connection with privileged or sensitive client work, a brief entry is sufficient:
- Which AI tool was used
- That it operates under a data processing agreement
- That the professional — not the client — directed the AI use
- That outputs were reviewed by the professional before being acted on or transmitted
This takes two to three minutes per matter. It is the difference between being able to make the Kovel argument and not having a factual basis to make it at all.
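For firms that keep matter files electronically, the four-item entry above can be reduced to a fill-in template so it gets done the same way every time. A minimal sketch in Python (the function name, field labels, and matter-number format are illustrative assumptions, not drawn from any bar, court, or IRS standard):

```python
from datetime import date

def ai_use_entry(matter_id: str, tool: str, dpa_in_place: bool,
                 directed_by: str, reviewed_by: str) -> str:
    """Format a brief matter-file entry documenting supervised AI use.

    Covers the four checklist items: which tool, DPA status, who
    directed the AI use, and who reviewed outputs before anything
    left the firm.
    """
    dpa_note = "yes" if dpa_in_place else "NO (personal-tier account; flag for upgrade)"
    return (
        f"[{date.today().isoformat()}] AI use log, matter {matter_id}\n"
        f"  Tool: {tool}\n"
        f"  Data processing agreement in place: {dpa_note}\n"
        f"  AI use directed by (professional, not client): {directed_by}\n"
        f"  Outputs reviewed before acted on or transmitted by: {reviewed_by}"
    )

# Example: a hypothetical matter at a firm on Microsoft 365 commercial
entry = ai_use_entry("2026-0147", "Microsoft Copilot (M365 commercial)",
                     dpa_in_place=True, directed_by="J. Partner",
                     reviewed_by="J. Partner")
print(entry)
```

Pasting the output into the matter file takes seconds, which is the point: the entry only has evidentiary value if it is made contemporaneously, matter by matter.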
For Accounting and Tax Firms
Morgan Lewis extended the Heppner reasoning to tax work explicitly in their March 2026 analysis. The exposure scenario for accounting firms: a CPA or tax attorney processes client financial data — SSNs, financial statements, tax returns — through a personal AI account. The client later faces an IRS examination or regulatory investigation. The work product protections that normally shield the firm's advisory process may not hold for anything that passed through an unsecured platform.
The three-step checklist applies identically. The only addition for accounting and tax work: pay particular attention to data types. Tax identification numbers, financial statements, and any information subject to IRS security regulations require enterprise-tier handling regardless of privilege considerations.
What to Do This Week
If you're a law or accounting firm owner who read the Heppner ruling and has been waiting for the "what to do" version:
1. Audit your AI tool stack. List every AI tool your team currently uses for client work. Flag any personal-tier accounts (ChatGPT Plus, Claude.ai personal, consumer Google Gemini). These are the exposure.
2. Upgrade the enterprise tools you already have access to. If you're on Microsoft 365 commercial, Copilot is one admin click away at $21/user/month. If you're already using Claude.ai, the enterprise tier includes the DPA your current account lacks.
3. Add the documentation habit to your matter intake. A single paragraph in your matter-opening checklist: which AI tool, what DPA status, professional-directed and reviewed. Done.
The Heppner ruling created real exposure. The BigLaw privilege roadmap published this month gives you the framework to close it. The checklist above is three items. One afternoon to implement.
The Crossing Report is a weekly intelligence newsletter for professional services firm owners. Subscribe here.
Related Reading
- Your Firm's AI Conversations Aren't Private — A Federal Court Just Clarified Why
- AI & Attorney-Client Privilege: The Heppner Ruling — the full privilege framework for AI-assisted legal work, what's protected and what isn't
Frequently Asked Questions
What is the Kovel framework and how does it apply to AI use?
The Kovel doctrine (from United States v. Kovel, 1961) extends attorney-client privilege to third parties that act as agents of counsel under a confidentiality agreement — originally applied to accountants helping lawyers understand financial information. The framework requires: (1) the attorney retained the third party to assist in providing legal advice; (2) communications with the third party were made in confidence; (3) the third party acted as a functional extension of the attorney-client relationship. Bloomberg Law and the BigLaw firms Venable and Morgan Lewis published March 2026 guidance extending Kovel reasoning to AI vendors: if an AI tool operates under a data processing agreement that establishes it as a confidential agent assisting legal work, the Kovel framework may provide the best available privilege protection. This is not settled law — it is the strongest available argument — but it requires all three elements to be present.
Does the Heppner ruling affect accounting and tax firms, not just lawyers?
Yes. Morgan Lewis's March 2026 analysis ('When AI Meets Privilege: Early Court Decisions') explicitly extended the Heppner reasoning to accounting and tax work. The mechanism: when a CPA or tax attorney processes client financial data through a personal AI account, that information is shared with a third party. If the client later faces an IRS examination, regulatory investigation, or litigation, the work product protections that normally shield the firm's advisory process may not apply to anything that passed through an unsecured AI platform. The fix is the same: use an enterprise-tier AI tool with a data processing agreement, ensure the professional — not the client — directs the AI use, and document human review before outputs go anywhere.
What's the difference between attorney-client privilege and work product protection for AI use?
Attorney-client privilege protects confidential communications between an attorney and client made for the purpose of obtaining legal advice. Work product doctrine protects materials prepared by or for an attorney in anticipation of litigation. Heppner applied to attorney-client privilege — the documents at issue were prepared by the client using AI before sharing with counsel. Work product doctrine is governed by different rules but faces similar AI exposure: if a law firm prepares trial strategy documents using an unsecured personal AI platform, those documents may not be protected under work product if opposing counsel argues they were voluntarily shared with a third party. The practical takeaway: both protections require that the AI tool you use is subject to a data processing agreement that prevents the vendor from accessing or using client information.
Which AI tools satisfy the requirements for law firm work product protection?
The tools that satisfy the DPA requirement for both attorney-client privilege and work product protection are enterprise-tier products with negotiated data processing agreements: Microsoft Copilot (commercial M365 subscriptions), Claude for Enterprise, ChatGPT Enterprise, and purpose-built legal AI platforms like Harvey, Thomson Reuters CoCounsel, Clio Draft, and Spellbook. These tools have contractual commitments that they do not use your data for model training and do not share it with third parties. Personal accounts — ChatGPT Free, ChatGPT Plus, Claude.ai personal tiers, and Google Gemini consumer tier — do not provide this protection. For firms not ready to invest in a dedicated legal AI platform, Microsoft Copilot at $21/user/month added to an existing M365 commercial subscription is the most accessible compliant entry point.
Should a small law or accounting firm document their AI use for every client matter?
For matters with any litigation, regulatory, or investigation exposure — yes. The documentation burden is low: a brief entry in the matter file noting which AI tool was used, that it is subject to a data processing agreement, that the professional directed the AI use, and that outputs were reviewed before being acted on or transmitted. This takes two to three minutes per matter. The evidentiary value is significant: if a privilege question ever arises, contemporaneous documentation that the AI operated as an agent of counsel under controlled professional direction is the factual foundation of the Kovel argument. For routine transactional work with no regulatory exposure, the documentation is still good practice but the risk is lower.