Your Staff Is Using AI on Client Work Right Now — and Your Firm Has No Policy
Published: March 14, 2026 | By: The Crossing Report | 7 min read
Here are two data points from the same week:
The Wolters Kluwer 2026 Future Ready Lawyer Survey found that 92% of legal professionals personally use AI tools such as ChatGPT, Claude, and Gemini in their daily lives.
The 8am Legal Industry Report (March 2026) found that only 34% of firms have formal AI adoption policies, and that 43% of firms have no AI policy at all and no plans to create one.
Do the arithmetic.
If 92% of your staff use AI tools personally, and your firm has no policy governing their use on client matters, a meaningful portion of that personal use is already crossing into work. It's not hypothetical. It's happening in your firm right now. The question isn't whether it's happening; it's how much, with what data, and how much risk exposure you're carrying without knowing it.
This is what practitioners are calling "shadow AI." And the 58-point gap between personal adoption (92%) and firm governance (34%) is the single most underreported liability story in professional services right now.
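A back-of-envelope way to size that exposure for your own firm is sketched below in Python. Only the 92% adoption figure comes from the survey; the staff count, work-crossover rate, and matters-per-person numbers are placeholder assumptions to swap for your own.

```python
# Back-of-envelope shadow AI exposure estimate.
# Only the 92% personal-adoption rate comes from the Wolters Kluwer survey.
# Every other number is a placeholder assumption; substitute your own.

personal_adoption = 0.92    # Wolters Kluwer 2026: staff using AI personally
work_crossover = 0.50       # ASSUMPTION: half of personal users also use it on client work
staff_count = 12            # ASSUMPTION: headcount of a small firm
matters_per_person = 20     # ASSUMPTION: client matters each person touches per year

shadow_users = staff_count * personal_adoption * work_crossover
exposed_matters = shadow_users * matters_per_person

print(f"Staff likely using AI on client work: {shadow_users:.0f}")
print(f"Client matters potentially exposed per year: {exposed_matters:.0f}")
```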
Summary
The Wolters Kluwer 2026 survey found 92% of legal professionals personally use AI tools, while only 34% of firms have formal AI adoption policies. The arithmetic is clear: your staff is using AI on client work whether or not your firm has a policy. A minimum viable policy takes less than two hours to write — the distance between 'no policy' and 'compliant' is a decision, not a project.
What Shadow AI Looks Like in Practice
The scenarios aren't dramatic. They're mundane.
A paralegal has a client deadline. She drafts a memorandum in ChatGPT because she's been using it at home for months and it's faster. She edits the output, reviews it, sends it to the attorney. She thinks she's being efficient. She doesn't know that her personal ChatGPT account may use her inputs for model training by default. The client's confidential information has been processed by OpenAI under terms no one at the firm has reviewed.
A junior accountant is preparing a client's monthly bookkeeping summary. He pastes three months of bank transactions into Claude to generate the reconciliation narrative. The output is accurate. The client's financial data has been processed through Anthropic's consumer infrastructure — which may or may not be covered by a data processing agreement, depending on the account type. The firm has no idea.
A consultant uses a free AI tool to analyze a client's strategic plan and generate a SWOT framework. The AI tool's terms of service are standard consumer terms. The document contains the client's revenue figures, competitive positioning, and unreleased product plans.
In each case: no malice, no negligent intent, a genuine efficiency gain. And in each case: potential breach of confidentiality, professional liability exposure, and regulatory risk that the firm doesn't know it's carrying.
The Three Exposure Tracks
1. Data privacy
Consumer AI tools — personal ChatGPT accounts, free Claude, default-setting Gemini — typically train on user inputs unless you specifically opt out or pay for a business tier with different terms. Even tools that don't explicitly train on inputs may retain data for abuse monitoring, quality review, or other purposes under their standard terms.
When a staff member processes client data through a consumer account, that data has left your control. For law firms, that may implicate attorney-client privilege. For accounting firms, CPA-client confidentiality rules. For consulting firms, NDAs and data handling agreements with clients. The exposure is real even if nothing goes wrong, because the confidentiality obligation was breached the moment the data left your hands, whether or not you knew you were carrying it.
2. Malpractice
AI tools make mistakes. The 8am survey found that 43% of firms have no policy for AI output review. If a staff member generates work product with an unauthorized tool, and that work product goes out the door without a review checkpoint and contains an error, professional liability attaches to the firm. The employee's good intentions don't change the professional responsibility analysis.
Under ABA Formal Opinion 512, lawyers must maintain "reasonable understanding" of AI capabilities and limitations and verify AI-generated outputs. That standard applies even when the lawyer didn't know the work was AI-assisted — because supervision is the attorney's responsibility regardless of the tool used.
3. Regulatory
Multiple states now have mandatory or recommended disclosure requirements for AI use in professional contexts. Illinois SB 3601 (Professional AI Oversight Act) would require consumer disclosure when AI is used in professional services delivery. Washington state HB 1170 and HB 2225, passed March 13, 2026, add AI disclosure requirements. Standing orders in federal and state courts are beginning to require disclosure of AI use in filings.
A firm with no AI policy has no systematic way to meet any of these disclosure requirements — because it doesn't know which work was AI-assisted in the first place.
The Minimum Viable Policy
This doesn't require outside counsel or a technology committee. A small firm can document its minimum viable AI policy in an afternoon. Four components:
1. Approved tools list
Specify which AI tools are permitted for use on client work. At minimum: enterprise/business accounts only for any tool that processes client data. The difference between a personal ChatGPT account and a ChatGPT Team or Enterprise account is enormous from a data handling perspective — the latter has data processing agreements, no training on inputs by default, and terms your firm can review.
For most small professional services firms, the practical approved list is two or three tools: a licensed AI assistant (Claude for Work, ChatGPT Team, or Microsoft Copilot for M365 subscribers), plus any AI features built into existing software (Clio Duo, CoCounsel, Copilot in QuickBooks). Everything else is not approved for use on client matters.
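For firms that want the list to be more than a page in a binder, it can be encoded so a tool and account tier can be checked against it mechanically. A minimal sketch in Python follows; the register structure and the check_tool helper are illustrative assumptions, not any vendor's API, though the tool names mirror the examples above.

```python
# Illustrative approved-tools register. The tool names mirror the article's
# examples; the data structure and helper are assumptions about how a firm
# might record its own list, not a standard or a vendor API.

APPROVED_TOOLS = {
    # tool name -> the only account tier approved for client work
    "ChatGPT": "Team",                  # personal/free accounts not approved
    "Claude": "Claude for Work",
    "Microsoft Copilot": "M365 business subscription",
    "Clio Duo": "firm license",
}

def check_tool(tool: str, account_tier: str) -> bool:
    """Return True only if the tool appears on the list at the approved tier."""
    required = APPROVED_TOOLS.get(tool)
    return required is not None and account_tier == required

# A personal ChatGPT account fails the check; a Team account passes.
assert not check_tool("ChatGPT", "personal")
assert check_tool("ChatGPT", "Team")
```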
2. Client data rule
Define what categories of client information may be processed through AI tools, and under what conditions. A simple version: AI tools may be used to process information that is (a) not privileged, (b) not subject to a client confidentiality agreement that restricts third-party processing, and (c) not subject to heightened sensitivity (health information, for example, or financial information not already covered by an applicable data handling agreement). For work outside those boundaries, human processing only.
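The test above is conjunctive: any one trigger takes the work out of AI processing. A minimal sketch of that logic follows; the field and function names are hypothetical, and the classification itself is a professional judgment call that no script can make.

```python
from dataclasses import dataclass

# Hypothetical encoding of the client data rule. Classifying a document is a
# professional judgment; this only shows that the rule is a three-part
# conjunctive test in which any single trigger blocks AI use.

@dataclass
class ClientData:
    privileged: bool                 # (a) privileged material
    nda_restricts_processing: bool   # (b) confidentiality terms bar third parties
    special_sensitivity: bool        # (c) health data, sensitive financials, etc.

def may_process_with_ai(data: ClientData) -> bool:
    """AI processing is permitted only when all three conditions are clear."""
    return not (data.privileged
                or data.nda_restricts_processing
                or data.special_sensitivity)

# Routine, non-privileged notes pass; a privileged memo does not.
assert may_process_with_ai(ClientData(False, False, False))
assert not may_process_with_ai(ClientData(True, False, False))
```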
3. Output review requirement
Every AI-generated work product — drafts, summaries, analysis, research — must be reviewed by a licensed professional before delivery to a client. Full stop. This isn't an optional quality step. It's the standard of practice, and it's required under ABA Opinion 512 for attorneys.
Document this as a firm rule. It solves two problems simultaneously: professional responsibility and malpractice defense.
4. Client disclosure statement
Decide how your firm will answer the question "do you use AI?" before a client asks it. The answer should be affirmative, specific, and framed around what it means for the client. Not: "We use AI tools in our practice where appropriate." Instead: "Yes — we use approved AI tools to help us work more efficiently. Every output is reviewed by a licensed professional before it reaches you. We don't use AI where privilege or confidentiality requires special handling, and we're happy to discuss our specific process for your matters."
The firms that have this answer ready win the client trust conversation. The firms that stumble over it lose it.
Making the Policy Real
A policy document no one reads is not a policy. Three implementation steps:
Train once. Hold a 30-minute team meeting. Walk through the approved tools list and the client data rule. Show the difference between a personal account and a business account for the tools you've approved. Answer questions. Document attendance.
Add it to onboarding. Every new employee reads the AI policy and signs acknowledgment before starting client-facing work. This is a one-page addition to your existing onboarding process.
Add a disclosure line to engagement letters. One sentence: "We may use approved AI tools to assist in service delivery. All AI-assisted work product is reviewed by a licensed professional before delivery to you." This is the proactive disclosure that meets emerging disclosure requirements and builds client trust simultaneously.
None of this requires a technology budget. It requires 90 minutes and a decision.
This week: Check whether your firm has an approved AI tools list. If you don't, write one — it can be a single page. If you're unsure what tools your staff is using, ask. The answer will tell you how large your shadow AI gap actually is.
Related Reading
- ABA Opinion 512: What Your Engagement Letter Needs Now
- The 4th Circuit Sanctions Ruling: Federal Courts Are Checking AI Filings
- AI Governance Gap: 83% of Organizations Use AI, 25% Have Frameworks
- AI Liability Is Now an Insurance Question — What Your Carrier Is About to Start Asking
- Your Firm Is in the 83%. Here's the Governance Framework to Get Into the 25%.
- Your Staff Is Already Using AI With Client Data. Here's What to Do About It.
- What Will AI Compliance Cost Your Firm? The First Real Numbers Are In
- Your AI Policy Can Fit on One Page — Here's What Goes in It
- 55% of Employers Who Cut Staff for AI Regret It — What That Means for Your Firm
- 50% of Your Staff Is Using AI You Didn't Approve — Here's the 3-Part Policy That Fixes It
- AI Staff Adoption Playbook for Firms (2026)
Frequently Asked Questions
What is shadow AI in a professional services firm?
Shadow AI is any use of AI tools by employees on client work that occurs outside the firm's knowledge, policy, or oversight. A paralegal who drafts a client memo in ChatGPT without authorization. A junior accountant who summarizes a client's financial data in Claude. A consultant who uses a free AI tool to analyze a client's strategic documents. In each case, the employee believes they're being efficient. The firm doesn't know it's happening. The client's confidential data has potentially been processed by a third-party AI system under terms the firm has never reviewed. That's shadow AI.
Why does shadow AI create liability for a professional services firm?
Three distinct exposure tracks. First, data privacy: consumer AI tools (free ChatGPT, free Claude, personal accounts) often train on user inputs by default. If a staff member processes client data through a consumer account, that data may have been used to train an AI model — a potential breach of confidentiality and, depending on the data, a violation of attorney-client privilege or CPA-client confidentiality. Second, malpractice: if a staff member uses an unauthorized AI tool to produce work product that turns out to be wrong, and the firm delivers that work product to a client, professional liability attaches to the firm — not just the employee. Third, regulatory: ABA Opinion 512 requires lawyers to maintain reasonable understanding of AI capabilities and verify outputs. State bar rules may require disclosure. If a firm has no policy, it has no way to verify that ABA 512 is being met.
How do I know if my staff is using AI tools I haven't approved?
Assume they are. The Wolters Kluwer 2026 Future Ready Lawyer Survey found that 92% of legal professionals personally use AI tools. The 8am Legal Industry Report (March 2026) found that 43% of firms have no AI policy. The arithmetic: if 92% of your staff use AI tools personally and your firm has no policy, a meaningful share of that personal use is crossing into work. The question is not whether it's happening — it's how much, with what data, and with what risk exposure.
What's the minimum viable AI policy for a small professional services firm?
Four components: (1) An approved tools list — which AI tools are permitted for use on client work, and under what conditions (business accounts only, not consumer accounts). (2) A client data rule — what categories of client information may and may not be processed through AI tools. Even approved tools should have limits. (3) An output review requirement — all AI-generated content must be reviewed by a licensed professional before delivery to a client. No exceptions. (4) A disclosure statement — how the firm will respond if clients ask about AI use. These four components can be documented in a single two-page policy. It doesn't require outside counsel. It requires an hour and a decision.
Should I tell clients my firm uses AI?
Proactively, yes — and frame it positively. Clients who learn about your AI use from you will see it as transparency and competence. Clients who discover it later, especially if something went wrong, will see it as concealment. The professional services firms winning client trust in 2026 are the ones treating AI use as a differentiator, not a secret. The Wolters Kluwer data found that firms with formal AI governance report 6-20% revenue gains. The governance isn't just about risk management — it's a competitive positioning signal to clients.