Your Staff Is Already Using AI With Client Data. Here's What to Do About It.

Published April 9, 2026 · By The Crossing Report



Summary

79% of legal professionals now use AI tools. Only 30% of law firms have a formal AI policy in place. That gap means your staff — right now — may be pasting client financials, engagement letters, settlement agreements, and tax returns into consumer AI tools that were not designed to hold that data.

The bar associations and CPA boards have noticed. They've made clear that it's your liability, not the software vendor's.

The answer is not a blanket ban. Bans drive the problem underground — usage goes on invisibly, without even the pretense of caution. The answer is a simple, enforceable policy that takes about two hours to build. This article walks through exactly how to do it.


Shadow AI Is Already Inside Your Firm

Over 70% of employees admit to using unapproved AI tools at work. IBM's 2025 Cost of a Data Breach Report puts the average cost of an AI-associated breach at more than $650,000.

The failure mode is mundane, not dramatic. A junior associate pastes a client's prior-year tax return into a free ChatGPT session to "summarize the key numbers." A paralegal drops a settlement agreement into Claude to "draft a follow-up letter." A bookkeeper uploads a client's bank statements to an AI tool they found in a LinkedIn thread.

Those free-tier tools — unless accessed through enterprise accounts with specific data handling agreements — may use that input to train future models. Most employees don't know this. Most firm owners don't know it's happening at all.

The consequence: If a client's data is exposed through an unauthorized AI tool, you own that incident professionally, ethically, and legally. "I didn't know my staff was using it" is not a defense under professional ethics rules.


What the Professional Bodies Have Said

The guidance is no longer theoretical.

ABA Formal Opinion 512 (July 2024) — the ABA's first ethics guidance on generative AI — establishes that lawyers using AI must uphold competence, confidentiality, communication, and candor obligations under the Model Rules. The critical finding: boilerplate consent language in engagement letters is not adequate for using client data in AI tools. Attorneys need specific informed consent. The opinion also warns that multiple lawyers at the same firm using the same AI tool could result in inadvertent cross-client data exposure.

Texas State Bar Ethics Opinion No. 705 (February 2025) reinforces that lawyers "must be extremely cautious about inputting confidential information into AI tools."

The Journal of Accountancy (February 2026) specifically flagged shadow AI — unauthorized consumer tools processing client financials — as one of the top 15 risks CPAs face. The AICPA and CPA.com have both issued guidance, and the Colorado CPA Society published a January 2026 piece titled "Responsibly Navigating Data and Artificial Intelligence in Accounting."

New York now treats the duty of competence (RPC 1.1) as requiring a working knowledge of AI risks, and mandatory cybersecurity CLE is a prerequisite for NY attorneys' biennial registration.

If you're in a regulated profession, this is binding professional guidance, not optional reading.


The Three-Tier Classification System

Firms that handle this well don't build a 50-page AI policy. They build a three-tier classification:

Green — Approved for Client Work

Enterprise-licensed AI tools with explicit data handling agreements, no training on your inputs, and contractual confidentiality protections.

Examples in 2026:

  • Claude for Work (Anthropic enterprise) — inputs not used for training, enterprise data isolation
  • Microsoft 365 Copilot — covered by Microsoft's enterprise data handling terms
  • Clio Manage AI — included in Clio's enterprise terms on appropriate plans
  • Karbon AI — covered under Karbon's enterprise data processing agreement
  • Harvey — enterprise legal AI with explicit confidentiality protections
  • TaxDome AI — covered under TaxDome enterprise terms

Your threshold for this tier: Does the vendor have a signed DPA that explicitly prohibits using your inputs for model training and commits to enterprise-grade data isolation? If yes, approved. If no, it belongs in Permitted or Prohibited.

Yellow — Permitted for Internal Use Only

Consumer AI tools that are useful but not cleared for client data. You can use ChatGPT to draft a firm blog post, brainstorm marketing language, or summarize a public article. You cannot use it to draft a client deliverable from their documents.

Red — Prohibited

Free-tier tools with unclear data policies, browser extensions that capture page content, and any tool you haven't vetted. Not because they're dangerous per se — but because you can't verify what they do with input data.


The Two-Hour AI Data Policy

Most small firms don't need an outside consultant to write their AI data policy. They need a clear framework and two focused hours.

Part 1: Inventory (30 minutes)

Before you can write a policy, you need to know what AI tools your team is actually using. Don't guess — ask.

Send a short anonymous survey (5 questions, Google Form):

  1. What AI tools do you use for work tasks? (List them.)
  2. What types of work tasks do you use them for?
  3. Have you ever input client names, documents, or financial information into an AI tool?
  4. Do you know whether those tools store or use your inputs?
  5. What would help you use AI more effectively in your work?

The survey serves two purposes: it gives you actual data on your exposure, and it signals to your team that you're thinking about this carefully — not reactively banning everything.

Part 2: Build the Three-Tier List (45 minutes)

Using the inventory results, build your approved/permitted/prohibited list. Start with the tools your team actually uses, then add any tools you're evaluating.

If you're unsure about a tool you're already using: email the vendor and ask directly. "Does your product use user inputs to train AI models? What is your data processing agreement? Can you provide a DPA for our firm?" Their response tells you exactly where they belong on your list.

Part 3: Write the Policy Document (30 minutes)

Write it as a one-page memo, not a legal document. The goal is clarity.

Sample structure:

[Firm Name] AI Use Policy — Effective [Date]

Our firm uses AI tools to improve efficiency. To protect client confidentiality and meet our professional obligations, we follow these rules:

Before using any AI tool for work, ask: does this tool appear on the Approved list?

Approved tools (client work permitted): [Your list] These tools have signed agreements about how client data is handled. You may use them for client-related work. Always review AI output before using it.

Permitted tools (internal use only, no client data): [Your list] You may use these tools for internal tasks (writing, brainstorming, personal research). Never input client names, documents, financial data, or any matter-specific information.

Prohibited tools: Any tool not on the Approved or Permitted list.

Questions? Ask [Name]. Before using a new AI tool for anything work-related, run it by [Name] first.

The most important element is the designated person staff can ask. Without a clear contact, people default to doing nothing or using their judgment — which is how you end up with the shadow AI problem in the first place.

Part 4: Roll It Out (15 minutes)

  1. A brief team meeting (15 minutes): explain why this matters, walk through the three tiers, answer questions. Make clear that the goal is enabling AI use safely — not restricting it. The most important thing you say: "If you're unsure about a tool, ask [Name] before you use it. There's no penalty for asking."

  2. A written record: send the policy as an email or post it in your team communication tool. The record that it was communicated matters.

  3. A quarterly check-in: the AI tools landscape changes fast. Review your approved/prohibited list every three months. New tools appear; existing tools update their policies.


The Disclosure Protocol

One additional element the bar associations are starting to require: disclosure when AI is used in client work.

ABA Formal Opinion 512 requires informed client consent — not blanket engagement letter boilerplate — for using client data in AI tools. The practical approach for small firms: build one paragraph into your standard engagement letter and one standard disclosure line for AI-assisted deliverables.

Sample engagement letter paragraph:

"We use artificial intelligence tools in our practice to improve efficiency and quality. Where AI tools are used in preparing deliverables for your matter, they are reviewed and verified by a licensed [attorney/accountant/consultant] before delivery. All AI tools used have data handling agreements in place that protect the confidentiality of your information. If you have questions about our AI use or prefer that AI tools not be used in your matter, please discuss this with us."

Sample deliverable disclosure line:

"This [document/analysis/report] was prepared with AI assistance and reviewed by [Professional Name]."

Most clients will not object. The clients who care about this will appreciate that you told them. And you've satisfied the disclosure obligation cleanly.


The Action This Week

Run the 5-question survey. Before you build any policy, you need to know what your team is actually using. The survey takes 10 minutes to create and 5 minutes for each team member to complete — and it gives you a real picture of your current exposure.

Send it this week. By next week, you'll have the data to build your approved/prohibited list — and you'll probably find that your team is already using 4–6 AI tools you didn't know about.


The Crossing Report publishes weekly AI adoption intelligence for accounting, law, and consulting firms.

Frequently Asked Questions

What is shadow AI and why does it matter for professional services firms?

Shadow AI refers to AI tool usage that happens outside of firm visibility, approval, or policy. Over 70% of employees admit to using unapproved AI tools at work. For professional services firms, the risk is specific: staff using free-tier consumer AI tools (ChatGPT free, Claude.ai free, etc.) may be inputting client names, documents, financial data, and settlement agreements into platforms that haven't signed data protection agreements with your firm. IBM's 2025 Cost of a Data Breach Report puts the average cost of an AI-associated breach at more than $650,000. "I didn't know my staff was using it" is not a defense under professional ethics rules.

What AI tools are safe to use with client data?

Safe tools for professional client work are those with signed Data Processing Agreements (DPAs) or Business Associate Agreements (BAAs) that explicitly prohibit using your inputs for model training and commit to enterprise-grade data isolation. In 2026, these commonly include: Claude for Work (Anthropic enterprise), Microsoft 365 Copilot, Clio Manage AI (on appropriate plan), Karbon AI, Harvey, and TaxDome AI. Consumer/free-tier versions of the same tools may not meet this bar.

How do I build an AI policy for my firm without hiring a consultant?

A two-hour process works for most small firms: (1) 30 minutes to survey your team on what AI tools they actually use (anonymous 5-question survey), (2) 45 minutes to build a three-tier classification — approved for client work, permitted for internal use only, prohibited — based on whether each tool has a signed DPA, (3) 30 minutes to write a one-page policy memo using the three-tier framework, (4) 15 minutes to roll it out in a brief team meeting. The key element is naming a designated person staff can ask before using a new tool.

What do the ethics rules say about AI and client data confidentiality?

ABA Formal Opinion 512 (July 2024) establishes that lawyers using AI must uphold competence, confidentiality, communication, and candor obligations. It warns that boilerplate consent language is not adequate — attorneys need specific informed consent for using client data in AI tools. Texas Opinion 705 (February 2025) reinforces that lawyers must be "extremely cautious" about inputting confidential information into AI tools. The Journal of Accountancy (February 2026) flagged shadow AI as one of the top 15 risks CPAs face. New York now treats AI risk awareness as part of the duty of competence.
