Your Staff Is Already Using AI With Client Data. Here's What That Costs You.
Published: April 2026 | By: The Crossing Report
Your paralegal just sent a client's settlement agreement through a free AI tool to draft a follow-up letter. Your bookkeeper pasted a client's bank statements into a chatbot to summarize the numbers. Your junior associate dropped a prior-year tax return into a free ChatGPT session to "pull the key figures."
You didn't know. Neither did they — not really. They were solving a problem faster. But that's your liability.
79% of legal professionals now use AI tools. Only 30% of law firms have a formal AI policy in place. The same gap exists across accounting, consulting, and staffing firms — and the professional bodies have stopped treating it as a question worth debating.
That gap is not a hypothetical risk. It's your staff — right now — pasting client financials, engagement letters, settlement agreements, and tax returns into consumer AI tools that were not designed to hold that data. The bar associations and CPA boards have noticed. And they've made clear that it's your liability, not the software vendor's.
The answer is not a blanket ban. Bans drive the problem underground — usage goes on invisibly, without even the pretense of caution. The answer is a simple, enforceable policy that takes about two hours to build.
Shadow AI Is Already Inside Your Firm
Over 70% of employees admit to using unapproved AI tools at work. IBM's 2025 Cost of a Data Breach Report puts the average cost of an AI-associated breach at more than $650,000.
The failure mode is mundane, not dramatic. It looks exactly like the scenarios that opened this piece: a prior-year tax return pasted into a free ChatGPT session, a settlement agreement dropped into a chatbot to draft a letter, bank statements uploaded to an AI tool someone found in a LinkedIn thread.
Those free-tier tools — unless accessed through enterprise accounts with specific data handling agreements — may use that input to train future models. Most employees don't know this. Most firm owners don't know this is happening at all.
When I ran this exercise at my own agency — sending a five-question survey to see what tools my team was actually using — I found six AI tools I hadn't approved, three of which had been used with client-facing content. I had no idea.
The consequence for your firm: If a client's data is exposed through an unauthorized AI tool, you own that incident professionally, ethically, and legally. "I didn't know my staff was using it" is not a defense.
What the Professional Bodies Have Said
The guidance is no longer theoretical. Three developments you need to know:
ABA Formal Opinion 512 (July 2024) establishes that lawyers using AI must uphold competence, confidentiality, communication, and candor obligations under the Model Rules. The critical finding: boilerplate consent language in engagement letters is not adequate for using client data in AI tools. Attorneys need specific informed consent. The opinion also warns that multiple lawyers at the same firm using the same AI tool could result in inadvertent cross-client data exposure.
Texas State Bar Ethics Opinion No. 705 (February 2025) reinforces that lawyers "must be extremely cautious about inputting confidential information into AI tools" and cannot charge clients for time saved by AI.
Journal of Accountancy (February 2026) specifically flagged "shadow AI" — unauthorized consumer tools processing client financials — as one of the top 15 risks CPAs face. The Colorado Society of CPAs published a January 2026 piece titled Responsibly Navigating Data and Artificial Intelligence in Accounting. New York now treats the duty of competence (RPC 1.1) as requiring a working knowledge of AI risks.
For consulting, staffing, and marketing firms: GDPR, CCPA, and the NDAs in your client contracts create equivalent data handling obligations. An unauthorized AI tool exposing candidate data at a staffing firm, or client strategy at a consulting firm, carries the same legal and reputational weight. The absence of a bar rule doesn't mean the absence of liability.
The Three-Tier Classification
Firms that handle this well don't build a 50-page AI policy. They build a three-tier classification:
Green — Approved for client work: Enterprise-licensed AI tools with explicit data handling agreements, no training on your inputs, and contractual confidentiality protections.
Examples:
- Claude for Work (Anthropic enterprise) — inputs not used for training
- Microsoft 365 Copilot — covered by Microsoft's enterprise data handling terms
- Clio Manage AI — included in Clio's enterprise terms
- Karbon AI — covered under Karbon's enterprise data processing agreement
- Harvey — enterprise legal AI with explicit confidentiality protections
- Loxo AI (staffing firms) — enterprise recruitment CRM with no training on your inputs
- Bullhorn AI (staffing firms) — covered by Bullhorn's data processing agreement
- Jasper for Business (marketing agencies) — enterprise content AI with BAA availability
- HubSpot AI (marketing agencies) — covered under HubSpot's enterprise DPA
Yellow — Permitted for internal use only: Consumer AI tools that are useful but not cleared for client data. You can use ChatGPT to draft a firm blog post or brainstorm marketing language. You cannot use it to draft a client deliverable from their documents.
Red — Prohibited: Free-tier tools with unclear data policies, browser extensions that capture page content, and any tool you haven't vetted.
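If you want the classification to be enforceable rather than just readable, it helps to treat the tiers as data. Here is a minimal Python sketch of that idea; the tool names are illustrative, and anything not on a list defaults to Red:

```python
# Minimal sketch: the three-tier list as data, plus a default-deny lookup.
# Tool names are illustrative; substitute your own vetted lists.

TIERS = {
    "green": {"claude for work", "microsoft 365 copilot", "clio manage ai",
              "karbon ai", "harvey"},
    "yellow": {"chatgpt (consumer)", "claude.ai (consumer)"},
}

def classify(tool_name: str) -> str:
    """Return the tier for a tool; anything unlisted is red (prohibited)."""
    name = tool_name.strip().lower()
    for tier, tools in TIERS.items():
        if name in tools:
            return tier
    return "red"  # default-deny: unvetted tools are prohibited

for tool in ("Claude for Work", "ChatGPT (consumer)", "Random Browser Extension"):
    print(f"{tool}: {classify(tool)}")
```

The default-deny at the end is the whole point: a tool you haven't classified is prohibited, not merely unaddressed.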
The Two-Hour AI Data Policy
Most small firms don't need an outside consultant to write their AI data policy. Here's the template.
Part 1: Inventory (30 minutes)
Before you write a policy, find out what your team is actually using. Don't guess — ask.
Send a short survey (5 questions, Google Form, anonymous):
- What AI tools do you use for work tasks?
- What types of tasks do you use them for?
- Have you ever input client names, documents, or financial information into an AI tool?
- Do you know whether those tools store or use your inputs?
- What would help you use AI more effectively in your work?
The survey serves two purposes: it gives you actual data on your exposure, and it signals to your team that you're thinking about this carefully — not reactively banning everything.
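Tallying the responses takes minutes. A sketch, assuming you export the form responses to a CSV with a comma-separated "tools" column and a yes/no "client_data" column (both column names are mine, not Google's):

```python
# Sketch: summarize the survey export. Column names are hypothetical;
# rename them to match your actual form export.
import csv
from collections import Counter

tool_counts = Counter()
client_data_yes = 0
total = 0

with open("ai_survey_responses.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        total += 1
        # Count each tool a respondent lists, normalized to lowercase.
        tool_counts.update(
            t.strip().lower() for t in row["tools"].split(",") if t.strip()
        )
        if row["client_data"].strip().lower().startswith("y"):
            client_data_yes += 1

print(f"{total} responses; {client_data_yes} used AI tools with client data")
for tool, n in tool_counts.most_common():
    print(f"  {tool}: {n}")
```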
Part 2: Build the Approved/Permitted/Prohibited List (45 minutes)
Using the inventory results, classify each tool. Your threshold for the Approved tier: does the vendor have a signed Business Associate Agreement (BAA) or Data Processing Agreement (DPA) that explicitly prohibits using your inputs for model training?
If you're unsure about a tool you're already using, email the vendor and ask: "Does your product use user inputs to train AI models? What is your data processing agreement?" Their response tells you where the tool belongs on your list.
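The vendor's answers map directly onto the tiers. One reasonable way to encode the decision rule, with a third flag for whether you could determine the data policy at all (all field names are mine):

```python
# Sketch: turn vendor answers into a proposed tier. The rule mirrors the
# thresholds above; field names are hypothetical.

def propose_tier(has_dpa: bool, trains_on_inputs: bool, policy_is_clear: bool) -> str:
    if has_dpa and not trains_on_inputs:
        return "green"   # client work permitted
    if policy_is_clear:
        return "yellow"  # useful, but internal use only: no client data
    return "red"         # unclear data policy or unvetted: prohibited

# Enterprise tool with a signed DPA and no training on inputs:
print(propose_tier(True, False, True))    # green
# Consumer tool with a known policy but no DPA:
print(propose_tier(False, True, True))    # yellow
# Browser extension with no stated policy:
print(propose_tier(False, True, False))   # red
```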
Part 3: Write the Policy Document (30 minutes)
Write it as a one-page memo, not a legal document. The goal is clarity, not comprehensiveness.
Sample policy structure:
[Firm Name] AI Use Policy — Effective [Date]
Our firm uses AI tools to improve efficiency. To protect client confidentiality and meet our professional obligations, we follow these rules:
Before using any AI tool for work, ask: does this tool appear on the Approved or Permitted list below?
Approved tools (client work permitted): [Your list] These tools have signed agreements about how client data is handled. You may use them for client-related work. Always review AI output before using it.
Permitted tools (internal use only, no client data): [Your list] You may use these tools for internal tasks (writing, brainstorming, personal research). Never input client names, documents, financial data, or any matter-specific information.
Prohibited tools: Any tool not on the Approved or Permitted list.
Questions? Ask [Name]. Before using a new AI tool for anything work-related, run it by [Name] first.
The most important element is the designated person staff can ask. Without a clear contact, people default to doing nothing or using their judgment — which is how you end up with a shadow AI problem.
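If you keep the tier lists as data (as in the earlier classification sketch), you can generate the memo from them, so the policy document never drifts out of sync with the list. A minimal sketch; every value here is a placeholder:

```python
# Sketch: render the one-page memo from the tier lists. All values are
# placeholders; the wording mirrors the template above.
MEMO = """{firm} AI Use Policy (Effective {date})

Approved tools (client work permitted): {green}
Permitted tools (internal use only, no client data): {yellow}
Prohibited tools: any tool not listed above.

Questions? Ask {contact} before using a new AI tool for anything work-related.
"""

print(MEMO.format(
    firm="Example & Partners",                         # placeholder firm
    date="May 1, 2026",
    green="Claude for Work, Microsoft 365 Copilot",
    yellow="ChatGPT (consumer), Claude.ai (consumer)",
    contact="the office manager",
))
```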
Part 4: Roll It Out (15 minutes)
Brief team meeting (15 minutes): Explain why this matters, walk through the three tiers, answer questions. Make clear that the goal is enabling AI use safely — not restricting it. The most important thing you say: "If you're unsure about a tool, ask [Name] before you use it. There's no penalty for asking. There is a problem if you use something you're unsure about."
Written record: Send the policy as an email or post it in your team communication tool. The record that it was communicated matters.
Quarterly check-in: The AI tools landscape changes fast. Review your approved/prohibited list every three months.
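A calendar reminder works, but if the approved list already lives in a file, a few lines will flag anything overdue for review. A sketch, assuming a simple "tool,last_reviewed" CSV (the layout is mine):

```python
# Sketch: flag tools whose last review is more than ~90 days old.
# Assumes a CSV with "tool" and "last_reviewed" (ISO date) columns.
import csv
from datetime import date, timedelta

STALE_AFTER = timedelta(days=90)
today = date.today()

with open("ai_tool_register.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        reviewed = date.fromisoformat(row["last_reviewed"])
        if today - reviewed > STALE_AFTER:
            print(f"Re-review {row['tool']} (last reviewed {reviewed})")
```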
The Disclosure Protocol
ABA Formal Opinion 512 requires informed client consent — not blanket engagement letter boilerplate — for using client data in AI tools.
Sample engagement letter paragraph:
"We use artificial intelligence tools in our practice to improve efficiency and quality. Where AI tools are used in preparing deliverables for your matter, they are reviewed and verified by a licensed [attorney/accountant/consultant] before delivery. All AI tools used have data handling agreements in place that protect the confidentiality of your information. If you have questions about our AI use or prefer that AI tools not be used in your matter, please discuss this with us."
Sample deliverable disclosure line:
"This [document/analysis/report] was prepared with AI assistance and reviewed by [Professional Name]."
Most clients will not object. The clients who care about this will appreciate that you told them. And you've satisfied the disclosure obligation cleanly.
Where to Start
Run the 5-question survey. Before you build any policy, you need to know what your team is actually using. The survey takes 10 minutes to create, 5 minutes for each team member to complete, and gives you a real picture of your current exposure.
Send it this week. By next week, you'll have the data to build your approved/prohibited list — and you'll probably find that your team is already using 4–6 AI tools you didn't know about.
Related Reading
- AI Data Security for Law Firms and Professional Services — How to protect client data when using AI tools — policy templates and compliance guidance
- AI Policy Template for Professional Services Firms — A free AI policy template covering approved tools, prohibited uses, and client disclosure
The Crossing Report is published weekly for professional services firm owners navigating the AI transition. Subscribe here.
Frequently Asked Questions
What is shadow AI and why does it matter for professional services firms?
Shadow AI refers to AI tool usage that happens outside of firm visibility, approval, or policy. Over 70% of employees admit to using unapproved AI tools at work, and IBM's 2025 Cost of a Data Breach Report puts the average cost of an AI-associated breach at more than $650,000. The failure mode is mundane: a paralegal pastes a client's settlement agreement into a free AI tool, a bookkeeper uploads client bank statements to a chatbot. Those free-tier tools may use that input to train future models. If client data is exposed through an unauthorized AI tool, the firm owns that incident professionally, ethically, and legally — 'I didn't know my staff was using it' is not a defense.
What do the bar associations and AICPA say about AI and client data?
ABA Formal Opinion 512 (July 2024) established that lawyers using AI must uphold competence, confidentiality, communication, and candor obligations. Boilerplate consent language in engagement letters is not adequate for using client data in AI tools — attorneys need specific informed consent. Texas Ethics Opinion No. 705 (February 2025) reinforces that lawyers must be extremely cautious about inputting confidential information into AI tools. The AICPA's Journal of Accountancy flagged 'shadow AI' as one of the top 15 risks CPAs face. New York treats the duty of competence as requiring working knowledge of AI risks.
What AI tools are safe for client data in professional services?
The threshold for approved tools: does the vendor have a signed Business Associate Agreement (BAA) or equivalent Data Processing Agreement (DPA) that explicitly prohibits using your inputs for model training? Tools that qualify include Claude for Work (Anthropic enterprise), Microsoft 365 Copilot, Clio Manage AI, Karbon AI, Harvey, TaxDome AI, Loxo AI (staffing), Bullhorn AI (staffing), Jasper for Business (agencies), and HubSpot AI (agencies). Consumer free tiers of ChatGPT, Claude.ai, and similar tools should not be used with client data.
How do I build an AI data policy for my small firm?
A small firm AI data policy needs three components: (1) A tool inventory — survey your team to find out what they're actually using before you write anything. (2) A three-tier classification — Approved (client work permitted, has DPA), Permitted (internal use only, no client data), and Prohibited (not vetted). (3) A one-page policy document with a clear designated contact staff can ask before using a new tool. The whole process takes about two hours.
Do these AI data rules apply to consulting, staffing, and marketing firms?
Yes. Even if your firm isn't directly regulated by the ABA or AICPA, GDPR, CCPA, and the NDAs in your client contracts create equivalent data handling obligations. An unauthorized AI tool exposing candidate data at a staffing firm, or client strategy at a consulting firm, carries the same legal and reputational weight as it does for a law or accounting firm. The absence of a bar rule doesn't mean the absence of liability.