Your Firm Is in the 83%. Here's the Governance Framework to Get Into the 25%.
Published March 3, 2026 · By The Crossing Report · 6 min read
A Compliance Week and konaAI survey of compliance officers published in early 2026 found a gap that should be uncomfortable reading for every professional services firm owner:
83% of organizations now use AI. Only 25% have implemented a strong governance framework.
The 58-point gap isn't unique to one industry. The same pattern shows up in legal (70% of legal professionals use AI, according to the 8am 2026 Legal Industry Report, but only 9% of firms have formal governance) and in accounting (19% daily AI use but far less formal policy, per AICPA/CIMA). The governance gap is industry-wide, and it's a leading indicator of where professional liability problems will surface in 2026 and 2027.
If your firm uses AI tools — and the data suggests you do, whether you track it formally or not — you are almost certainly in the 83%. The question is whether you're in the 25% with governance, or the 58% without.
Here's what the gap costs and what it takes to close it.
Summary
A 2026 Compliance Week/konaAI survey found 83% of organizations use AI, but only 25% have implemented a governance framework — a 58-point gap that creates direct professional liability exposure for firm owners. For professional services firms, an AI governance framework is not optional: it's the difference between defensible AI use and malpractice exposure. Here's the minimum viable version, built in an afternoon.
Why Governance Matters Differently for Professional Services
The Compliance Week framing is about organizational risk. For professional services firms, the stakes are more personal.
When a junior accountant uses an unapproved AI tool with a client's financial data and the output contains an error, professional liability attaches to the firm — and to the CPA who supervised (or failed to supervise) the work. The employee's good intentions don't change the professional responsibility analysis.
When a paralegal drafts a client brief using a consumer AI account without a firm policy, and that brief goes out the door with AI-hallucinated case citations, the attorney of record faces bar discipline — not just malpractice claims.
The Compliance Week research specifically flags "the compliance challenge of agentic systems" — AI that doesn't just generate but acts. As AI tools move from generating text to scheduling tasks, sending communications, and taking workflow steps on behalf of users, the governance requirements multiply. An AI that takes an action in your name without a human checkpoint is a governance failure in a professional context, not just a policy question.
The emerging agentic AI story from 2026 — Intapp Celeste, GPT-5.4 computer use, workflow automation tools in Clio and QuickBooks — makes this more urgent, not less.
What's Actually at Risk
Malpractice from unreviewed AI output. An AI tool that summarizes, analyzes, or drafts is useful. An AI tool whose output goes to a client without a licensed professional reviewing it is a malpractice incident waiting to happen. Without a governance framework requiring output review, you have no systematic way to ensure the checkpoint exists.
Data breach from unapproved tools. Consumer AI tools process data under consumer terms, and for many providers those terms allow inputs to be used for model training by default unless the user explicitly opts out. If a staff member processes client data through a personal ChatGPT account, that data may have been used to train OpenAI's models, a potential breach of client confidentiality regardless of intent.
Regulatory non-compliance from lack of policy. ABA Formal Opinion 512 requires lawyers to maintain reasonable understanding of AI and verify outputs. Illinois SB 3601 would require consumer disclosure for AI use in professional services. Multiple federal courts have standing orders requiring AI disclosure in filings. Washington state passed HB 1170 and HB 2225 in March 2026, adding disclosure requirements. Without a governance framework, you can't systematically meet any of these requirements.
Client trust loss from surprise disclosure. The most common and underrated risk: a client discovers your firm uses AI and experiences it as a surprise. Not because AI use is wrong — it isn't — but because it wasn't disclosed and discussed proactively. The governance conversation with clients, done well, builds trust. Done reactively (when something went wrong, or when a client asks a pointed question), it erodes it.
The Four-Component Minimum Viable Framework
You don't need a technology committee. You need four things:
1. Approved tools list
Write down which AI tools are permitted for use on client work. "Anything" is not an answer. The list should specify:
- Which tools are approved (typically: business/enterprise accounts for ChatGPT or Claude, Microsoft Copilot for M365 subscribers, plus AI features in your existing practice management or accounting software)
- Which tools are not approved for client data (consumer accounts, free tiers with unclear data terms, tools the firm has no contract or data processing agreement with)
- Whether approval is absolute or conditional (some tools may be approved for internal use but not for work involving client data)
For most 5-20 person firms, the approved list is 2-4 tools. It takes 30 minutes to write.
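Writing the list as data rather than prose also makes the default explicit: a tool that isn't on the list isn't approved. A minimal sketch in Python, where the tool names and conditions are illustrative examples, not recommendations:

```python
# Sketch: an approved-tools list as structured data, so "not on the list"
# becomes an enforceable default. Entries below are illustrative only.

APPROVED_TOOLS = {
    # tool: conditions under which it may be used
    "ChatGPT (business/enterprise account)": {"client_data": True},
    "Claude (business/enterprise account)": {"client_data": True},
    "Microsoft Copilot (M365 tenant)": {"client_data": True},
    # Example of conditional approval: internal use only, no client data
    "ChatGPT (personal/free account)": {"client_data": False},
}

def is_permitted(tool: str, involves_client_data: bool) -> bool:
    """Return True if the tool may be used for this kind of work."""
    entry = APPROVED_TOOLS.get(tool)
    if entry is None:
        return False  # not on the list means not approved
    if involves_client_data and not entry["client_data"]:
        return False  # approved for internal use, not for client data
    return True

assert not is_permitted("Some new AI app", involves_client_data=False)
assert not is_permitted("ChatGPT (personal/free account)", involves_client_data=True)
```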
2. Client data rule
Define what kinds of client information may be processed through AI tools, and under what conditions. A workable starting version: AI tools may process client information that is not subject to privilege, is not covered by a specific confidentiality obligation restricting third-party processing, and does not fall into a special category of sensitive data (health information, financial account details, or personally identifiable information not covered by a business associate agreement).
For work outside those boundaries: human processing only, or approval required from a senior partner/manager.
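The rule translates almost directly into a decision function, which can be useful for training or an intake checklist. A sketch under assumed categories; the flags and the fallback wording below are illustrative, not a statement of your firm's actual obligations:

```python
# Sketch: the client data rule as an explicit decision function.
# Categories and handling labels are assumptions for illustration.

from dataclasses import dataclass

@dataclass
class ClientData:
    privileged: bool              # subject to privilege
    restricted_by_contract: bool  # confidentiality clause bars third-party processing
    sensitive_category: bool      # health, financial accounts, PII without a BAA

def may_process_with_ai(data: ClientData) -> str:
    """Apply the firm's client data rule; return the required handling."""
    if data.privileged or data.restricted_by_contract or data.sensitive_category:
        return "human-only processing, or senior partner approval required"
    return "approved AI tools may be used"

print(may_process_with_ai(ClientData(False, False, True)))
# -> human-only processing, or senior partner approval required
```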
3. Output review requirement
Every AI-generated work product — draft documents, summaries, research, analysis — must be reviewed and validated by a licensed professional before delivery to a client. Document this as a firm rule. It is already best practice; making it explicit converts it from an individual habit to a firm obligation.
For agentic tools that take actions: require a human checkpoint before any AI-initiated action that affects client matters, sends client-facing communications, or creates records.
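For the agentic case, the checkpoint is a gate between "AI proposes" and "action executes." A minimal sketch of that pattern; the ProposedAction shape and console prompt are hypothetical stand-ins for whatever approval surface your tools actually provide:

```python
# Sketch: a human-checkpoint gate for agentic actions. No AI-initiated
# action that touches a client matter runs without explicit sign-off.

from dataclasses import dataclass
from typing import Callable

@dataclass
class ProposedAction:
    description: str    # e.g., "email draft engagement summary to client"
    client_facing: bool # sends a communication or affects a client matter

def gated_execute(action: ProposedAction, execute: Callable[[], None]) -> None:
    """Run an AI-proposed action only after a human approves it."""
    if action.client_facing:
        answer = input(f"AI proposes: {action.description!r}. Approve? [y/N] ")
        if answer.strip().lower() != "y":
            print("Action blocked and logged for review.")
            return
    execute()  # internal actions, or client-facing actions after approval
```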
4. Disclosure statement
Decide in advance how your firm responds when a client asks about AI use. The best answer is affirmative and specific: "We use approved AI tools in our practice to work more efficiently. Every output is reviewed by a licensed professional before it reaches you. We're happy to discuss our specific approach for your matters."
This answer, given proactively before clients ask, is a trust-builder. Given reactively after a problem, it's damage control. Write it before you need it.
Making It Stick
A governance document that lives in a folder is not governance. Three implementation steps that actually get followed:
Team meeting within the week. 30 minutes. Walk through the approved tools list and the client data rule. Show the difference between a personal and business account for the tools you've approved. Answer questions. This is also when you learn which tools your staff is already using — the answers will tell you where your shadow AI gaps are.
Onboarding integration. Add the AI policy to your standard new employee onboarding. One-page document, signed acknowledgment. This ensures every new hire knows the rules before working on client matters.
Engagement letter update. Add one sentence to your standard engagement letter about AI use. It can be as simple as: "We may use approved AI tools to assist in service delivery; all AI-assisted work product is reviewed by a licensed professional before delivery." This is proactive disclosure, not a legal liability shield — but it signals competence and transparency.
The 25% Advantage
The Compliance Week data shows that being in the 25% with governance isn't just risk management. The firms with governance are the ones able to use AI confidently and at scale — because they've resolved the questions that make other firms hesitant.
The Wolters Kluwer 2026 Future Ready Lawyer Survey found that firms with proper AI governance report revenue gains of 6-20%. The governance isn't just protecting downside. It's enabling the upside.
For a professional services firm owner: the governance investment is two hours of drafting and one team meeting. The return is the ability to use every AI tool on your approved list without hesitation — and to answer every client, regulator, or bar association question about AI use with confidence.
This week: Check whether your firm has an approved AI tools list and a client data rule. If you don't, write the first draft today. If you do, verify whether your staff knows it exists.
Related Reading
- Your Staff Is Using AI on Client Work Right Now — and Your Firm Has No Policy
- BigLaw Just Got AI-Certified — Here's the Small-Firm Version of That Credential
- What Will AI Compliance Cost Your Firm? The First Real Numbers Are In
- Your AI Policy Can Fit on One Page — Here's What Goes in It
- AI Regulation & Compliance for Professional Services Firms — Governance frameworks, compliance deadlines, and the regulatory landscape for small professional services firms
- The AI Adoption Gap — Why the gap between AI access and AI ROI is widening — and what the top-performing firms do differently
Frequently Asked Questions
What is AI governance for a professional services firm?
AI governance is the set of rules, policies, and oversight processes that determine how AI tools are used in your firm — which tools are permitted, under what conditions, for what kinds of work, with what human review requirements, and how you respond when something goes wrong. It doesn't require a technology department or a legal team. For a 5-20 person firm, AI governance is essentially a one-page policy document, a training session, and a review checkpoint in your existing workflows.
Why does the governance gap matter for professional services specifically?
Professional services firms operate under professional liability standards that don't apply to most industries. Lawyers have professional responsibility obligations. CPAs have independence and competency standards. When an employee uses an AI tool on client work without authorization, and that work contains an error the firm delivers to a client, the professional liability attaches to the firm — and to the licensed professional who supervised (or failed to supervise) the work. The governance gap isn't just an IT risk. It's a malpractice risk, a bar discipline risk, and a client trust risk.
What are the biggest AI governance risks for small firms right now?
Three categories in 2026: (1) Unapproved AI use on client data — staff using consumer AI tools (personal ChatGPT accounts, free Claude) that process client data under terms the firm never reviewed. (2) No output review requirement — AI-generated work product going to clients without a licensed professional reviewing it, creating malpractice exposure. (3) Agentic AI running unsupervised — AI tools that don't just generate but act (schedule tasks, send communications, take workflow steps) without a human checkpoint. This third category is emerging and becoming the most complex governance challenge.
What's the difference between an AI policy and AI governance?
An AI policy tells staff what they can and can't do. AI governance includes the policy, but also the processes that make the policy real: how you monitor compliance, how you respond to incidents, how you update the policy as tools change, and how you demonstrate to clients and regulators that you're meeting your obligations. For most small firms, the practical difference is: a policy is a document; governance is also making sure people read it, training staff, updating it when a new tool launches, and having an answer when a client asks 'how do you govern AI use?'
How long does it take to build a minimum viable AI governance framework?
For a 5-20 person firm: one afternoon to draft, one team meeting to implement. The minimum viable version has four components: approved tools list (30 minutes to write), client data handling rule (20 minutes), output review requirement (already built into any competent professional's workflow — you're just making it explicit), and a disclosure statement (15 minutes). Total: under two hours. The governance gap in most small firms is not complexity — it's that no one has put in the two hours.