92% of Lawyers Use AI. 43% of Small Firms Have No AI Policy. Here's the 30-Minute Fix.
Published March 17, 2026 · By The Crossing Report
The numbers are stark. According to Wolters Kluwer's Future Ready Lawyer 2026 report, 92% of legal professionals now use at least one AI tool in their work. The Clio 2026 Legal Trends Report found that 60% of mid-sized law firms have formal AI governance policies in place. The 8am Legal Industry Report 2026 found that 43% of small law firms have no AI governance policy at all, and no plans to create one.
Do the math. Most small law firms have AI running in their practices right now without rules about how it can be used, what data can go into it, or who reviews what it produces.
That gap is no longer just a best practice question. It is a bar complaint waiting to happen.
Why the Clock Is Running
The moment you have an attorney using ChatGPT to draft a client letter, an associate running a contract through Claude to flag issues, or a paralegal summarizing deposition transcripts with any AI tool, you have a governance obligation. ABA Formal Opinion 512 (2024) makes clear that competent use of AI is now a professional responsibility standard. "Competent" means you understand what the tool does, how it handles client data, and how you verify its outputs. "Incompetent" means you handed it work without any of that, and you're on the hook for the results.
Three specific risks close fast once AI is in use without policy:
1. Confidentiality breach under Model Rule 1.6. If client information — names, matter details, privileged communications — goes into a public AI tool (ChatGPT, Gemini in a personal account, Claude.ai), that's potential exposure. Public consumer tools may use inputs for model training. They don't offer the data processing protections that matter for client confidentiality. One attorney taking a shortcut with client PII in a consumer tool creates a breach your policy should have prevented.
2. Supervision liability under Rules 5.1 and 5.3. As the supervising attorney or firm owner, you're responsible for the work product of the attorneys and staff you supervise — including AI-assisted work product you never knew they were generating. No policy means no approval process, no tool list, and no mandatory human review requirement. That's not a defense — that's an aggravating factor.
3. Disclosure obligation failures. More than 20 federal districts now have standing orders requiring disclosure of AI use in court filings. Several state bars have issued guidance requiring disclosure. Attorneys who don't know their firm's AI use can't make the required disclosures. The sanctions are accumulating — and "I didn't know my associate was using AI" is not working as a defense.
The mid-sized firms are ahead of you here. They built governance frameworks because they had to — the risk was too visible to ignore. Small firms face the same legal exposure with fewer resources and typically no one whose job it is to manage it.
Here's how to fix it in 30 minutes.
The 30-Minute Governance Fix
You don't need a 40-page AI policy. You need a one-page policy that attorneys actually follow, covering four things.
Step 1: Approve a Tool List (10 minutes)
Create a short list of approved AI tools and a clear prohibition on unapproved tools with client data.
Approved category: Legal-specific tools with Business Associate Agreements or Data Processing Agreements. These include Clio Duo, Westlaw AI, CoCounsel, Spellbook, DescrybeLM, and Litera Lito, among others. These tools contractually commit to keeping your client data in secure environments and don't use your inputs for training.
Prohibited with client data: Public consumer AI tools, meaning ChatGPT in a personal account, Claude.ai without a Team, Enterprise, or API plan with data protections, and Gemini in a personal Google account. These tools are fine for general research, drafting templates with no client-specific information, or learning. They are not appropriate for anything touching client matter content.
The bright line: no client names, matter details, case facts, or privileged communications in public AI tools. Period. Write that line in the policy. Say it out loud to your team.
Step 2: Set a Mandatory Review Requirement (5 minutes)
Every piece of AI-generated work that reaches a client, a court, or opposing counsel must be reviewed by an attorney before it goes out. Not reviewed-and-rubber-stamped — reviewed for accuracy, appropriate legal reasoning, and any hallucinated citations or mischaracterized facts.
This is the "treat AI like a capable junior associate" rule. You wouldn't send out an untouched associate first draft. Same standard applies.
Write it this way: "Any AI-generated content included in client communications, court filings, contracts, or other legal work product must be reviewed, verified, and approved by a supervising attorney before delivery or filing."
Step 3: Handle Disclosure (5 minutes)
Check your jurisdiction's current AI disclosure requirements. If you practice in federal court, check each district's standing orders — more than 20 now require disclosure of AI use in filings. If you practice in state court, check your state bar's current guidance.
Write one line in your policy: "Attorneys are responsible for identifying and complying with all applicable AI disclosure requirements in each matter, jurisdiction, and court where they practice."
Post the relevant standing orders or bar guidance links somewhere your team can find them.
Step 4: Create the Incident Response Line (10 minutes)
What happens when someone uses an unapproved tool with client data? You need one sentence in the policy and one person to call.
"Any potential breach of this policy — including use of unapproved AI tools with client data — must be reported immediately to [name] for evaluation of our obligation to notify the client under Model Rule 1.4 and applicable state breach notification law."
That's it. Designate the person. Write the sentence. If you're a solo, you're the person.
What This Policy Does (and Doesn't Do)
This 30-minute fix closes the most acute exposure: confidentiality breaches from improper tool use, supervision failures for unapproved AI use, and disclosure obligation gaps.
It does not solve every AI governance question your firm will face over the next five years. It doesn't create a training program, establish a process for evaluating new AI tools, or address AI's role in billing or fee structure — those are legitimate next steps that take more time.
But 43% of small law firms have nothing. Getting from nothing to a one-page policy that your attorneys actually follow puts you in a materially better legal and ethical position before any incident occurs.
The mid-sized firms with 60% governance adoption didn't start with comprehensive frameworks. They started with a tool list and a mandatory review step. That's where you start too.
What to Do This Week
- Open a blank document. Your firm's AI policy is four paragraphs: approved tools, prohibited tools, mandatory review requirement, incident reporting. Write it today.
- Set the tool list. Spend 15 minutes identifying which AI tools your attorneys and staff actually use. Which are legal-specific with data protections? Which are public consumer tools? The list tells you where the exposure is.
- Send it to your team. One email. One policy document attached. Ask each attorney to confirm they've read it. That confirmation matters if something goes wrong.
- Check your disclosure obligations. Look up your district's standing orders and your state bar's current AI guidance. Link to both in the policy document. Update when they change.
The 92% who use AI are running ahead. The question is whether your governance is running with them or falling behind.
Sources: Wolters Kluwer Future Ready Lawyer 2026 Webinar Series — Scaling AI | Clio — AI Is Reshaping How Mid-Sized Law Firms Scale (2026) | 8am Legal Industry Report 2026 | ABA Formal Opinion 512 on Generative AI
Frequently Asked Questions
Does my small law firm actually need an AI policy in 2026?
Yes, and the data suggests you probably don't have one. The 8am Legal Industry Report 2026 found that 43% of small law firms have no AI governance policy and no plans to create one, even as 92% of legal professionals report using at least one AI tool (Wolters Kluwer Future Ready Lawyer 2026). If AI is running in your firm without a policy, bar complaints and client data incidents become significantly harder to defend. The question isn't whether you need one; it's how quickly you can put a functional one in place.
What's the minimum an AI policy for a small law firm needs to cover?
A functional small law firm AI policy needs four elements: (1) which AI tools are approved and which are prohibited; (2) what data can and cannot be entered into AI systems — specifically, no client PII or matter-specific content in public AI tools like ChatGPT; (3) a mandatory human review requirement before any AI-generated content reaches a client or court; and (4) how the firm will handle a disclosure obligation if a court or jurisdiction requires it. This doesn't need to be a 20-page document. A one-page policy that attorneys actually follow is more protective than a binder nobody reads.
What's the bar exposure risk if my firm uses AI without a policy?
The exposure has three vectors. First, data security: if client information is entered into a public AI tool (ChatGPT, Claude.ai, Gemini in a personal account) without data processing protections, that's a potential confidentiality breach under Model Rule 1.6. Second, competence: ABA Formal Opinion 512 establishes that competent use of AI is now a professional responsibility obligation — which means incompetent use is a bar complaint. Third, supervising attorney responsibility: if you have associates using AI tools without your oversight or approval, you may be responsible for their outputs under Model Rules 5.1 and 5.3.
Why do mid-sized firms have AI governance policies but small firms don't?
Scale and incentive structure. The Clio 2026 Legal Trends Report found 60% of mid-sized law firms have formal AI governance policies — more than double the rate at small firms. Mid-sized firms typically have a managing partner or operations director whose job includes risk management. In a 3-5 attorney small firm, policy creation lands on the owner's already-overloaded plate and keeps getting deferred. The risk gap is real, but so is the fix — it doesn't take a general counsel to draft a workable small firm AI policy.
What AI tools are safest for a small law firm to use in 2026?
The safest category is legal-specific AI tools with Business Associate Agreement (BAA) or Data Processing Agreement (DPA) provisions — Clio Duo, Westlaw AI, CoCounsel, Spellbook, DescrybeLM, and Litera Lito, among others. These tools contractually commit to keeping your client data within secure environments. The highest-risk category is public consumer AI tools (ChatGPT, Claude.ai, Gemini in personal accounts) used with client-specific information — these tools may use inputs for training and don't offer data protection commitments. A firm policy that draws this line clearly is the core of what you need.