Your AI Policy Can Fit on One Page — Here's What Goes in It

Published February 19, 2026 · By The Crossing Report


Summary

Most small professional services firms are using AI. Most have no written policy governing it. That gap is a professional liability exposure — not a best-practices aspiration. Here's the minimum viable AI policy your firm can write in an afternoon: four specific elements that fit on a single page and close the most urgent risks.


The Gap in Numbers

The 2026 SMB AI Workplace Study from business.com found that 68% of small businesses now use AI tools in their operations. The 8am Legal 2026 report puts AI use in law specifically at 70% — with only 9% having a policy that actually covers it. The Compliance Week/konaAI survey confirms the same pattern across sectors: 83% of organizations use AI, 25% have governance in place.

That means roughly 3 out of 4 professional services firms using AI are doing so without a documented framework for who can use which tools, with whose data, under what review requirement, and what happens if something goes wrong.

For most industries, this is an operational gap. For professional services firms, it's a liability exposure.


Why This Is Different for Your Firm

When a paralegal at a law firm uses an unapproved AI tool to draft a response letter using details from a client's matter — and the output contains a material error — the bar complaint is filed against the supervising attorney, not the paralegal. The standard is not "did you approve this tool?" It's "did you maintain adequate supervision of the work product delivered to clients?"

When a staff accountant at a CPA firm uploads client financial statements to a consumer AI chatbot to generate a memo, and that chatbot's terms of service allow the provider to use inputs for model training — the firm may have violated client confidentiality obligations, regardless of whether anyone outside the firm ever saw the data.

When a recruiter at a staffing firm uses an AI tool to rank candidates — even a well-intentioned one found on Product Hunt — the tool's output may qualify as a consumer report under the FCRA, triggering disclosure, access, and dispute-rights requirements the firm has not met.

Professional services firms operate under confidentiality obligations, professional responsibility rules, and in some cases federal and state regulations that make AI governance a compliance matter, not just an internal best practice.

A written policy doesn't eliminate risk. It demonstrates that the firm exercised reasonable oversight — which is the relevant standard in a bar complaint, E&O claim, or regulatory audit.


The Four Elements

The minimum viable AI policy has four components. No compliance officer required. No outside counsel needed. An owner or managing partner can draft this in two to three hours.


Element 1: Approved Tools List

A short, explicit list of the AI tools staff are permitted to use in connection with client work — including tools that may be used with client data.

What goes here:

  • Name each approved tool: "Microsoft Copilot (M365 subscription)," "Clio Manage AI features," "August AI (law firms only)," "Fathom meeting notes," etc.
  • Specify the scope: "for drafting and document review," "for meeting summaries," "for client-facing communication only after partner review"
  • Add one explicit prohibition: "All other AI tools are prohibited for use with client information unless approved in writing by [partner/owner name]"

Why this matters: Your staff is currently using AI tools you don't know about. The approved tools list is not about restricting productivity — it's about knowing which tools are in contact with client data so you can audit, monitor, and control the quality of the output.

The tools not on the list are not "banned." They're simply not approved for client data. Staff can use any tool they want for their own purposes. The policy governs what touches client information.


Element 2: Client Data Handling Rule

A single clear statement about what "client data" means and what can't be done with it using unapproved tools.

Sample language:

"Client data includes all documents, financial records, communications, personally identifiable information, and matter details related to clients of [Firm Name]. Client data may not be entered into AI tools that are not on the Approved Tools List. If you are unsure whether a tool has been approved, do not use it with client data — ask [partner/owner name] first."

What counts as client data varies by firm type:

  • Law firms: Client matter descriptions, case documents, correspondence, legal research on a specific matter, identifiable client information
  • Accounting firms: Tax returns, financial statements, bookkeeping records, payroll data, identifiable financial information
  • Consulting firms: Project documents containing client confidential information, competitive intelligence shared by clients, client operational data
  • Staffing firms: Candidate personal information, employer client data, compensation details, background check-adjacent information
  • Marketing agencies: Client brand materials, campaign performance data, proprietary strategy documents, client contact lists

The language above covers all of these. Make the definition specific to your firm's actual client work.


Element 3: Output Review Requirement

A statement that AI-generated content must be reviewed by a licensed professional or qualified staff member before it reaches clients.

Sample language:

"Any content, document, analysis, or communication generated or substantially assisted by AI — whether on an approved tool or otherwise — must be reviewed for accuracy, completeness, and appropriateness by [a licensed attorney / a CPA / a senior staff member] before it is sent to clients, filed with a court, submitted to a regulator, or included in client deliverables. The reviewer is responsible for the accuracy of the final work product."

The compliance basis for law firms: ABA Formal Opinion 512 requires competent oversight of AI-generated work product. "The AI said it" is not a defense against a bar complaint. The review requirement in your policy is how you document compliance with that standard.

The compliance basis for accounting firms: AICPA quality management standards apply the same logic. You remain professionally responsible for the accuracy of work product delivered under your firm's name, regardless of how it was produced.

Why you need the last sentence: "The reviewer is responsible for the accuracy of the final work product" closes the gap. Without it, staff may believe that reviewing AI output is optional if they "trust" the tool. With it, the professional accountability is explicit.


Element 4: Incident Reporting Line

One sentence. Who do staff contact if they accidentally use an unapproved tool with client data?

Sample language:

"If you accidentally enter client data into an unapproved AI tool, notify [partner/owner name] at [contact info] immediately. Do not attempt to resolve the situation yourself."

Why this matters: Incidents happen. A staff member who panics and says nothing about accidentally uploading client documents to the wrong AI tool creates a much worse liability situation than one who reports it immediately and lets you assess whether there's a notification obligation.

Some state data privacy laws require prompt notification when client data is improperly disclosed to a third party. A consumer AI tool with terms that allow use of inputs for training may qualify as such a disclosure. You cannot assess that obligation if you don't know the incident occurred.

The reporting line also sends a cultural signal: the policy is about protecting the firm and clients, not punishing staff for honest mistakes.


The One-Page Format

Take the four elements above. Write them in plain language specific to your firm. Add your firm name, the effective date, and the partner or owner responsible for policy questions.

That document is your AI policy.

It does not need to be long. It does not need legal review to be effective as a governance document. It needs to be written down and distributed to staff — which is the step 75% of professional services firms using AI have not taken.
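
One way to assemble the page, drawn from the sample language in the four elements above (bracketed items are placeholders to replace with your firm's specifics):

[Firm Name] AI Use Policy
Effective: [date] · Policy questions: [partner/owner name], [contact info]

  1. Approved Tools. Staff may use the following AI tools with client data: [tool name and scope], [tool name and scope]. All other AI tools are prohibited for use with client information unless approved in writing by [partner/owner name].

  2. Client Data. Client data includes all documents, financial records, communications, personally identifiable information, and matter details related to clients of [Firm Name]. Client data may not be entered into AI tools that are not on the Approved Tools List. If you are unsure, ask [partner/owner name] first.

  3. Output Review. Any content generated or substantially assisted by AI must be reviewed for accuracy, completeness, and appropriateness by [a licensed professional / qualified senior staff member] before it is sent to clients, filed with a court, submitted to a regulator, or included in client deliverables. The reviewer is responsible for the accuracy of the final work product.

  4. Incident Reporting. If you accidentally enter client data into an unapproved AI tool, notify [partner/owner name] at [contact info] immediately. Do not attempt to resolve the situation yourself.

Received and read: ______________________  Date: ____________

Adjust the headings and wording however you like. The structure matters less than the four commitments.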


Rolling It Out

A policy that lives in a folder is less useful than one staff have actually read. Three steps:

  1. Email it to every staff member with a brief note: "We've written down how we handle AI tools at the firm. Please read this and confirm you've received it. Questions go to [name]." Ask for a reply confirmation. That's your distribution record.

  2. Add it to your onboarding materials. Every new hire should receive it on Day 1, alongside the client confidentiality agreement.

  3. Review it every six months. The AI tools landscape is changing fast. An approved tools list written in March 2026 may need updates by September. Set a calendar reminder to review and update it.


The Governance Gap Is Closing — Slowly

The Thomson Reuters 2026 AI in Professional Services Report found that 40% of professional services firms now report organization-level AI adoption. Among those, only 60% have a governance policy in place — meaning four in ten firms that actively use AI are still operating without documented governance.

The firms that close this gap first aren't ahead because they have better AI tools. They're ahead because when something goes wrong — an AI error, a data incident, a client complaint about an AI-generated document — they have documentation showing they exercised reasonable professional oversight.

That documentation starts with four elements on one page.


The Action Item This Week

Draft the policy. Schedule two hours. Use the four elements above as your template. Substitute your firm's specific language, tools, and contact names.

Send it to staff before the end of the week.

That's the work.


The Crossing Report helps professional services firm owners navigate the AI transition — with specific, actionable guidance for accounting, law, consulting, staffing, and marketing agency owners. Subscribe here for weekly field reports on what's changing and what to do next.


Frequently Asked Questions

Why does a professional services firm need a written AI policy?

When a paralegal uses an unapproved AI tool with client documents, the professional liability is the firm's — not the employee's. When an accountant uses ChatGPT to draft a tax memo and the output is wrong, the E&O claim lands on the partner who signed off. A written AI policy does three things: (1) defines which tools staff can use with client data, closing the shadow AI liability gap; (2) sets a review requirement before any AI-generated content reaches clients; (3) creates documentation that shows the firm exercised reasonable oversight — important in a bar complaint, malpractice defense, or regulatory audit. The policy is not bureaucracy. It's your professional liability defense in written form.

What are the four elements of a minimum viable AI policy for a small professional services firm?

The four required elements are: (1) Approved tools list — a short list of AI tools staff are permitted to use with client data, and a clear prohibition on all others for client work; (2) Client data handling rule — a specific statement that client data (documents, financial data, communications, identifiable personal information) may not be entered into unapproved AI tools; (3) Output review requirement — a statement that AI-generated content must be reviewed by a licensed professional or qualified staff member before it is sent to clients, filed with a court, submitted to a regulator, or included in deliverables; (4) Incident reporting line — a single sentence telling staff who to notify if they accidentally use an unapproved tool with client data. Those four elements fit on a single page and can be drafted in an afternoon.

How many small businesses are using AI without a written policy?

Multiple 2026 surveys point to the same gap: the 2026 SMB AI Workplace Study (business.com) found 68% of small businesses use AI, but most have no documented governance. The 8am Legal 2026 report puts the gap in the legal profession specifically at 70% AI use versus 9% with a policy that actually covers it. The Compliance Week/konaAI survey found 83% of organizations use AI, but only 25% have implemented a strong governance framework. The pattern is consistent across sectors: most professional services firms fall within the 68-83% using AI, and most are not among the 25% with governance in place.

What is 'shadow AI' and why is it a liability risk for professional services firms?

Shadow AI refers to AI tools that staff use for work without the firm's knowledge or approval — typically tools they use personally (ChatGPT personal account, Claude.ai free tier, consumer apps with AI features) that they've started applying to client work. Shadow AI is a liability risk because: (1) most consumer AI tools have terms of service that allow the provider to use input data for training purposes — if a staff member enters client financial statements or a client's legal matter description into such a tool, the firm may have violated confidentiality obligations; (2) the firm cannot audit, monitor, or control the quality of outputs from tools it doesn't know are in use; (3) if an error from a shadow AI tool causes client harm, 'we didn't know our paralegal was using it' is not a viable defense — it's evidence of inadequate supervision. A written AI policy with an approved tools list is the primary mechanism for closing the shadow AI gap.

Does a law firm need a different AI policy than an accounting firm?

The four elements are the same. The specifics differ. For a law firm, the approved tools list should address attorney-client privilege (most AI tools hosted in the cloud are not protected; privilege may not attach to communications entered into third-party AI systems), and the client data prohibition should specifically cover client matter information, documents, and case details. For an accounting firm, the prohibition should specifically cover client financial data, tax documents, and personally identifiable financial information (which may also trigger state data privacy requirements). For staffing firms, the prohibition should cover candidate personal data and any AI tools used in employment decisions (which may trigger Colorado AI Act, FCRA, or state hiring AI rules). In all cases, the structure is the same — the specific language reflects what client data your firm actually works with.
