How to Write an AI Policy for a Professional Services Firm (2026)

Published March 16, 2026 · Updated March 2026 · By The Crossing Report · 13 min read

Summary

An AI policy for a professional services firm is a written document that tells your staff what AI tools they can use, what client data they can put into those tools, and what human review is required before any AI output goes to a client. You need one now because:

  • ABA Opinion 512 (2024) requires law firms to competently supervise AI use — which courts and bar associations are beginning to interpret as requiring documented standards
  • AICPA AI guidance (2025) places data confidentiality obligations on CPAs using AI with client tax and financial data
  • FTC March 2026 guidance warns all professional services firms that using AI outputs without human review in client-facing work may constitute deceptive trade practices
  • Colorado's CPAIA takes effect June 30, 2026, with $20,000-per-violation penalties for firms deploying high-risk AI without documented oversight processes

A minimum viable AI policy covers five things: approved tools, prohibited data types, required human review, client disclosure position, and a named compliance owner. This guide gives you the full framework and a copy-paste template you can adapt in an afternoon.


Why Every Professional Services Firm Needs an AI Policy Now

Three years ago, an AI policy was a large-firm concern. Today, the risk lands on firms of every size — because your staff is already using AI whether you have a policy or not.

The average knowledge worker in professional services now uses between two and five AI tools per week. Most of those tools are free-tier or personal subscriptions that offer no enterprise data protection. When a paralegal pastes a client's privileged deposition summary into ChatGPT to speed up a memo, that data may be used to train the model — and there's nothing preventing that unless your firm has a documented rule against it.

This is what regulators are responding to.

For law firms: ABA Formal Opinion 512 (2024) is the governing standard. It requires lawyers to understand AI tools well enough to use them competently, protect client confidentiality when using AI, and supervise non-lawyer use of AI in the same way lawyers supervise other non-lawyer work. A law firm that lets associates use any AI tool they want, with no oversight, is likely out of compliance with Opinion 512 today. Courts have already sanctioned attorneys for submitting AI-generated briefs that cited fabricated cases — and those attorneys didn't have AI policies.

For accounting firms: The AICPA's 2025 guidance on AI use by CPAs makes clear that client data confidentiality obligations extend to AI tools. A CPA who inputs a client's unreported income, SSN, or financial statements into a non-enterprise AI tool without a data processing agreement is taking on personal liability exposure. The AICPA's guidance doesn't mandate a written AI policy by name, but it assumes the kind of documented oversight that only a policy can create.

For all firms: The FTC's March 2026 guidance on AI and deceptive practices explicitly flags professional services contexts. If your firm uses AI to generate client analysis, recommendations, or reports and doesn't disclose that — or doesn't apply a meaningful human review — the FTC considers that a potential deceptive practice. This is new enforcement territory, but the guidance is clear.

The shadow AI problem: The biggest risk isn't the AI tools you've approved. It's the ones you haven't — the ones your staff downloaded last Tuesday because they heard about them at a conference. A written policy with a clear approved/prohibited list closes that gap. Without it, you have no grounds to say anything to a staff member who's been pasting client financials into a free consumer AI tool for six months.


What Goes Into a Professional Services AI Policy

Every firm's AI policy needs these five elements, regardless of firm type:

1. Approved Tool List with Account Tier Requirements

Name the specific tools your firm allows and specify which account tier is required for client-related work. The difference matters: a personal ChatGPT account offers no data protection; a ChatGPT Enterprise or Teams account includes data processing agreements and excludes your data from training.

The minimum bar for any AI tool used with client data: a Business Associate Agreement (BAA) for healthcare contexts, a Data Processing Addendum (DPA) for EU-adjacent work, and an enterprise-tier subscription that explicitly excludes your inputs from model training.

2. Prohibited Data Types

Be explicit. Staff need a clear list, not a vague "use good judgment."

Categorically prohibited in any AI tool (consumer or enterprise):

  • Social security numbers and EINs
  • Privileged attorney-client communications (pre-privilege attachment)
  • Unpublished financial statements or unreported income figures
  • Non-anonymized client-identifying information in free-tier tools
  • Information covered by an active NDA without the counterparty's written consent
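For firms that want a technical backstop behind this list, a lightweight pre-screen can flag obvious identifiers before text leaves a staff member's machine. The sketch below is illustrative only — the patterns and the `flag_prohibited_data` helper are our own, not part of any compliance standard — and it catches only well-formatted SSNs and EINs. It supplements the written policy; it doesn't replace it.

```python
import re

# Illustrative patterns only: formatted SSNs (123-45-6789) and EINs (12-3456789).
# Unformatted or partially redacted identifiers will slip through.
PROHIBITED_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EIN": re.compile(r"\b\d{2}-\d{7}\b"),
}

def flag_prohibited_data(text: str) -> list[str]:
    """Return the names of prohibited data types detected in `text`."""
    return [name for name, pattern in PROHIBITED_PATTERNS.items()
            if pattern.search(text)]

hits = flag_prohibited_data("Client SSN is 123-45-6789; see attached memo.")
print(hits)  # ['SSN']
```

A check like this could run as a pre-commit hook or a clipboard filter, but remember it only reduces accidental exposure — the policy's categorical prohibitions still govern.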

3. Human Review Requirements Before Client Delivery

No AI output goes to a client without a qualified human reviewing it. This is non-negotiable in professional services contexts where you have a duty of care. Specify:

  • Who must review (partner/principal level? senior associate?)
  • What the review must cover (accuracy of facts, completeness, appropriateness for this specific client situation)
  • Whether the output must be documented as reviewed before delivery
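If your policy requires documenting review before delivery, a minimal structured record is enough. The sketch below is one possible shape — the field names are our own, not drawn from any standard — and the same fields work equally well as columns in a spreadsheet your firm already keeps.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Illustrative record; field names are placeholders, not a regulatory schema.
@dataclass
class ReviewRecord:
    """Documents that a qualified human reviewed an AI output before delivery."""
    matter_id: str
    reviewer: str           # who reviewed (partner, principal, senior associate)
    reviewer_role: str
    checks_passed: list     # what the review covered
    reviewed_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

record = ReviewRecord(
    matter_id="2026-0417",
    reviewer="J. Alvarez",
    reviewer_role="Partner",
    checks_passed=["factual accuracy", "completeness", "client appropriateness"],
)
print(record.reviewer_role)  # Partner
```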

4. Client Disclosure Position

Does your firm disclose AI use to clients proactively, reactively (only when asked), or not at all? For law firms, ABA Opinion 512 and some state bar rules require disclosure of AI use in certain contexts. For all firms, the FTC March 2026 guidance makes non-disclosure of AI-generated work a potential deception issue. Your policy needs to take a clear position.

Recommended default for most professional services firms: Proactive disclosure in engagement letters, reactive detailed disclosure upon client request.

5. Compliance Owner Designation

Name a person who owns AI compliance at your firm. This doesn't have to be a new role — it's typically the managing partner, COO, or senior operations person. Their job is to approve new tools before adoption, review the policy annually, and be the point of contact if a client or regulator asks about your AI practices.


Law Firm AI Policy: Specific Requirements

Law firms face the most specific AI policy obligations of any professional services firm type, because ABA Opinion 512 and emerging state bar guidance create documented obligations.

The ABA 512 Checklist for Law Firms:

  • Competence: Before using any AI tool, the supervising attorney must understand how it works well enough to evaluate its output. Relying on AI you don't understand is a competence issue.
  • Confidentiality: Client information cannot be input into any AI tool unless: (a) the client has consented after full disclosure, or (b) the tool has a formal data protection agreement covering your firm's use.
  • Supervision: Non-lawyer AI use (paralegals, assistants) must be supervised the same way any non-lawyer work is supervised — meaning attorney review before client delivery, not just a spot check.
  • Candor/Fees: AI-assisted work that is billed at full attorney rates without disclosure may be a fee transparency issue in some jurisdictions. Check your state bar's specific guidance.

Engagement Letter AI Disclosure Clause (sample — adapt before use):

"Our firm may use AI-assisted drafting, research, and analysis tools in the course of providing legal services. All AI-generated work product is reviewed and verified by a licensed attorney before delivery. We use only enterprise-tier AI tools with data protection agreements that exclude client information from model training. If you have questions about our AI practices or wish to opt out of AI-assisted work on your matter, please contact us before we begin work."

Conflicts check: Some AI research tools aggregate data across multiple client matters. Before using any AI tool that could surface information across matters, verify it creates logical separation between client workspaces. A tool that lets a paralegal accidentally see patterns from another client's file creates a conflicts exposure.

See our related guide: AI Engagement Letter Compliance for Law Firms in 2026


Accounting Firm AI Policy: Specific Requirements

CPAs operate under confidentiality obligations that create specific risks when AI is involved — and the stakes run higher than most accountants realize.

AICPA AI Guidance (2025) Key Points:

The AICPA's 2025 position on AI and the CPA's professional obligations makes clear that:

  • Client data entered into AI tools is still subject to the confidentiality provisions of the AICPA Code of Professional Conduct
  • Using AI to prepare tax returns or financial analyses does not transfer professional responsibility to the AI — the CPA remains fully liable
  • Quality control procedures must extend to AI outputs, not just human-prepared work

Data Types Requiring Special Handling:

  • SSNs and EINs: Never enter into any AI tool that lacks an explicit data processing agreement. This includes free tiers of ChatGPT, Claude, Gemini, and others.
  • Unreported income or financial positions: Entering pre-filing financial data into a non-enterprise tool creates an undocumented data trail that could surface in an audit or dispute.
  • Tax returns before filing: Full return data (Schedule K, foreign income, asset details) should only be processed in purpose-built, compliant tax AI tools — not general-purpose models.

IRS E-file Security Requirements:

The IRS Publication 4557 (Safeguarding Taxpayer Data) applies to all authorized e-file providers. AI tools used in the tax preparation workflow must be evaluated against these requirements. A firm using a non-compliant tool risks losing its ERO status.

Sample Client Disclosure for Tax Clients:

"We use AI-assisted tools in tax preparation and financial analysis. All AI outputs are reviewed by a licensed CPA before delivery. Client data is only input into enterprise-tier tools with formal data protection agreements. Your information is never used to train AI models."


Consulting and Staffing Firm AI Policy: Specific Requirements

Consulting and staffing firms face two compliance risks that other professional services firms largely don't: NDA intersections and employment law exposure.

Client NDA Intersections:

If your consulting firm operates under a non-disclosure agreement with a client — which most do — that NDA almost certainly covers confidential strategic information, unreleased financial data, and proprietary methodologies. Using AI to process that information without first verifying that your AI tool's data handling complies with the NDA is a breach risk. Before inputting any NDA-covered material into an AI tool, verify the tool's terms explicitly address this, or get written consent from the counterparty.

FCRA Compliance for Staffing Firms Using AI in Hiring:

The Fair Credit Reporting Act (FCRA) applies to staffing firms using AI tools that compile, summarize, or score information about candidates. If your AI tool summarizes a candidate's background based on LinkedIn, professional records, or third-party data, and you use that summary to make a hiring or placement decision, you may have triggered FCRA's consumer reporting disclosure requirements. Illinois HB 3773 (live as of January 1, 2026) independently requires disclosure to candidates and employees when AI influences employment decisions.

Confidential Strategy and Financial Data:

Consulting firms often process confidential financial models, competitive intelligence, and M&A-sensitive information for clients. The bar for AI tool approval is higher here: enterprise data isolation (not just enterprise pricing) and contractual confirmation that your inputs are excluded from model training are the minimum.


One-Page AI Policy Template (Copy and Adapt)

The following is a minimum viable AI policy for a professional services firm. It is intentionally concise — one page that your staff will actually read. Add your firm-specific section from above.


[FIRM NAME] AI Use Policy Effective: [DATE] | Review Date: [DATE + 6 MONTHS] | Compliance Owner: [NAME/TITLE]


Purpose

This policy governs how [Firm Name] staff use artificial intelligence (AI) tools in client work and internal operations. It protects client confidentiality, ensures the quality of our work, and keeps us in compliance with professional obligations.

Approved Tools

Staff may use the following AI tools for client-related work:

Tool | Approved Tier | Approved For
[e.g., ChatGPT Teams] | Enterprise | Drafting, summarization
[e.g., Microsoft Copilot] | Enterprise | Document analysis, internal work
[e.g., Harvey AI] | Enterprise | [Law firms: legal research, drafting]

Personal or free-tier AI tools may NOT be used for any client-related work. This includes free ChatGPT accounts, Google Gemini (free), and any tool without a data processing agreement on file with the firm.

Requests to add new tools go to [Compliance Owner]. New tools must be approved before use.
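Some firms keep the approved-tool list in a machine-readable form alongside the policy, so the default answer for anything unregistered is automatically "no." The sketch below is one way to do that; the tool names and tiers are placeholders from the template above, and the `allowed_for_client_work` helper is our own illustration.

```python
# Illustrative registry — tool names and tiers are placeholders from the
# template; the real list lives wherever your compliance owner maintains it.
APPROVED_TOOLS = {
    "chatgpt-teams":     {"tier": "enterprise", "client_work": True},
    "microsoft-copilot": {"tier": "enterprise", "client_work": True},
    "chatgpt-free":      {"tier": "personal",   "client_work": False},
}

def allowed_for_client_work(tool: str) -> bool:
    """True only if the tool is registered AND approved for client work."""
    entry = APPROVED_TOOLS.get(tool)
    return bool(entry and entry["client_work"])

print(allowed_for_client_work("chatgpt-teams"))  # True
print(allowed_for_client_work("chatgpt-free"))   # False
print(allowed_for_client_work("new-tool"))       # False: unregistered = prohibited
```

The design choice worth copying is the default: a tool not on the list is treated as prohibited until the compliance owner approves it, which is exactly the policy's rule.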

Prohibited Data in AI Tools

Do not input the following into any AI tool — even approved enterprise tools — without written approval from the Compliance Owner:

  • Client social security numbers, EINs, or tax identification numbers
  • Privileged communications (attorney-client or subject to another professional privilege)
  • Information subject to an active NDA with a third party
  • Unpublished financial data (earnings, transactions, unreported positions)
  • Any data that identifies a specific individual by name in combination with financial, health, or legal information

Human Review Requirement

No AI-generated output is delivered to a client without review by a qualified professional at [Firm Name]. "Reviewed" means the reviewing professional has verified accuracy and appropriateness for this specific client situation — not just spell-checked the output.

Client Disclosure

We disclose AI use to clients in our engagement letters. If a client asks directly whether AI was used on their work, the answer is honest and complete. We do not misrepresent AI-generated work as entirely human-produced.

[FIRM TYPE] Compliance Overlay

[Insert applicable section from above: Law Firm / Accounting Firm / Consulting or Staffing Firm]

Violations

Using unapproved AI tools for client work or violating this policy's data restrictions will result in a corrective conversation and may result in formal disciplinary action. We take this seriously not because of bureaucracy, but because data breaches and professional liability claims hurt clients, the firm, and your professional license.

Policy Owner: [NAME] | Questions: [EMAIL]


Document version: 1.0 | Next review: [DATE]


What to Do This Week

If your firm doesn't have an AI policy yet, here's your 90-minute path to having one:

  1. Pull the template above. Open it in a Google Doc or Word file right now.
  2. Fill in the tool list. What AI tools are people using at your firm? You may need to ask staff — be ready to hear tools you didn't know about.
  3. Identify your account tiers. For each tool on the list, check: is it a personal account, a business account, or an enterprise account? Any personal accounts being used for client work need to be upgraded or removed.
  4. Add your firm-type overlay. Use the relevant section from this guide (law, accounting, consulting/staffing).
  5. Name a compliance owner. If you're the managing partner, it's probably you until someone else is ready to own it.
  6. Send it to staff at your next team meeting. A policy nobody knows about doesn't protect anyone.

If your firm already has an AI policy, check it against two things: Does it cover shadow AI (tools not on your approved list)? Does it address Colorado's CPAIA if you have any clients there?



The Crossing Report covers AI adoption for professional services firm owners. Subscribe for weekly field reports on what's working — and what isn't — at firms like yours.
