The Crossing Report — Special Edition

How to Write an AI Policy for a Professional Services Firm (2026)

Updated March 2026 · By Martin Adey · 15 min read

Summary

An AI policy for a professional services firm is a written document that tells your staff what AI tools they can use, what client data they can put into those tools, and what human review is required before any AI output goes to a client. You need one now because ABA Opinion 512 requires law firms to document AI supervision standards, AICPA guidance places data confidentiality obligations on CPAs using AI, FTC March 2026 guidance warns that AI outputs without human review may constitute deceptive trade practices, and Colorado's CPAIA takes effect June 30, 2026 with $20,000-per-violation penalties. A minimum viable AI policy covers five things: approved tools, prohibited data types, required human review, client disclosure position, and a named compliance owner. This guide gives you the full framework and a copy-paste template you can adapt in an afternoon.

Why Every Professional Services Firm Needs an AI Policy Now

Three years ago, an AI policy was a large-firm concern. Today, the risk lands on firms of every size — because your staff is already using AI whether you have a policy or not.

The average knowledge worker in professional services now uses between two and five AI tools per week. Most of those tools are free-tier or personal subscriptions that offer no enterprise data protection. When a paralegal pastes a client's privileged deposition summary into ChatGPT to speed up a memo, that data may be used to train the model — and there's nothing preventing that unless your firm has a documented rule against it.

This is what regulators are responding to.

For Law Firms

ABA Formal Opinion 512 (2024) is the governing standard. It requires lawyers to understand AI tools well enough to use them competently, protect client confidentiality when using AI, and supervise non-lawyer use of AI in the same way lawyers supervise other non-lawyer work. A law firm that lets associates use any AI tool they want, with no oversight, is likely out of compliance with Opinion 512 today.

For Accounting Firms

The AICPA's 2025 guidance on AI use by CPAs makes clear that client data confidentiality obligations extend to AI tools. A CPA who inputs a client's unreported income, SSN, or financial statements into a non-enterprise AI tool without a data processing agreement is taking on personal liability exposure. The AICPA's guidance assumes the kind of documented oversight that only a policy can create.

For All Firms

The FTC's March 2026 guidance on AI and deceptive practices explicitly flags professional services contexts. If your firm uses AI to generate client analysis, recommendations, or reports and doesn't disclose that — or doesn't apply a meaningful human review — the FTC considers that a potential deceptive practice.

The Shadow AI Problem

The biggest risk isn't the AI tools you've approved. It's the ones you haven't — the ones your staff downloaded last Tuesday because they heard about them at a conference. A written policy with a clear approved/prohibited list closes that gap. Without it, you have no grounds to say anything to a staff member who's been pasting client financials into a free consumer AI tool for six months.

What Goes Into a Professional Services AI Policy

Every firm's AI policy needs these five elements, regardless of firm type:

1. Approved Tool List with Account Tier Requirements

Name the specific tools your firm allows and specify which account tier is required for client-related work. The difference matters: a personal ChatGPT account offers no data protection; a ChatGPT Enterprise or Teams account includes data processing agreements and excludes your data from training.

The minimum bar for any AI tool used with client data: a Business Associate Agreement (BAA) for healthcare contexts, a Data Processing Addendum (DPA) for EU-adjacent work, and an enterprise-tier subscription that explicitly excludes your inputs from model training.

2. Prohibited Data Types

Be explicit. Staff need a clear list, not a vague “use good judgment.”

Categorically prohibited in any AI tool (consumer or enterprise):

  • Social security numbers and EINs
  • Privileged attorney-client communications (pre-privilege attachment)
  • Unpublished financial statements or unreported income figures
  • Non-anonymized client identifying information in free-tier tools
  • Information covered by an active NDA without the counterparty's written consent

3. Human Review Requirements Before Client Delivery

No AI output goes to a client without a qualified human reviewing it. This is non-negotiable in professional services contexts where you have a duty of care. Specify who must review (partner/principal level?), what the review must cover (accuracy of facts, completeness, appropriateness for this client situation), and whether the output must be documented as reviewed before delivery.

4. Client Disclosure Position

Does your firm disclose AI use to clients proactively, reactively (only when asked), or not at all? For law firms, ABA Opinion 512 and some state bar rules require disclosure of AI use in certain contexts. For all firms, the FTC March 2026 guidance makes non-disclosure of AI-generated work a potential deception issue. Your policy needs to take a clear position.

Recommended default

Proactive disclosure in engagement letters, reactive detailed disclosure upon client request.

5. Compliance Owner Designation

Name a person who owns AI compliance at your firm. This doesn't have to be a new role — it's typically the managing partner, COO, or senior operations person. Their job is to approve new tools before adoption, review the policy annually, and be the point of contact if a client or regulator asks about your AI practices.

Law Firm AI Policy: Specific Requirements

Law firms face the most specific AI policy obligations of any professional services firm type, because ABA Opinion 512 and emerging state bar guidance create documented obligations.

The ABA 512 Checklist for Law Firms

  • Competence: Before using any AI tool, the supervising attorney must understand how it works well enough to evaluate its output. Relying on AI you don't understand is a competence issue.
  • Confidentiality: Client information cannot be input into any AI tool unless the client has consented after full disclosure, or the tool has a formal data protection agreement covering your firm's use.
  • Supervision: Non-lawyer AI use must be supervised the same way any non-lawyer work is supervised — meaning attorney review before client delivery, not just a spot check.
  • Candor/Fees: AI-assisted work billed at full attorney rates without disclosure may be a fee transparency issue in some jurisdictions. Check your state bar's specific guidance.

Engagement Letter AI Disclosure Clause (sample — adapt before use):

“Our firm may use AI-assisted drafting, research, and analysis tools in the course of providing legal services. All AI-generated work product is reviewed and verified by a licensed attorney before delivery. We use only enterprise-tier AI tools with data protection agreements that exclude client information from model training. If you have questions about our AI practices or wish to opt out of AI-assisted work on your matter, please contact us before we begin work.”

See our related guide: AI Engagement Letter Compliance for Law Firms in 2026

Accounting Firm AI Policy: Specific Requirements

CPAs operate under confidentiality obligations that create specific risks when AI is involved — and the stakes run higher than most accountants realize.

AICPA AI Guidance (2025) Key Points

  • Client data entered into AI tools is still subject to the confidentiality provisions of the AICPA Code of Professional Conduct
  • Using AI to prepare tax returns or financial analyses does not transfer professional responsibility to the AI — the CPA remains fully liable
  • Quality control procedures must extend to AI outputs, not just human-prepared work

Data Types Requiring Special Handling

  • SSNs and EINs: Never enter into any AI tool that lacks an explicit data processing agreement. This includes free tiers of ChatGPT, Claude, Gemini, and others.
  • Pre-filing data: Entering pre-filing financial data into a non-enterprise tool creates an undocumented data trail that could surface in an audit or dispute.
  • Full returns: Full return data (Schedule K, foreign income, asset details) should only be processed in purpose-built, compliant tax AI tools — not general-purpose models.

Sample Client Disclosure for Tax Clients:

“We use AI-assisted tools in tax preparation and financial analysis. All AI outputs are reviewed by a licensed CPA before delivery. Client data is only input into enterprise-tier tools with formal data protection agreements. Your information is never used to train AI models.”

Consulting and Staffing Firm AI Policy: Specific Requirements

Consulting and staffing firms face two compliance risks that other professional services firms largely don't: NDA intersections and employment law exposure.

Client NDA Intersections

If your consulting firm operates under a non-disclosure agreement with a client — which most do — that NDA almost certainly covers confidential strategic information, unreleased financial data, and proprietary methodologies. Using AI to process that information without first verifying that your AI tool's data handling complies with the NDA is a breach risk.

FCRA Compliance for Staffing Firms Using AI in Hiring

The Fair Credit Reporting Act (FCRA) applies to staffing firms using AI tools that compile, summarize, or score information about candidates. If your AI tool summarizes a candidate's background based on LinkedIn, professional records, or third-party data, and you use that summary to make a hiring or placement decision, you may have triggered FCRA's consumer reporting disclosure requirements. Illinois HB 3773 (live as of January 1, 2026) independently requires disclosure to candidates and employees when AI influences employment decisions.

One-Page AI Policy Template (Copy and Adapt)

The following is a minimum viable AI policy for a professional services firm. It is intentionally concise — one page that your staff will actually read. Add your firm-specific section from above.

[FIRM NAME] AI Use Policy

Effective: [DATE] | Review Date: [DATE + 6 MONTHS] | Compliance Owner: [NAME/TITLE]

Purpose

This policy governs how [Firm Name] staff use artificial intelligence (AI) tools in client work and internal operations. It protects client confidentiality, ensures the quality of our work, and keeps us in compliance with professional obligations.

Approved Tools

Staff may use the following AI tools for client-related work:

Tool | Approved Tier | Approved For
[e.g., ChatGPT Teams] | Enterprise | Drafting, summarization
[e.g., Microsoft Copilot] | Enterprise | Document analysis, internal work
[e.g., Harvey AI] | Enterprise | [Law firms: legal research, drafting]

Personal or free-tier AI tools may NOT be used for any client-related work.

Requests to add new tools go to [Compliance Owner]. New tools must be approved before use.

Prohibited Data in AI Tools

Do not input the following into any AI tool — even approved enterprise tools — without written approval from the Compliance Owner:

  • Client social security numbers, EINs, or tax identification numbers
  • Privileged communications (attorney-client or subject to another professional privilege)
  • Information subject to an active NDA with a third party
  • Unpublished financial data (earnings, transactions, unreported positions)
  • Any data that identifies a specific individual by name in combination with financial, health, or legal information

Human Review Requirement

No AI-generated output is delivered to a client without review by a qualified professional at [Firm Name]. “Reviewed” means the reviewing professional has verified accuracy and appropriateness for this specific client situation — not just spell-checked the output.

Client Disclosure

We disclose AI use to clients in our engagement letters. If a client asks directly whether AI was used on their work, the answer is honest and complete. We do not misrepresent AI-generated work as entirely human-produced.

Violations

Using unapproved AI tools for client work or violating this policy's data restrictions will result in a corrective conversation and may result in formal disciplinary action.

Document version: 1.0 | Next review: [DATE]

What to Do This Week

If your firm doesn't have an AI policy yet, here's your 90-minute path to having one:

  1. Pull the template above. Open it in a Google Doc or Word file right now.
  2. Fill in the tool list. What AI tools are people using at your firm? You may need to ask staff — be ready to hear tools you didn't know about.
  3. Identify your account tiers. For each tool on the list, check: is it a personal account, a business account, or an enterprise account? Any personal accounts being used for client work need to be upgraded or removed.
  4. Add your firm-type overlay. Use the relevant section from this guide (law, accounting, consulting/staffing).
  5. Name a compliance owner. If you're the managing partner, it's probably you until someone else is ready to own it.
  6. Send it to staff at your next team meeting. A policy nobody knows about doesn't protect anyone.

If your firm already has an AI policy, check it against two things: Does it cover shadow AI (tools not on your approved list)? Does it address Colorado's CPAIA if you have any clients there?

Premium Content

AI Policy Template Package — All Four Firm Types

Premium subscribers get the complete AI Policy Template Package: firm-specific policy templates for law firms, accounting firms, consulting firms, and staffing agencies; engagement letter AI clauses; a compliance checklist mapping to ABA Opinion 512, AICPA guidance, FTC March 2026 guidance, and Colorado CPAIA; and a policy review checklist for annual updates.


$19/month · Cancel anytime · First issue free

Frequently Asked Questions About AI Policies for Firms

Q: Does a small firm really need a written AI policy?

A: Yes — and especially a small firm. Large firms have compliance departments and general counsel to catch problems. A 10-person accounting or law firm doesn't. If a staff member inputs client tax data into a free AI tool, there's no enterprise data agreement protecting that data. A written policy — even one page — creates the guardrails that prevent that from happening. ABA Opinion 512 and AICPA AI guidance both assume competent supervision of AI tools, which requires documented standards.

Q: How often should an AI policy be updated?

A: At minimum, every six months. The AI tool landscape is changing fast enough that a policy written in January 2025 may not cover tools your staff adopted by July 2025. Build in a scheduled review — tie it to a date you already own, like a year-end planning session or mid-year team meeting. When a major new regulation lands (like Colorado's CPAIA June 2026 deadline), review immediately.

Q: What happens if staff use unapproved AI tools?

A: Absent a policy, probably nothing formal — but the data exposure risk is real and the liability follows the firm, not the employee. With a policy in place, you have clear grounds for a corrective conversation, and you can demonstrate good-faith compliance efforts if a client or regulator ever asks. The policy converts an ambiguous data security incident into a documented process violation you can actually manage.

Q: Does our AI policy need to address Colorado's AI Act?

A: If any of your clients, candidates, or customers are in Colorado, yes. Colorado's Consumer Protections for Artificial Intelligence Act (CPAIA) takes effect June 30, 2026 and requires firms deploying high-risk AI systems to document human oversight, maintain records, and provide disclosures. A firm AI policy that identifies your tools and designates a compliance owner is the foundation of CPAIA compliance. See our full guide to the June 30, 2026 AI compliance deadline.

Q: Can we use one policy for all firm types?

A: The core framework can be shared — approved tools, prohibited data types, human review requirements, and compliance ownership are universal. The firm-specific sections need to be tailored. A law firm's policy must address ABA 512 (competence, confidentiality, supervision) and include engagement letter language. An accounting firm's must address AICPA guidance and IRS e-file security. A staffing firm's must address FCRA. The template in this guide includes all four firm-type overlays so you can pull only what applies.

Sources & Further Reading

  • American Bar Association — Formal Opinion 512 (2024): Generative AI tools, competence, and professional responsibility for lawyers
  • AICPA — AI use guidance for CPAs (2025): confidentiality obligations, quality control, and professional responsibility
  • Federal Trade Commission — Policy guidance on AI and consumer protection law (March 2026)
  • Colorado General Assembly — Consumer Protections for Artificial Intelligence Act (CPAIA), effective June 30, 2026


Get weekly AI intelligence for professional services firms — free.

Every week: the one AI development that matters most to accounting, law, and consulting firm owners — with specific next steps for your kind of firm.

Free weekly digest. No spam. Unsubscribe anytime.