The Official Internal Controls Framework for AI Is Here — What COSO's New Guidance Means for CPA Firm Owners

Published March 17, 2026 · By The Crossing Report

If you're a CPA firm owner, you learned COSO in your first audit training. The five-component framework — Control Environment, Risk Assessment, Control Activities, Information and Communication, Monitoring Activities — is the backbone of every internal controls engagement you've ever done.

In February 2026, COSO published formal guidance for governing generative AI systems. It's the first time the definitive internal controls framework has been applied directly to AI oversight — and it matters to your firm in ways that most of the noise around "AI governance" doesn't.

This isn't a vendor white paper. It isn't a bar association recommendation. It's COSO. When your clients ask how you govern AI in your practice, there's now a standard you can point to that they'll recognize.

Why This Guidance Arrived Now

COSO didn't publish AI guidance because AI is new. They published it because AI governance in professional services had no authoritative standard — and the gaps were showing up in audit work, client engagements, and insurance claims.

The Journal of Accountancy covered the release, describing it as "audit-ready guidance for governing generative AI" — designed so that accounting firms can structure their own AI oversight posture and explain it to clients in terms that carry professional weight.

The profession needed this. Here's what it says.

The Five Components, Applied to AI

COSO's AI governance guidance maps directly onto the existing five-component framework. Here's the translation for a 5-to-20-person CPA firm:

1. Control Environment

Governance begins at the top. Tone at the top — the principle that says partners model the behavior they expect from staff — applies to AI just as it does to client billing integrity or time reporting.

What this looks like in practice: the firm's owners have a documented position on AI use. Not a paragraph in the employee handbook — a stated standard that partners model visibly. If partners use personal ChatGPT for client work while staff are told not to, the control environment has failed before it started.

2. Risk Assessment

This is where most small firms have the biggest gap, and where COSO is most specific.

AI risk assessment requires the firm to identify and evaluate:

  • the AI tools currently in use, including tools staff use without formal authorization
  • the types of data those tools process
  • the vendors' data handling terms
  • the failure modes of AI outputs in your specific workflows
  • the regulatory exposure if those failure modes occur

For a 10-person CPA firm, this is a two-hour inventory exercise. Go tool by tool and ask: what data goes in, what terms govern that data, and what happens if the output is wrong and reaches a client? Document it. The risk assessment isn't the policy — it's the foundation for the policy.

3. Control Activities

Control activities are the specific procedures that prevent risk from materializing. COSO's AI guidance identifies three categories of control activities for accounting firms using generative AI:

  • Authorization controls: which AI tools are approved for which types of work, and who authorized each
  • Output review controls: a human review requirement before any AI-generated content is delivered to a client or filed with a regulatory body
  • Scope controls: defined boundaries on what AI tools may do in your workflows — specifically, where AI output becomes the input for client-facing work without additional professional judgment applied

The output review control is the one most small firms have the weakest documentation on. "We always review before sending" is not a control — it's an intention. A documented review requirement with a clear workflow checkpoint is a control.

4. Information and Communication

Your staff needs to know the AI governance policy exists, understand what it requires, and know how to raise a concern when something seems wrong.

COSO is explicit here: governance that isn't communicated isn't governance. The communication requirement has two directions — down to staff (here's the policy, here's what you're required to do) and up to firm leadership (here's what's happening, here's where we've had issues, here's what's changed in the AI tool landscape since we last reviewed).

For a small firm, annual communication is a minimum. Given how fast AI tools are evolving in 2026, semi-annual review is more defensible.

5. Monitoring Activities

Monitoring is how you know whether the other four components are actually working. COSO's AI monitoring guidance asks: are the approved tools still the tools being used? Are output reviews actually happening, or are they being skipped under deadline pressure? Have any AI tools changed their terms of service or data handling policies since you authorized them?

The monitoring frequency that COSO implies for AI is more active than for traditional internal controls, specifically because the AI tool environment changes faster than financial reporting standards do. A quarterly check of your AI vendor terms is now a reasonable governance standard.

The Client-Facing Angle You May Be Missing

Here's the piece most CPA firm owners haven't thought through yet: COSO's guidance doesn't just protect your firm. It gives you a credible professional answer when clients ask about your AI practices.

In 2026, more of your clients are asking. Enterprise clients with procurement processes are starting to include AI governance questions in vendor assessments. Clients who've read about AI errors in financial work are starting to ask their CPA firms directly. Clients undergoing their own COSO-based internal control reviews are noticing that their auditors haven't applied the same framework to themselves.

Being able to say "we've implemented COSO's five-component AI governance framework in our practice, and I can walk you through each component" is qualitatively different from "we're being careful." It's professional. It's auditable. And it demonstrates that you apply to your own operations the standards you use to evaluate your clients.

That's a competitive position, not just a compliance checkbox.

What to Do This Week

You don't need to build a formal governance program overnight. Here's a three-step sequence that gets you from zero to defensible:

Step 1: Inventory (Day 1)
List every AI tool currently in use in your firm — including tools staff might be using informally. For each tool: what data does it process, who authorized it, and what are the vendor's data handling terms? This is your risk assessment foundation.

Step 2: Policy (Days 2-3)
Write a one-page AI use policy that covers: an approved tools list, a client data handling rule (which tools can and cannot process client data), an output review requirement (no AI-generated content to clients without licensed professional review), and a process for adding new tools to the approved list. You can do this in an afternoon.

Step 3: Communication (Days 4-5)
Share the policy with your team. Explain why it exists. Answer their questions. Document that you did it. Schedule a quarterly check-in to review the approved tools list and verify terms haven't changed.

This sequence implements all five COSO components at a scale appropriate for a small firm. It's not a SOX compliance program. It's the minimum viable governance posture that lets you answer confidently when someone asks how you govern AI in your practice.


COSO published the standard. The profession has the framework. The only thing left is the two hours it takes to apply it.

Source: Journal of Accountancy — COSO creates audit-ready guidance for governing generative AI

Frequently Asked Questions

What is COSO and why does their AI guidance matter for CPA firms?

COSO — the Committee of Sponsoring Organizations of the Treadway Commission — is the organization that created the Internal Control — Integrated Framework, the standard that nearly every CPA in the US learned in school and that drives SOC 2, Sarbanes-Oxley, and most audit and internal control work. When COSO publishes guidance, it carries CPA-profession authority that no vendor white paper or bar association opinion can match. Their February 2026 guidance on governing generative AI is the first time the definitive internal controls framework has been applied specifically to AI oversight — making it the closest thing to an authoritative playbook for accounting firm AI governance that currently exists.

What does COSO's AI governance guidance actually say?

COSO applied its five-component framework — Control Environment, Risk Assessment, Control Activities, Information and Communication, and Monitoring Activities — to generative AI systems. Key positions: AI governance begins at the top (tone at the top applies to AI use, not just financial controls); risk assessment for AI must include model output risk, data privacy risk, and third-party AI vendor risk; control activities must include human review requirements for AI outputs before client delivery; monitoring must cover whether AI tools remain within approved use boundaries over time. The guidance is explicit that these aren't aspirational best practices — they're control requirements for firms that claim competency and integrity in their professional work.

Does COSO's guidance apply to small CPA firms, or only large ones?

COSO explicitly addresses scalability. The five-component framework scales down: a solo practitioner can implement all five components through personal discipline and documented process. A 10-person firm needs slightly more formality — a written policy, a designated partner responsible for AI oversight, and a basic monitoring process. The core requirements don't change based on firm size: AI must be governed, outputs must be reviewed, and the firm must be able to demonstrate competency in AI oversight when a client, regulator, or opposing counsel asks. The difference is that a Big Four firm needs a department; a 5-person firm needs an afternoon.

How does COSO's AI framework relate to SOC 2 or other compliance frameworks my clients care about?

COSO's framework is the foundation that SOC 2's trust service criteria build on. If your firm uses AI tools in any client-serving workflow and produces SOC reports, the COSO AI governance guidance is directly relevant to what your clients can expect you to control and document. More practically: if a client asks their auditor (you) how you govern AI in your own work, COSO gives you a credible, authoritative answer. "We've implemented COSO's five-component AI governance framework" is a professional response. "We're figuring it out as we go" is not.

What's the most urgent COSO AI governance action for a small CPA firm in 2026?

Risk Assessment — the second component. Most small firms are using AI tools that nobody has formally assessed for risk. The questions COSO requires you to be able to answer: What AI tools are in use in your firm right now? Who authorized their use? What types of client data do they process? What is the vendor's data handling policy? What happens to client data after it's processed? What oversight exists to catch wrong outputs before they reach clients? Most small accounting firm owners can't answer these questions for every AI tool their staff is currently using. The COSO risk assessment process is the mechanism for closing that gap — and the February 2026 guidance tells you exactly how to structure it.
