BigLaw Just Got AI-Certified — Here's the Small-Firm Version of That Credential

Published: March 14, 2026 | By: The Crossing Report | 6 min read


Summary

On March 9, 2026, K&L Gates became one of the first law firms globally to earn ISO/IEC 42001:2023 certification — the international standard for AI Management Systems. Small law firms don't need ISO 42001. But the certification signals what enterprise clients will eventually ask of their outside counsel, including the small regional and boutique firms they also use. Here's what the standard actually requires, and a four-element governance program any small firm can implement this week.


What K&L Gates Did and Why It Matters

K&L Gates is a global Am Law 100 law firm. On March 9, 2026, it earned ISO/IEC 42001:2023 certification for its AI Management System (AIMS).

ISO 42001 establishes a comprehensive framework for responsible AI governance. The standard covers:

  • Accountability: Who in the organization is responsible for AI decision-making, oversight, and escalation
  • Risk management: Identifying, assessing, and mitigating risks from AI use in client work
  • Ethics and transparency: Commitments to explainability, fairness, and honest communication about AI use
  • Data protection: Rules for what data may be processed through AI systems and under what safeguards
  • Regulatory compliance: Alignment with applicable AI regulations (EU AI Act, state laws, professional licensing requirements)

K&L Gates joins a small group of organizations globally that have achieved this certification. Legal IT Insider and Canadian Lawyer both covered the announcement — it's generating attention in legal trade press because it represents a meaningful step: a major law firm submitting its AI practices to external audit and certification.

Why does a small firm in Georgia or Alberta care what a 1,700-lawyer global firm did?

Because of what it signals about the direction of client expectations.


The 12-24 Month Lag

In professional services, compliance and governance requirements follow a predictable pattern: large firms adopt them first, usually in response to enterprise client demands; the requirements then trickle down to smaller regional and boutique firms over the following 12-24 months as those clients extend the same expectations to all their outside counsel.

This happened with cybersecurity. Five years ago, only large firms had formal information security policies, penetration testing, and SOC 2 compliance. Today, corporate legal departments routinely ask their small outside law firms for cybersecurity documentation before engaging them.

ISO 42001 is likely to follow the same path. Not because every client will demand formal certification from their 8-person regional firm — but because the clients who work with K&L Gates today will expect some version of the governance substance from all their outside counsel in two to three years.

The four elements that ISO 42001 audits — AI use policy, output verification protocol, client disclosure language, data handling procedures — are exactly what a small firm needs to have in place anyway, regardless of certification.


The Small-Firm Version: Four Elements

You don't need a certification authority, a compliance consultant, or a multi-month implementation project. You need four documents.

Element 1: Approved AI Tools List

What: A list of the AI tools your firm permits for different categories of work, updated at least quarterly.

Minimum content:

  • Tool name and vendor
  • Approved use cases (e.g., "document drafting," "legal research," "client communication drafting")
  • Prohibited use cases (e.g., "uploading privileged client documents to tools without a data processing agreement")
  • Data classification (what types of client data may be processed through each tool)

Why this matters: ABA Formal Opinion 512 requires lawyers to maintain "reasonable understanding" of the AI tools they use. A documented approved tools list is evidence of that understanding. It also reduces shadow AI risk — staff using tools the firm hasn't evaluated.
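For firms that want the approved tools list in a structured, machine-readable form — so it can double as an intranet checklist or feed a quarterly review — the register might be sketched roughly as follows. Every tool name, vendor, and policy entry here is purely illustrative, not a recommendation:

```python
from dataclasses import dataclass

@dataclass
class ApprovedTool:
    """One entry in the firm's approved AI tools register."""
    name: str              # tool name
    vendor: str            # vendor name
    approved_uses: list    # permitted use cases
    prohibited_uses: list  # explicit prohibitions
    data_classes: set      # client data classes the tool may process

# Illustrative register -- every entry is hypothetical.
REGISTER = [
    ApprovedTool(
        name="DraftAssist",
        vendor="ExampleVendor Inc.",
        approved_uses=["document drafting", "legal research"],
        prohibited_uses=["client communication without attorney review"],
        data_classes={"public", "internal"},   # no privileged data
    ),
]

def is_permitted(tool_name: str, use_case: str, data_class: str) -> bool:
    """Check a proposed use against the register; unknown tools fail closed."""
    for tool in REGISTER:
        if tool.name == tool_name:
            return (use_case in tool.approved_uses
                    and data_class in tool.data_classes)
    return False  # tool not on the list = shadow AI, not permitted

print(is_permitted("DraftAssist", "document drafting", "internal"))  # True
print(is_permitted("UnknownTool", "document drafting", "public"))    # False
```

The fail-closed default matters: any tool not on the list is treated as unapproved, which is exactly the shadow-AI control Opinion 512's "reasonable understanding" duty implies.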

Element 2: Output Verification Protocol

What: A clear standard requiring that AI-generated work product be reviewed and approved by a licensed attorney before delivery to a client or court.

Minimum content:

  • Statement that AI output is never delivered to clients without attorney review
  • Description of what "review" means for different output types (e.g., "substantive review and editing of drafted documents; verification of all citations in research output")
  • Designation of who is responsible for review in different contexts

Why this matters: This is the substance of professional responsibility. The attorney who sends a document to a client or court is responsible for its accuracy, regardless of how the draft was generated. A documented protocol demonstrates that the firm takes this seriously — and it is evidence of competence if a complaint ever arises.

Element 3: Client Disclosure Language

What: Standard engagement letter language disclosing AI use.

Template:

"Our firm uses AI-assisted tools in client service delivery, including document drafting, legal research, and communication drafting. All AI-assisted work product is reviewed and approved by a licensed attorney before delivery. Our AI use complies with ABA Formal Opinion 512 and applicable state bar guidance. If you have questions about our AI practices, please ask your attorney."

Why this matters: In a growing number of jurisdictions this is a professional responsibility expectation, not merely a best practice. ABA Opinion 512 and several state bar ethics opinions indicate that disclosure is required in at least some circumstances, and some courts now require disclosure of AI use in filings. Getting this language into your engagement letter now is simpler than retrofitting it after a complaint.

Element 4: Client Data Handling Rules

What: Documentation of which AI vendors have appropriate data protection agreements, and what client information may be sent to which tools.

Minimum content:

  • Which AI tools have a signed Data Processing Agreement (DPA) or equivalent terms appropriate for legal client data
  • Which tools are explicitly prohibited from receiving personally identifiable information, privileged communications, or protected health information
  • The standard for evaluating new AI tools before use with client data

Why this matters: Privilege and client confidentiality are non-negotiable professional responsibilities. Sending privileged client communications, without a DPA in place, to a vendor whose terms allow training on user data is a professional responsibility problem, not just a privacy problem. Document your evaluation process.
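As a sketch of how the data handling rules might be operationalized alongside the tools list — the tool names, DPA flags, and data categories below are entirely hypothetical:

```python
# Categories that must never leave the firm unless the receiving
# tool has a signed Data Processing Agreement (DPA).
PROHIBITED_WITHOUT_DPA = {"privileged", "phi"}

# Hypothetical matrix: tool -> (has signed DPA, allowed data categories)
TOOL_DATA_RULES = {
    "DraftAssist":    (True,  {"public", "internal", "pii"}),
    "QuickSummarize": (False, {"public"}),  # no DPA: public data only
}

def may_send(tool: str, category: str) -> bool:
    """Fail closed: unknown tools or categories are never permitted."""
    if tool not in TOOL_DATA_RULES:
        return False
    has_dpa, allowed = TOOL_DATA_RULES[tool]
    if category in PROHIBITED_WITHOUT_DPA and not has_dpa:
        return False
    return category in allowed

print(may_send("DraftAssist", "pii"))        # True
print(may_send("QuickSummarize", "pii"))     # False
print(may_send("SomeOtherTool", "public"))   # False
```

Even as a ten-line sketch, this captures the evaluation standard Element 4 asks for: a written answer, per tool, to "what client data may go here, and under what agreement?"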


Communicating Your Governance to Clients

The sophisticated clients who will eventually ask about your AI governance don't want to see a certificate — they want to know you've thought carefully about it.

The simplest communication approach:

On your website: Add a one-paragraph "AI at our firm" section to your methodology or process page. State that you have an AI use policy, that all AI output is reviewed by a licensed attorney before delivery, and that AI use is disclosed in every engagement letter.

In your engagement letter: Use the disclosure language above.

When clients ask: Be specific. "We use [tool names] for [specific tasks]. Every piece of AI-assisted work is reviewed by a licensed attorney. We have a written AI use policy that documents what tools we permit and what data each tool may access. We're happy to share it."


The Timeline

K&L Gates earned its certification on March 9, 2026. Trade press coverage will circulate through April. Corporate legal departments that work with K&L Gates will note it.

Within 6-12 months, procurement questionnaires at sophisticated corporate clients will begin including AI governance questions. Within 18-24 months, smaller firms will start fielding those questions.

Building the four-element program above takes a day. Building it after a client asks for it — under pressure, with a relationship at stake — takes significantly longer.

The firms that will be ahead of this aren't the ones waiting for the RFP to arrive. They're the ones writing their AI use policy this week.


Frequently Asked Questions

What is ISO/IEC 42001:2023 and why does K&L Gates getting certified matter?

ISO/IEC 42001:2023 is an international standard for AI Management Systems (AIMS). It establishes a formal framework for responsible AI development and oversight covering accountability, risk management, ethics, transparency, data protection, and regulatory compliance. K&L Gates — a global Am Law 100 firm — earned the certification on March 9, 2026, making it one of the first law firms globally to achieve the standard. It matters because certification by a firm of K&L Gates's size signals that enterprise clients are beginning to expect AI governance credentials from their outside counsel. It's the same progression that happened with cybersecurity certifications: large firms adopt first, client requirements trickle down to smaller regional and boutique firms within 12-24 months.

Does a small law firm need ISO 42001 certification?

No — ISO 42001 is expensive to implement and certify, and designed for organizations with dedicated compliance resources. A 10-person law firm doesn't need it. What the certification signals is what substantive elements clients will eventually expect. Those elements — AI use policy, output verification protocol, client disclosure language, data handling procedures — are within reach of any small firm and should be in place regardless of whether you pursue the formal certification. Think of ISO 42001 as the enterprise version; the four-element minimum viable governance program described above is the small-firm version.

What are the four elements every small law firm should have in its AI governance program?

The four elements that mirror ISO 42001's core substance at a small-firm scale are: (1) Approved AI tools list — what tools your firm permits for what purposes, reviewed quarterly; (2) Output verification protocol — the standard that AI-generated work product must be reviewed by a licensed attorney before delivery to a client or court; (3) Client disclosure language — standard engagement letter language disclosing AI use, as required by ABA Opinion 512 and many state bars; (4) Client data handling rules — which AI vendors have appropriate data protection agreements, and what client information may be sent to which tools. These four documents can be created in a day and satisfy most current professional responsibility requirements.

How do I communicate my firm's AI governance to clients and prospects?

The simplest communication is a one-paragraph "how we handle AI" section on your website's process or methodology page. Something like: "Our firm uses AI-assisted tools in client service delivery. Our AI governance program includes an approved tools policy, mandatory attorney review of all AI-generated work product, and client disclosure in all engagement letters. All AI use complies with ABA Formal Opinion 512 and applicable state bar guidance." This doesn't require a certification — it demonstrates that you've thought carefully about AI governance, which is what sophisticated clients are evaluating.

Which clients are most likely to ask about AI governance?

Corporate legal departments with their own AI programs, financial services and insurance clients with regulatory compliance obligations, healthcare clients with HIPAA concerns, and sophisticated procurement operations at mid-size to large companies. If a GC at your corporate client is 87% likely to be using AI internally (FTI Report, March 2026), they understand what AI governance looks like. They will notice if you don't have a policy. The clients least likely to ask are individuals and very small business owners who don't have their own AI programs.
