Thomson Reuters Says CoCounsel Now Does 'Human-Level' Work. Here's What That Claim Actually Means for Your Firm.

Published: March 14, 2026 | By: The Crossing Report | 8 min read


Summary

Thomson Reuters previewed a new version of CoCounsel that it calls "human-level" — a system that understands complex tasks, conducts its own research, and delivers autonomous work product across legal, tax, accounting, audit, risk, and compliance work. CoCounsel now has 1 million users. Before you believe the headline, three questions are worth asking. And before you reduce your review process, there are three practical steps every small firm owner should take.


What Thomson Reuters Actually Said

In February and March 2026, Thomson Reuters released two significant CoCounsel updates.

First, the milestone: CoCounsel reached 1 million users across 107 countries — three years after launch. A quarter of Fortune 1000 companies use it. And TR expanded coverage beyond law to explicitly include tax, accounting, audit, risk, and compliance professionals.

Second, the claim: TR previewed a new version of CoCounsel that it describes as delivering "human-level" work. The specific language: the system understands complex tasks, conducts its own research, and delivers autonomous work product. Accounting Today covered the announcement directly.

This is a significant marketing claim. It also raises immediate questions for every professional services firm owner who reads it.


Three Questions Before You Believe the Headline

Question 1: Human-level on what benchmark?

Every major AI vendor has made some version of the "human-level" claim in the past 12 months. Harvey reported that GPT-5.4 scored 91% on BigLaw Bench for document-heavy legal work. Anthropic, Google, and OpenAI publish benchmark results quarterly.

"Human-level" is always a benchmark claim. It means the system performed at human-level accuracy on a specific, curated test set of tasks. It does not mean:

  • The tool is error-free across all tasks
  • The performance holds for your jurisdiction, practice area, or document type
  • The benchmark tasks resemble the actual complexity of your client work
  • 100% accuracy — Harvey's 91% on BigLaw Bench means 9% error rate on the hardest document-heavy legal tasks

When a vendor says "human-level," the next question is always: which humans, doing which tasks, measured how? Until TR publishes the methodology, "human-level" is a headline, not a specification.

Question 2: Which tasks is TR claiming this for?

CoCounsel is built on Thomson Reuters' verified legal and accounting databases — Westlaw, Practical Law, CS Professional Suite. Its strengths are in research-intensive tasks: legal research, tax authority lookups, contract review, due diligence document triage.

"Human-level" performance on Westlaw-backed legal research is plausible and increasingly verifiable. "Human-level" performance on the judgment call about how to advise a specific client on a specific matter — the synthesis of legal analysis with a client's risk tolerance, business context, and relationship history — is a different claim entirely, and one no AI vendor is making credibly.

The tasks CoCounsel does well are the tasks that have historically been performed by junior associates and staff accountants at high volume and moderate leverage. The tasks that remain distinctly human are the ones that require synthesis, judgment, client knowledge, and professional accountability.

This distinction matters more than the headline.

Question 3: Does this change your supervision obligation?

No.

ABA Formal Opinion 512 is unambiguous: lawyers must maintain competent oversight of AI-generated output regardless of the AI tool's claimed capabilities. The professional standard is not "did the AI vendor say it was human-level?" The standard is "did the attorney exercise appropriate professional judgment in reviewing the output?"

For accountants, AICPA quality management standards apply the same logic: you remain responsible for the accuracy of work product delivered to clients, whether AI assisted in producing it or not.

A vendor's "human-level" claim is a marketing statement, not a defense in a malpractice proceeding. Your review obligation is not reduced because CoCounsel — or any other tool — claims human-level performance.


Three Practical Implications for Your Firm

Implication 1: Pricing pressure on routine work is real and accelerating

If CoCounsel can deliver human-level due diligence document review, contract review, and legal research at scale, those tasks become faster and cheaper at the large firms deploying it.

Your clients know this. The Apperio/BestLawFirms March 2026 survey found that 61% of general counsel plan to pressure law firms to reprice when AI is doing the work. Only 6% of law firms have proactively offered alternative pricing models.

For accounting clients, the signal is equivalent: if an AI can do tax research and audit documentation at human-level accuracy, why should a client pay hourly rates for that work?

The practical implication is not panic. It's clarity about where your value actually lives:

  • Commoditized work (standard research, document review, routine tax prep): expect pricing pressure. AI is doing this faster and cheaper. Competing on volume and price against large firms with enterprise AI tools is not a winning strategy for a 10-person firm.

  • Judgment work (complex matters, client-specific advice, novel situations, relationships, accountability): this is where small firms compete, and where AI — at any claimed capability level — cannot substitute. The attorney's judgment, professional accountability, and client knowledge are not replicable.

The "human-level" announcement accelerates a sorting that was already happening. It's better to know now.

Implication 2: If you use CoCounsel, your malpractice carrier may be interested

Thomson Reuters' positioning of CoCounsel as "human-level" creates a specific professional liability question for any firm that relies on it.

Here is the scenario to think through: your firm uses CoCounsel for legal research. An associate relies on CoCounsel output without full independent review — because, after all, TR says it's human-level. The output contains an error. The client suffers harm.

Your malpractice defense now has to address: why did the attorney reduce their review process based on a vendor marketing claim? The answer "the vendor said it was human-level" is not a satisfying response to a bar complaint or a malpractice suit.

This week: contact your malpractice carrier and ask two questions.

  1. Does your current policy cover claims arising from errors in AI-generated work product that was reviewed and used by the firm?
  2. Is there anything your carrier wants to know about how your firm uses AI tools like CoCounsel?

Get those answers in writing before a claim is filed.

Implication 3: Know the difference between authoritative AI and operational AI — and which you actually have

CoCounsel is what the industry now calls "authoritative AI": purpose-built on verified databases for tasks where accuracy is legally and professionally consequential. It's designed for legal research, tax authority lookups, and document work where a wrong answer can result in sanctions, malpractice, or regulatory violation.

Claude Cowork, Microsoft Copilot, and similar general-purpose AI tools are "operational AI": built for drafting, summarization, client communication, workflow automation, and document processing. These tools are not drawing on verified legal or tax databases — they're applying powerful language models to whatever you give them.

Most small professional services firms need both categories. And most small firms underinvest in at least one.

If you're a law firm on Westlaw Precision, CoCounsel may already be bundled into your subscription — and you may not have turned it on. If you're an accounting firm on CS Professional Suite, TR's AI features may be available to you already.

And if you don't have authoritative AI for research-intensive work, you're relying on general-purpose AI for tasks where the verified-database tools perform significantly better. That's a risk gap worth closing.

Related Reading: Why Generic AI Tools Don't Work for Your Firm — And What Purpose-Built Tools Are Getting Right


What "Human-Level" AI Means for Your Business — Honestly

Here is the version that doesn't come with a press release attached.

Large professional services firms — the ones with innovation teams, enterprise software contracts, and dedicated AI deployment budgets — are integrating CoCounsel and tools like it at scale. They are getting faster and cheaper on routine, high-volume work. On standard due diligence, research-heavy matters, and document review, the efficiency gap between a large firm with enterprise AI and a 10-person firm without it is widening.

That's real. It's not catastrophizing — it's what the data shows.

What the data also shows is that the firms capturing new clients in 2026 are not winning on AI adoption alone. They're winning on the thing AI can't replace: the owner who picks up the phone, the senior partner who knows the client's situation, the accountant who catches the thing the model missed because she knows how this particular industry behaves.

The strategic move for a small professional services firm in 2026 is not to try to match a large firm's AI deployment. It's to use AI to get the commodity work done faster — so that the hours you free up go into the judgment work where you're irreplaceable.

"Human-level" AI makes that strategy more urgent. It also makes it more viable.


The Action Item This Week

Three things to do before you file this away:

  1. Check whether CoCounsel is already in your subscription. If you're on Westlaw Precision or CS Professional Suite, you may have AI features you haven't activated. Log in and check. If you're not sure, call your TR account rep.

  2. Review your review process. Whatever AI tools your firm uses, write down the current review step: who reviews AI-generated output, what they're checking for, and how that review is documented. If the answer is "we trust the tool" — tighten that up before a vendor's benchmark claim creates a supervision problem.

  3. Have the pricing conversation internally. What work does your firm do that AI can now do at human-level speed? Where is pricing pressure going to come from in the next 12 months? Having this conversation with your partners or leadership team now — before a client raises it — puts you in the advisory role, not the defensive one.

The firms that will navigate the next 18 months well are not the ones that wait to see how "human-level" AI affects them. They're the ones that decide — clearly, deliberately — which work they're going to compete on.


The Crossing Report covers the transition to AI for professional services firm owners — accounting, law, consulting, staffing, and marketing agencies. Subscribe here for weekly insights on what's changing and exactly what to do next.


Frequently Asked Questions

What did Thomson Reuters claim about CoCounsel's capabilities?

Thomson Reuters previewed a new version of CoCounsel described as delivering "human-level" work across legal tasks, tax, due diligence, regulatory compliance, and contract review. TR defines "human-level" as: the system understands complex tasks, conducts its own research, and delivers autonomous work product. CoCounsel has now reached 1 million users across 107 countries, including legal, tax, accounting, audit, risk, and compliance professionals.

What does "human-level" AI actually mean in practice?

"Human-level" is a benchmark claim, not an absolute one. It means CoCounsel performed at human-level accuracy on specific test tasks — typically a curated benchmark set. It does not mean the tool is error-free, that all task types are covered, or that the performance holds across every jurisdiction, practice area, or document type your firm works with. Every major AI vendor has made some version of this claim in 2025-2026. Harvey's GPT-5.4 integration scored 91% on BigLaw Bench for document-heavy legal work. 91% is not 100% — the review obligation remains.

What is the malpractice exposure if a law firm relies on "human-level" AI output without adequate review?

If you reduce your review process because an AI vendor claimed human-level performance, and a client suffers harm as a result of an undetected error in AI-generated work, you have a supervision liability problem. ABA Formal Opinion 512 requires competent oversight of AI-generated output — regardless of the AI vendor's benchmark claims. The standard is not "did the tool say it was human-level?" The standard is "did the attorney exercise appropriate professional judgment in reviewing the output?" Malpractice insurers are watching this space carefully.

Do accounting firms face similar risks when relying on AI output claims?

Yes. Thomson Reuters explicitly extended CoCounsel's coverage to tax, accounting, audit, risk, and compliance professionals — not just lawyers. Accounting Today covered the announcement. For a CPA firm using AI for tax research, audit documentation, or financial analysis, the professional standard is the same as in law: you remain responsible for the accuracy of the work product you deliver to clients, regardless of whether AI assisted in producing it. AICPA professional standards on quality management apply. The AI vendor's benchmark claim is not a defense.

Should a small law firm or accounting firm be scared of the Thomson Reuters CoCounsel announcement?

No — but you should be clear-eyed about what it means. Large firms deploying CoCounsel at scale gain an efficiency advantage on routine, high-volume work: contract review, due diligence, research summaries, tax research. That work gets done faster and cheaper. For a small firm, this matters in two ways: (1) on commoditized work, you will face pricing pressure from clients who know AI is doing it; (2) on complex, judgment-intensive work — the kind that requires your expertise, your relationships, and your knowledge of a client's specific situation — AI at any level cannot replace what you deliver. The strategic response is to lean into the second category.

How is CoCounsel different from Claude Cowork, Copilot, or other AI tools?

CoCounsel is purpose-built for legal and accounting research on TR's verified databases — it's what the industry calls "authoritative AI." It draws on Westlaw, Practical Law, CS Professional Suite, and verified legal/tax content. It's designed for tasks where a wrong citation can result in sanctions or malpractice. Claude Cowork and Microsoft Copilot are general-purpose AI tools suited for drafting, summarization, document processing, client communication, and workflow automation. Most small firms need both categories. The distinction matters because the right tool depends on the task: use authoritative AI for legal research and tax authority lookups; use operational AI for drafting, summarization, and document workflows.
