Legal AI Is Now Split in Two — Which Side Does Your Firm Belong On?

Published March 17, 2026 · By The Crossing Report

Fortune published "Legal AI is splitting in two — and most people miss the difference" in March 2026. It documented something small law firm owners need to understand before they add any more AI tools to their practice: legal AI has divided into two distinct categories, and using the wrong one for the wrong task is a malpractice and bar complaint risk.

This isn't about which AI is smarter. It's about which risk profile matches which workflow. And for a small firm that doesn't have an IT department or an innovation committee — just clients and billable work — getting this right matters more than it does at a BigLaw firm with dedicated legal technology staff.


The Split: What Fortune's Analysis Found

Side 1: Accuracy-grounded AI. Tools like CoCounsel (Thomson Reuters), built on Westlaw and Practical Law. These tools are engineered for legal defensibility. They draw on verified legal databases with controlled, cited sources. When CoCounsel produces a case citation, it's retrieved from Westlaw — not hallucinated from the web. The design principle is: output that can be defended in front of a bar auditor or filed in a court proceeding.

Side 2: General-purpose AI. Tools like Microsoft Copilot, Claude.ai, and ChatGPT. These are built for speed, versatility, and breadth — not legal precision. They're trained on the general web and produce statistically likely responses. They are excellent at drafting, summarizing, organizing, and generating first drafts of almost anything. They are not grounded in verified legal databases and should not be treated as authoritative sources for legal research.

The mistake most small firm owners are making — and what LawFuel called "the split that will reshape every law firm's tech stack" — is treating these two categories as interchangeable. They're not.

Using a general-purpose tool for legal research is not just a quality issue. It's a supervision issue. ABA Formal Opinion 512 requires competent oversight of AI output — and "competent" means you must understand what kind of AI you're using well enough to know whether its output is reliable for that task. A hallucinated citation from Copilot reviewed by an attorney who doesn't know Copilot can hallucinate is an ABA 512 problem, not a vendor problem.


The Two-Tier Framework: Task by Task

The practical framework is not "pick accuracy-grounded or general-purpose." It's: assign each workflow in your practice to the appropriate tier.

Tier 1 — Accuracy-Grounded AI (legal research, compliance, citation work)

Use for:

  • Case law research and authority lookups
  • Statutory and regulatory interpretation
  • Contract clause verification and flagging missing provisions
  • Due diligence document triage for accuracy-sensitive facts
  • Citation verification before any document is filed or delivered to opposing counsel
  • Jurisdiction-specific compliance research (NH SB 640, Colorado AI Act applicability, bar ethics questions)

Tools: CoCounsel, Westlaw AI Research, vLex Vincent AI, Spellbook (for contract drafting and review in Word)

Why it costs more: These tools are built on curated, verified legal databases maintained by legal information companies. The cost premium is the cost of accuracy infrastructure. For a 3-person firm doing estate work and divorce, you may be able to make do with Westlaw access alone and supplement with Copilot for administrative work. For a 5-person firm doing M&A, regulatory filings, or transactional work, skimping on accuracy-grounded tools is a false economy.


Tier 2 — General-Purpose AI (drafting, communication, workflows)

Use for:

  • Drafting client emails and follow-up communications
  • Generating first-draft engagement letters and client update memos
  • Summarizing meeting notes and deposition recordings
  • Creating internal checklists and matter timelines
  • Drafting non-legal documents (proposals, internal memos, staff communications)
  • Organizing and categorizing documents by topic or date
  • Building intake forms and FAQ documents for clients

Tools: Microsoft Copilot ($21/user/month as M365 add-on), Claude.ai ($20/month for Pro), ChatGPT

Why the cost difference matters: Copilot runs inside Microsoft 365 — Word, Outlook, Teams — with no new interface. If your firm already uses M365, Copilot is the lowest-friction way to add general-purpose AI to your practice. At $21/user/month, a 5-person firm spends $105/month and gains AI drafting across every M365 app they already use.

The point is not that Tier 2 tools are "second class." For communication-heavy workflows, general-purpose AI is exactly right — faster, cheaper, and adequate for the risk level. The point is that Tier 2 tools are not appropriate for Tier 1 tasks.


The Budget Math for a Small Firm

Here's what a realistic two-tier AI stack costs for a 5-person firm:

Tier 1 (accuracy-grounded):

  • Westlaw Essentials: ~$100-$300/month (varies by usage tier; contact TR for small firm pricing)
  • CoCounsel access: bundled with some Westlaw plans or available as add-on
  • Spellbook (Word plugin for contract drafting/review): free trial, contact for pricing

Tier 2 (general-purpose):

  • Microsoft Copilot: $21/user/month × 5 = $105/month (requires M365 subscription)
  • Or Claude.ai Pro: $20/month per user

Total monthly investment for both tiers: roughly $250-$500/month for a 5-person firm, depending on Westlaw tier.

For context: per ABA time-tracking data, if your firm is forgetting to bill 5 hours per week, you're losing $78,000+ annually at $300/hour. A full year of the two-tier AI stack costs less than a single month of that leakage.

The question isn't whether you can afford both tiers. The question is whether you can afford to run one tier for tasks that require the other.
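The budget math above can be checked on the back of an envelope. A minimal sketch, using this article's estimated figures (not vendor quotes; actual Westlaw pricing varies by plan):

```python
# Back-of-the-envelope check of the two-tier stack cost vs. billing leakage.
# All figures are this article's estimates, not vendor quotes.

USERS = 5
copilot_monthly = 21 * USERS          # Tier 2: $21/user/month -> $105/month firm-wide
stack_low, stack_high = 250, 500      # combined two-tier range, per month

# Billing leakage: 5 unbilled hours/week at $300/hour (ABA time-tracking data)
annual_leakage = 5 * 52 * 300         # $78,000/year
monthly_leakage = annual_leakage / 12 # $6,500/month

# Even at the high end, a full YEAR of the stack ($6,000)
# costs less than ONE MONTH of leakage ($6,500).
assert stack_high * 12 < monthly_leakage
```

The comparison is deliberately lopsided: twelve months of tooling against one month of leakage still favors the tooling at the article's estimates.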


How to Build Your Own Two-Tier Assignment

This is the exercise the Fortune article is implicitly recommending. For every major workflow in your practice:

  1. List the task. Example: contract review before client signs.
  2. Identify the risk level. Is an error here a billing dispute (medium) or a malpractice claim (high)?
  3. Assign the tier. High-risk, citation-sensitive tasks → Tier 1. Communication, drafting, workflow → Tier 2.
  4. Document the assignment. Write it into your AI use policy so every person at your firm makes the same call.
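The four steps above reduce to a simple decision rule that a firm could write directly into its AI use policy. A minimal sketch; the task names, risk labels, and `citation_sensitive` flag are illustrative, not a canonical taxonomy:

```python
# Sketch of the tier-assignment rule: high-risk or citation-sensitive
# work goes to Tier 1 (accuracy-grounded AI); everything else to Tier 2.
# Task names and labels here are illustrative examples only.

def assign_tier(risk: str, citation_sensitive: bool) -> int:
    """risk: 'high' (malpractice/sanctions exposure) or 'medium' (e.g. billing dispute)."""
    return 1 if risk == "high" or citation_sensitive else 2

# Step 4: document the assignments so everyone at the firm makes the same call.
policy = {
    "contract review before client signs": assign_tier("high", True),    # Tier 1
    "case law research":                   assign_tier("high", True),    # Tier 1
    "client email drafting":               assign_tier("medium", False), # Tier 2
    "meeting note summaries":              assign_tier("medium", False), # Tier 2
}
```

The design point is that the rule errs conservative: citation sensitivity alone is enough to push a task to Tier 1, even when the general risk feels moderate.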

Three examples by firm type:

3-person divorce and estate firm:

  • Client email drafting → Tier 2 (Copilot or Claude.ai)
  • Will and trust drafting from templates → Tier 2 for first draft; attorney review required
  • Jurisdiction-specific inheritance law research → Tier 1 (Westlaw)
  • Court filing research → Tier 1 (Westlaw, CoCounsel if available)

5-person transactional firm (M&A, commercial contracts):

  • Internal memos and client updates → Tier 2
  • First-draft purchase agreements and NDAs → Tier 2 for structure; Tier 1 (Spellbook or CoCounsel) for clause verification and risk flagging
  • Due diligence research on regulatory compliance → Tier 1
  • Regulatory filing research and authority lookup → Tier 1

2-person immigration firm:

  • Client intake summaries and status update emails → Tier 2
  • Case management notes → Tier 2
  • Jurisdiction-specific visa eligibility research → Tier 1
  • USCIS policy lookup and recent decision review → Tier 1



The Bar Compliance Angle

The reason the split matters beyond cost: your bar's AI oversight requirements apply differently to each tier.

ABA Formal Opinion 512 and most state bar guidance on AI require that attorneys maintain competent oversight of AI-generated output. That obligation is not discharged by saying "I used CoCounsel" — but it is meaningfully harder to discharge when you used a general-purpose tool for a research-intensive task.

The courts are watching. Attorneys in New York were sanctioned in 2023 (Mata v. Avianca) for submitting a brief containing AI-fabricated citations. The lesson wasn't "don't use AI." The lesson was: the supervision obligation requires understanding what the AI can and cannot do, and assigning it only to tasks it can perform reliably. That is exactly the two-tier framework.

If you can explain to your bar association which tasks use Tier 1 tools and which use Tier 2 — and why — you have the supervision posture the opinion is asking for.


The Action This Week

Step 1: Write down the five most time-consuming tasks in your practice.

Step 2: Assign each to Tier 1 (accuracy-grounded, citation risk, defensibility required) or Tier 2 (drafting, communication, workflow, first-draft adequate).

Step 3: For Tier 1 tasks: confirm you have Westlaw access (or equivalent). If you don't, call Thomson Reuters this week for small firm pricing. For Tier 2 tasks: if your firm uses Microsoft 365, activate Copilot this week (microsoft.com/copilot/business, $21/user/month).

Step 4: Write a one-page AI use policy that specifies which tools are approved for which task categories. Share it with every person at your firm who touches client work.

That's the whole framework. You don't need an IT project. You need a list, a tier assignment, and a policy document. The split Fortune documented is already here. The firms that don't acknowledge it are the ones most likely to use the wrong tool for the wrong task — not out of recklessness, but because no one drew the line.

Draw the line.


The Crossing Report covers AI adoption for professional services firms every Monday. Subscribe here.

Frequently Asked Questions

What is the difference between accuracy-grounded AI and general-purpose AI for law firms?

Accuracy-grounded legal AI — tools like CoCounsel (Thomson Reuters), built on Westlaw and Practical Law — is specifically designed for legal research and compliance tasks where a wrong citation can result in sanctions or malpractice. It draws on verified legal databases and is engineered for defensibility under bar audits and court scrutiny. General-purpose AI — tools like Microsoft Copilot, Claude.ai, and ChatGPT — is built for speed and versatility: drafting, summarization, client communication, document review, and workflow automation. It is not grounded in verified legal databases and is not engineered with malpractice defensibility as a core design principle.

Which legal AI tool should a small law firm use?

Most small law firms need both tiers. Use accuracy-grounded AI (CoCounsel, Westlaw AI Research) for legal research, authority lookups, contract review, and any work that will be cited to a client or court. Use general-purpose AI (Microsoft Copilot, Claude.ai) for drafting, client communications, meeting summaries, document organization, and administrative workflows. The two-tier stack is not about picking a winner — it's about assigning the right tool to the right task.

Is Microsoft Copilot safe for legal work?

Microsoft Copilot is safe and useful for many legal workflows: drafting follow-up emails, organizing meeting notes, generating first-draft engagement letters, and creating document templates. It is not appropriate as the sole tool for legal research, case law citation, regulatory compliance lookups, or any output that goes to a client without independent professional review. Copilot does not draw on verified legal databases and should never be used as a replacement for Westlaw, Lexis, or CoCounsel on research-intensive tasks.

How much does CoCounsel cost for a small law firm?

Thomson Reuters has not publicly disclosed CoCounsel's standalone pricing. CoCounsel is available as part of Thomson Reuters' Westlaw subscription packages. Firms already on Westlaw may have CoCounsel access included or available as an add-on — contact your TR account rep. For comparison, Microsoft Copilot is available for $21/user/month as an add-on to any Microsoft 365 subscription. The cost difference is significant; the question is whether the high-stakes work your firm does justifies the accuracy-grounded premium.

What legal tasks should always use accuracy-grounded AI?

The tasks where an error can result in bar complaints, court sanctions, or malpractice claims: case law research, statutory interpretation, regulatory compliance lookups, contract clause verification, due diligence fact-checking, and citation verification in any document filed with a court or delivered to an opposing party. These tasks require AI that draws on verified, up-to-date legal databases — not AI trained on the general web. The rule of thumb: if the output gets a citation, use accuracy-grounded tools.

What legal tasks can use general-purpose AI?

Administrative and communication-heavy tasks where speed matters more than legal precision: drafting client emails, summarizing meeting notes, generating first-draft engagement letters, creating checklists, organizing matter files, reviewing non-legal documents for tone and completeness, and producing internal memos. For these tasks, general-purpose AI (Copilot, Claude.ai) is faster, cheaper, and adequate for the risk level involved — provided a professional reviews the output before it reaches a client.
