Two Types of Legal AI. Which One Should Your Law Firm Actually Buy?

Published March 15, 2026 · By The Crossing Report · 8 min read


Summary

Legal AI is splitting in two, and choosing between the two types is now a compliance decision, not just a features comparison. Fortune's landmark March 2026 analysis documented the emerging schism: foundation model AI (Claude, ChatGPT, Gemini) vs. purpose-built legal AI (Harvey, CoCounsel, August). The tools look similar from the outside — both generate text, both assist with legal drafting, both are accessed through a browser. They are not similar. Here's the decision framework for a solo or small law firm deciding what to buy in 2026.


The Split: What It Is and Why It Happened

The legal AI market in 2023–2024 was largely unified around a single approach: take a large language model, add a legal-specific wrapper (training data, interface, access controls), and sell it to law firms.

The market is now bifurcating.

Foundation model AI refers to the general-purpose LLMs that power tools like ChatGPT (OpenAI), Claude (Anthropic), Gemini (Google), and Copilot (Microsoft). These models are trained on internet-scale data, including legal documents, and can perform legal tasks with useful quality. They are cheap — often $20–30/user/month for business tiers. They are fast. They are general: one tool handles drafting, research, client summaries, administrative tasks, and everything else.

Purpose-built legal AI refers to tools designed from the ground up for law firm use, often trained specifically on legal corpora (Westlaw, LexisNexis, contract databases), and built with features that address the specific obligations of law firm practice: citation verification, ethical walls, data confidentiality, professional responsibility awareness. CoCounsel from Thomson Reuters, Harvey AI, and August AI are the leading examples at different price points.

The schism sharpened in February 2026 when Anthropic launched a Claude legal plugin — essentially bringing a frontier foundation model into legal workflows at foundation model pricing. Legal technology stocks dropped immediately. Investors understood the threat: if a general-purpose model that costs $30/user/month can approximate the legal reasoning of a purpose-built tool that costs $200/user/month, the pricing premium for purpose-built tools comes under pressure.

But the investor reaction overread the immediate threat. For a small law firm deciding what to buy this quarter, the pricing comparison misses the compliance question.


The Compliance Gap Between the Two Types

The difference between foundation models and purpose-built legal AI is not primarily quality. The current generation of foundation models (Claude Sonnet 4.6, GPT-5.4) produces high-quality legal text. The difference is reliability of compliance-critical features.

Citation Verification

This is the most consequential gap.

Purpose-built legal AI tools — CoCounsel, LexisNexis+ AI with Protégé, Westlaw Edge, Harvey AI — are connected to their underlying legal databases. When they cite a case, they are citing from a verified legal corpus. The tools are designed to catch and flag hallucinated citations before they reach the attorney.

General-purpose AI tools are not connected to a legal database. They generate text that sounds authoritative, including citations to cases that do not exist. This is not a bug being fixed — it is a structural property of how language models work. The model predicts the most likely next token; sometimes the most likely citation is one that doesn't exist.

The practical consequence is now in federal and state appellate court opinions: attorneys who filed documents containing AI-generated citations to nonexistent cases have been sanctioned. The 4th Circuit's In re Nwaubani and the California Court of Appeal's March 2026 first state-level AI sanctions opinion both document the same failure. The sanction is not for using AI — it's for filing unverified AI output.

If you use a general-purpose AI tool for legal research or drafting, manual citation verification against Westlaw, LexisNexis, or Fastcase is required before any document leaves the firm. This is the minimum compliance floor, regardless of which AI tool you use.

Data Confidentiality

Standard consumer and business tiers of general-purpose AI tools have data handling policies that may allow inputs to be used for model training. Pasting client facts, matter details, or confidential communications into a standard ChatGPT or Claude session creates confidentiality risk under attorney-client privilege rules and ABA Formal Opinion 512.

Enterprise tiers of these tools — ChatGPT Enterprise, Claude Enterprise — offer privacy protections that address this concern. But the enterprise tier costs significantly more and typically requires a security review and minimum seat commitment that small firms may not meet.

Purpose-built legal AI tools are designed with law firm data environments in mind. Client data stays within the firm's provisioned environment. Ethical walls and conflicts infrastructure are built into the platform architecture. These are not add-ons — they are the core design rationale.

Professional Responsibility Awareness

ABA Formal Opinion 512 establishes that competent AI use requires attorneys to understand the technology they are using — including its limitations, its failure modes, and the supervision obligations it creates. Foundation models have no awareness of jurisdiction-specific bar rules, disclosure requirements, or the professional responsibility context of the work being generated.

Purpose-built legal AI tools — particularly CoCounsel and August — are designed with bar ethics guidance incorporated. They surface disclosure requirements, flag potential privilege issues, and generate output with the professional responsibility context built in.


The Decision Framework: What to Buy at What Firm Size

The right answer depends on what you are doing with AI and what your firm's risk tolerance is.

For a Solo or 2-3 Attorney Firm

Primary use cases: Drafting assistance, client communication templates, internal task automation, matter summarization.

Recommendation: Start with August AI (purpose-built, small-firm pricing, self-serve free trial, no sales call required) for legal drafting and matter management. Use the enterprise tier of a foundation model (Claude Enterprise or ChatGPT Enterprise) for internal administrative tasks — scheduling, emails, document summarization — where client confidentiality is not implicated.

Do not use: Free or standard business tiers of foundation models for any work involving client-specific facts. The data handling risk is not worth the cost savings.

For a 4-15 Attorney Firm

Primary use cases: Legal research with citations, contract review, client advisory work, compliance drafting.

Recommendation: Add CoCounsel (Thomson Reuters, $225/user/month, no seat minimum) for legal research tasks where citation accuracy matters. CoCounsel's citation verification is the specific feature that protects you from the sanctions pattern documented in March 2026. Continue using August or a foundation model enterprise tier for drafting and administrative work.

Investment rationale: One sanctions event — a public reprimand, a malpractice claim, a client complaint arising from an AI-generated error — costs more than a year's CoCounsel subscription (roughly $2,700 per attorney at $225/month).

For a 15+ Attorney Firm

Evaluate: Harvey AI for complex transactional and litigation workflow integration (enterprise pricing, enterprise deployment). Intapp Celeste (limited release H1 2026) for firms already on Intapp practice management — brings governed agentic AI into your existing platform.

Maintain: A clear internal AI policy (what tools are approved, what oversight is required, what disclosure obligations exist) regardless of which tools you deploy.


The Market Is Still Moving: What to Watch

The legal AI market will look different in 12 months than it does today.

Foundation models are improving rapidly. Claude Sonnet 4.6 scored highly on Harvey's BigLaw Bench for document-heavy legal work. OpenAI's GPT-5.4 showed strong performance on legal reasoning tasks. As these models improve, some of the gap between foundation models and purpose-built tools in raw legal reasoning quality will narrow.

Purpose-built tools are integrating with workflows. Harvey's partnership with A&O Shearman for agentic multi-step legal workflows (antitrust filing analysis, fund formation, loan review) signals where enterprise legal AI is heading. The value of purpose-built tools is shifting from "better legal output" toward "AI that executes multi-step legal workflows within a governed framework."

Pricing will compress. Foundation model pricing and competition among legal AI vendors will put downward pressure on per-seat costs. The tools that cost $200/month today may cost $50/month in 18 months. This is not a reason to wait — it's a reason to build the workflow and compliance infrastructure now, while competitors who are waiting fall further behind.


The One-Sentence Decision Rule

If the output will be filed with a court, sent to opposing counsel, or delivered to a client as legal advice: use purpose-built legal AI with citation verification, or verify every citation manually against an authoritative legal database.

For everything else: foundation models with enterprise data handling are a reasonable, cost-effective choice — with appropriate supervision.

The split in legal AI is real. The compliance obligations that go with it are settled law. The choice of which type of tool to deploy is, at its core, a choice about where your firm's professional responsibility obligations live.


The Crossing Report covers AI adoption for professional services firm owners. Subscribe for weekly intelligence on what AI means for your practice.


Frequently Asked Questions

What is the difference between foundation model AI and purpose-built legal AI?

Foundation model AI refers to general-purpose large language models — Claude, ChatGPT, Gemini, Copilot — accessed through a standard interface without legal-specific training or compliance features. Purpose-built legal AI refers to tools specifically designed for law firm use, trained on legal corpora (case law, statutes, contracts), and built with professional responsibility features like citation verification, ethical walls, data confidentiality protections, and bar compliance guidance. CoCounsel (Thomson Reuters), Harvey AI, and August AI are examples of purpose-built legal AI. The choice between them is not just a features comparison — it is a compliance and liability choice.

Is it acceptable for small law firms to use ChatGPT or Claude for legal work?

General-purpose AI tools can be used for certain categories of legal work with appropriate safeguards, but they carry specific risks that purpose-built tools are designed to mitigate. The key risks: (1) No citation verification — general-purpose AI will generate citations to cases that do not exist; every citation must be manually verified before use in any filing; (2) Data confidentiality — standard consumer tiers of general-purpose AI may use inputs to train future models, creating confidentiality risk for client communications; (3) No professional responsibility grounding — general-purpose AI does not know your jurisdiction's ethics rules, disclosure obligations, or fee agreement requirements. For drafting assistance and internal tasks, general-purpose tools with appropriate oversight are defensible. For research citations and client-facing work, purpose-built tools or rigorous verification protocols are required.

What does purpose-built legal AI actually do that general-purpose AI doesn't?

The core differences: (1) Legal corpora training — purpose-built tools are trained on case law, statutes, contracts, and legal secondary sources, producing output grounded in actual legal authority rather than general language patterns; (2) Citation verification — tools like CoCounsel and Lexis+ AI check their output against their underlying legal databases and flag or correct citations that can't be verified; (3) Ethical walls and data isolation — enterprise legal AI tools keep client data within the firm's data environment and apply conflicts-checking infrastructure; (4) Professional responsibility awareness — tools built for law firms incorporate guidance from bar opinions (including ABA Formal Opinion 512) about AI use, disclosure requirements, and supervision obligations.

How much does purpose-built legal AI cost for a small law firm?

As of March 2026: CoCounsel from Thomson Reuters starts at approximately $225/user/month with no seat minimums, making it accessible to solo and small firm attorneys. August AI offers self-serve access with a free trial — one of the few purpose-built legal AI tools designed explicitly for small firms without a sales call requirement. Harvey AI is currently priced for larger firm deployments. LexisNexis+ AI with Protégé is bundled into Lexis subscription tiers. For a 3-10 person firm, August AI (for drafting and matter management) and CoCounsel (for research with citation verification) cover the most common use cases.

What happened to legal tech stocks after Anthropic entered the legal market?

When Anthropic launched its Claude legal plugin in February 2026 — bringing a general-purpose foundation model directly into legal workflows with lower per-user cost than most purpose-built tools — legal technology stocks dropped significantly. Investors interpreted the move as a threat to purpose-built legal AI tools that charged premium prices for features that a frontier foundation model might approximate at lower cost. The market reaction reflected real pressure: if foundation models improve rapidly enough on citation reliability, data protection, and legal reasoning, some of the pricing premium for purpose-built tools will compress. The question for small firm owners is not which side wins — it's which type of tool they need today.
