Legalweek Named It 'AI Slop.' Here's the 4-Check Protocol That Keeps Your Firm Out of It.
Published March 13, 2026 · By The Crossing Report
At Legalweek 2026 in New York — the industry's biggest legal technology conference, March 9-12 — the opening panels named a problem that's been building for two years and finally has a vocabulary: AI Slop.
The term refers to undifferentiated, low-quality AI output that's making its way into legal work product — briefs, contracts, client memos, court filings — without adequate review. It looks professional. It uses correct legal vocabulary. It's organized and formatted properly. And it gets things wrong: facts mischaracterized, citations fabricated, arguments made that don't match the actual case, clauses that don't reflect the deal.
"AI Slop" is not an insult to AI tools. It's an indictment of the workflow that produces it: AI fast, review optional.
As a small law firm owner, this is both a threat and an opportunity.
What AI Slop Is Doing in Practice
Court sanctions for AI-generated hallucinated citations are no longer rare. The 4th Circuit sanctioned an attorney in March 2026 for unverified AI filings — the most recent in a growing line of federal appellate reprimands. California state courts issued the first state appellate AI sanctions opinion in March 2026. Courts are done signaling and starting to rule.
But the bigger risk for most small firms isn't court sanctions — it's clients.
At Legalweek, corporate clients and general counsel made it plain: they're starting to recognize AI slop when they see it. The tell is sameness. When the brief drafted for a slip-and-fall case sounds like the brief drafted for a contract dispute — similar structure, similar hedging, similar "on the other hand" construction — a sophisticated client or opposing counsel notices.
The firms losing business aren't the ones that got sanctioned. They're the ones that produced work product that felt generic. The client couldn't articulate why. They just didn't renew.
The 4-Check Protocol
This is the minimum quality gate before any AI-generated legal work product leaves your firm. Add it to your workflow — however you track tasks — as a checklist on every AI-assisted matter.
Check 1: Citation Verification
Confirm that every case or statute the AI cited (1) exists, (2) stands for what the brief says it stands for, and (3) has not been overruled or superseded.
AI tools hallucinate citations with confidence. The citation format is usually correct. The case name sounds real. The legal principle attributed to it is often plausible. That's what makes it dangerous — you can't tell from reading it that the case doesn't exist.
How to verify: search the citation in Westlaw, Lexis, or Fastcase. Read the relevant passage, not just the headnote. If the AI cited a case for a specific proposition, confirm the court actually said that.
Time: 10-20 minutes for a typical motion. Non-negotiable.
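Before the manual lookup, it can help to pull every citation-like string out of the draft so none gets skipped. The sketch below is a simplified illustration, not legal software: the regex is a hypothetical approximation of a few federal reporter formats, and real citation syntax is far more varied. Each extracted string still has to be verified by hand in Westlaw, Lexis, or Fastcase.

```python
import re

# Simplified, hypothetical pattern covering a few federal reporter formats
# (e.g., "477 U.S. 317", "912 F.3d 104", "150 F. Supp. 2d 12"). Real-world
# citation formats are far more varied; this is a checklist aid, not a parser.
CITATION_PATTERN = re.compile(
    r"\b\d{1,4}\s+(?:U\.S\.|S\.\s?Ct\.|F\.\s?Supp\.(?:\s?2d|\s?3d)?|F\.(?:2d|3d|4th)?)\s+\d{1,4}\b"
)

def extract_citations(draft_text: str) -> list[str]:
    """Return the unique citation-like strings in the draft, in order of appearance."""
    seen: list[str] = []
    for match in CITATION_PATTERN.finditer(draft_text):
        cite = match.group(0)
        if cite not in seen:
            seen.append(cite)
    return seen

draft = (
    "Summary judgment is proper, Celotex Corp. v. Catrett, 477 U.S. 317 (1986), "
    "and the movant bears the initial burden. See also 912 F.3d 104."
)
print(extract_citations(draft))  # ['477 U.S. 317', '912 F.3d 104']
```

Each string this surfaces goes on the verification list; anything the pattern misses is exactly why the read-through in Check 1 still happens.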
Check 2: Facts Alignment
Compare the AI's version of your client's facts against your actual file.
AI trained on legal text knows what legal briefs usually say. It will fill in facts that are plausible for the pattern — sometimes correctly, sometimes from generic legal language that sounds case-specific but isn't.
How to verify: read the AI draft's factual recitation paragraph by paragraph against your actual file. Does every specific fact trace to a document or note in the matter? If the AI described the sequence of events and you find yourself assuming it's correct without checking — that's the risk.
Check 3: Jurisdiction Fit
The legal standard cited, the procedural requirements referenced, and the filing format used must match the actual court or jurisdiction — not a generic version.
AI is trained on legal text from everywhere. When it writes a summary judgment standard, it may write the Celotex/Anderson federal standard when you're filing in a state court with different threshold requirements. When it drafts notice provisions, it may default to California rules when you're in Texas. When it sets out a contract's governing law, it may default to New York commercial law principles that don't govern your transaction.
How to verify: identify every legal standard explicitly stated in the draft. Confirm the source is specific to your jurisdiction. For procedural requirements (page limits, filing deadlines, font requirements, required disclosures), verify against the court's current local rules — AI's training data may not reflect recent amendments.
Check 4: Strategy Alignment
The argument the AI drafted is the argument you actually want to make — not just an argument that could be made from these facts.
AI optimizes for a coherent, defensible argument based on the inputs you give it. It will produce a brief that argues something. Whether it argues the right thing — the theory that fits your client's goals, risk tolerance, and relationship with the counterparty — is a judgment call the AI can't make.
This is the check that doesn't have a verification procedure. It requires you to read the AI draft with the question: "If opposing counsel saw this, would they be relieved or worried?" If it's well-drafted but argues the wrong theory, you've done the procedural work for the wrong outcome.
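If your firm tracks matters in any scriptable system, the four checks above can be modeled as a simple gate: the draft isn't "ready to send" until every check is signed off. This is a minimal sketch under assumed names (the `AiDraftGate` class and check labels are illustrative, not part of any practice-management product).

```python
from dataclasses import dataclass, field

# The four checks from the protocol, as checklist labels (names are illustrative).
CHECKS = (
    "citation_verification",
    "facts_alignment",
    "jurisdiction_fit",
    "strategy_alignment",
)

@dataclass
class AiDraftGate:
    """Per-matter quality gate: blocks 'send' until all four checks are done."""
    matter: str
    completed: set = field(default_factory=set)

    def sign_off(self, check: str) -> None:
        if check not in CHECKS:
            raise ValueError(f"unknown check: {check}")
        self.completed.add(check)

    def ready_to_send(self) -> bool:
        return all(check in self.completed for check in CHECKS)

    def outstanding(self) -> list[str]:
        return [check for check in CHECKS if check not in self.completed]

gate = AiDraftGate(matter="Smith v. Jones — motion for summary judgment")
gate.sign_off("citation_verification")
print(gate.ready_to_send())   # False
print(gate.outstanding())     # the three checks still open
```

The design point is that the gate is binary: partially reviewed AI work is treated the same as unreviewed AI work until the last check clears.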
The Opportunity in the Slop Problem
Legalweek's "AI Slop" framing is useful beyond the cautionary dimension. It tells you what the differentiation strategy is for the next two years.
In a market where AI tools are broadly available, the firms that will grow are the ones producing AI-assisted work to a visibly higher standard than the firms shipping AI-generated work without quality controls.
The small law firm that can say — credibly — "we use AI to work faster and we have a review protocol that catches what AI gets wrong" is not the same firm as the one that says "we use AI tools." The first statement is a practice standard. The second is a commodity claim.
Three things every small law firm should do to operationalize this:
1. Write the protocol down. The 4-check list above, adapted to your practice area, as a one-page document. Add it to your matter checklist. It becomes part of your firm's quality standard — and something you can describe to clients when they ask.
2. Train everyone on the citation check. The paralegal, the associate, the partner who insists they'll just read it quickly. If anyone in your firm can send AI-generated work product without running the citation check, your protocol has a gap.
3. Build "AI-assisted, attorney-verified" into your client communication. When you send AI-assisted work product, consider a brief note — "This draft was prepared with AI assistance and reviewed by [attorney name] for accuracy and strategy." This isn't apology language. It's the language of a quality standard that your competitors who are just shipping AI output can't credibly claim.
Where to Start This Week
Run the 4-check protocol on the last piece of AI-generated work product your firm produced — even if it's already been sent. Check one citation. Compare one fact to the file. Verify one jurisdictional standard.
Not because you expect to find an error. Because building the verification habit before it catches something serious is the point.
Related Reading
- 4th Circuit AI Sanctions: The Federal Courts Are Done Waiting
- State Court AI Sanctions: What California's First Opinion Means
- ABA Opinion 512 Compliance Checklist for Law Firms
- Mandatory AI CLE Is Coming — What the State Bar Trend Means for Your Firm
- Legal AI Foundation Models vs. Purpose-Built Tools: Which Does Your Firm Need?
- DescrybeLM: The Free Legal AI Built to Produce Verification-Friendly Output
Frequently Asked Questions
What is 'AI slop' in legal work product?
AI slop is the term that emerged at Legalweek 2026 (March 9-12) to describe undifferentiated, low-quality AI-generated output that is being filed, sent to clients, or submitted to courts without adequate review. It includes AI-generated briefs with generic legal language that doesn't fit the specific facts, contract clauses that don't reflect the negotiated deal, citation errors, and work product that sounds professional but contains errors in substance, strategy, or jurisdiction-specific detail. It's the predictable result of using AI tools for speed without building quality controls to catch what the AI gets wrong.
Why is AI slop becoming more common in law firms?
As AI tools become faster and more accessible, the pressure to use them to cut time per matter increases. The speed is real — AI can draft a motion, contract, or memo in minutes. But the quality-control step — which takes a skilled attorney the same review time regardless of who or what drafted it — gets compressed or skipped when the AI draft looks polished. Courts and clients are beginning to identify work product where the polish is superficial: correct vocabulary, wrong substance; correct format, missing argument; correct citation format, wrong or nonexistent case.
What are the 4 checks to run on AI-generated legal work before sending it?
The four checks are: (1) Citation verification — every case or statute cited exists, stands for what the brief says it stands for, and hasn't been overruled. (2) Facts check — the AI's characterization of the client's facts matches your actual file, not a plausible-sounding version of similar facts. (3) Jurisdiction fit — the legal standard, filing requirements, and procedural rules cited are specific to the actual jurisdiction, not a generic version. (4) Strategy alignment — the argument the AI drafted is the argument you actually want to make, not just an argument that could be made from these facts.
Does this apply to contract work, not just litigation?
Yes. For transactional work, the AI slop risk is different but equally real: AI-generated contract clauses that don't reflect the deal actually negotiated, representations and warranties at generic industry standard rather than specific to the transaction, defined terms used inconsistently across a document, or indemnification language drafted for a different deal structure. The same four checks apply in adapted form: (1) defined term consistency, (2) transaction facts accuracy, (3) jurisdiction and governing law fit, (4) strategic alignment with what your client actually agreed to.
Are courts sanctioning AI slop specifically?
Yes. Federal courts — including the 4th Circuit in March 2026 — have sanctioned attorneys for AI-generated work product that contained hallucinated citations or misrepresented case law. Several state courts have issued standing orders requiring disclosure of AI use in filings. The emerging standard is that an attorney who files AI-generated work is as responsible for its accuracy as if they drafted it themselves — but without the benefit of a human author's natural awareness of what they don't know. The verification requirement is on you, not the AI.