If OpenAI's Law Firm Can Hallucinate in Court, No Firm Is Exempt — Here's the Protocol That Would Have Stopped It
On April 9, 2026, Sullivan & Cromwell filed an emergency motion in the Chapter 15 bankruptcy case of Prince Global Holdings in the Southern District of New York. The motion contained more than 40 errors. Opposing counsel at Boies Schiller Flexner caught them.
On April 18, 2026, Andrew Dietderich — S&C's co-head of restructuring — filed an emergency letter to the presiding judge apologizing for the filing. The errors were AI hallucinations: fabricated citations, invented content, the kind of plausible-looking legal fiction that AI generates when left unverified.
The coverage that followed came from Above the Law, CNN Business, and Legal Cheek. But the outlets weren't just covering another AI hallucination story. They were covering a specific, almost improbable irony.
Sullivan & Cromwell formally advises OpenAI on "the safe and ethical deployment of artificial intelligence." That's on the firm's own website.
This is the story that rewrites the risk calculus for every professional services firm owner who thought they were in the clear.
What Sullivan & Cromwell Did (and Didn't Do)
The S&C filing is notable not because the firm was reckless. It's notable because the firm was sophisticated. Sullivan & Cromwell is a white-shoe firm with 900 lawyers, deep resources, and a formal AI advisory practice. If any firm had the processes and pedigree to avoid this outcome, it was S&C.
That's exactly the point.
The S&C filing didn't fail because of ignorance or carelessness from a junior attorney at a solo practice. It failed because the verification step was absent from the workflow on that particular filing. Forty-plus errors made it through to a federal bankruptcy court because no one ran the output against an authoritative source before it was filed.
The lesson is not that S&C is incompetent. The lesson is that prestige, process, and AI policy are insufficient protection. The only protection is a verification step that runs on every filing, every time — not as a policy aspiration, but as a practiced workflow with a named person responsible and a log that proves it happened.
The Legal Standard Just Changed — You Can't Claim "AI Did It"
Three weeks before the S&C story broke, the Sixth Circuit issued what is now the largest appellate AI sanctions order in U.S. history.
In Whiting v. City of Athens, Tennessee, attorneys Van R. Irion and Russ Egli submitted consolidated appellate briefs containing fabricated AI citations. The Sixth Circuit sanctioned them a combined $116,315.09 — each ordered to pay $15,000 in punitive fines, plus joint responsibility for full appellate attorney fees and double costs.
The dollar figure isn't the most important part of the ruling. This is:
The court did not require proof that AI generated the fabricated citations. The court required only that the attorneys had not personally read and verified the citations before filing.
That standard change reshapes every firm's exposure. It is no longer a question of whether you used AI. It is a question of whether you verified what you filed. "The AI hallucinated it" is not a defense. "I didn't know the citation was fabricated" is not a defense. The bar discipline cases reinforce the same standard from a different angle: on April 6, 2026, the California State Bar Court suspended attorney Sepideh Ardestani for submitting AI-generated filings with nonexistent citations. Two more California attorneys — Omid Emile Khalifeh and Steven Thomas Romeyn — face formal disciplinary charges for the same conduct.
The Sixth Circuit's verification standard is now the governing principle. It applies to district courts, appellate courts, and bar discipline proceedings simultaneously.
Why Prestige and Process Are Not Enough
The S&C story is the one that settles the question for any firm owner who was watching the AI sanctions cases and thinking: this doesn't happen to firms like ours.
Here's the version of that thought that has circulated in professional services circles since 2023: the AI hallucination problem is a small-firm problem. It happens when solo practitioners use consumer AI tools without training, without supervision, without processes. It won't happen to a sophisticated firm that has an AI advisory practice and access to purpose-built legal technology.
That version of the story is now factually wrong.
S&C has the resources to buy every legal AI tool on the market. It advises one of the most powerful AI companies in the world. It had enough sophistication to describe its AI advisory posture publicly on its website.
None of that stopped a motion with 40+ hallucinated errors from being filed in federal court.
The S&C incident is not an anomaly. It is the logical result of treating AI verification as something that sophisticated firms naturally do correctly — rather than as a step that must be explicitly designed into every workflow.
The Scale of the Problem in 2026
The S&C case is the highest-profile incident, but it is not isolated.
HEC Paris researcher Damien Charlotin maintains the most comprehensive global database of AI hallucination cases in legal proceedings. As of April 2026, he has catalogued 1,353 cases globally, approximately 800 of them from U.S. courts.
The trajectory in U.S. courts tells the story:
- 2023: Warnings, verbal reprimands, fines under $1,000
- 2024: Sanctions of $1,000–$5,000 per case
- 2025: Sanctions reaching $10,000+ per case
- Q1 2026 alone: Over $145,000 in total attorney sanctions across multiple cases
That Q1 2026 total includes the $110,200 Brigandi sanctions in San Diego — where two attorneys filed 23 fabricated citations across three motions, their client's case was dismissed with prejudice, and the family's legal claim was extinguished permanently. It includes the Sixth Circuit's $116,315.09. It includes California suspension proceedings.
The enforcement campaign is not a cycle. It is an escalation.
The Three-Step Verification Protocol That Works at Any Firm Size
The protocol that would have stopped every one of these incidents is the same. It is not complicated. It is specific, it is practiced, and it produces a paper trail.
Step 1 — Never use general-purpose AI for citation generation
General-purpose AI tools — ChatGPT, Claude standalone, Google Gemini — do not verify citations against authoritative legal databases. They generate text that looks like real citations. The case names are plausible. The formats are correct. The cases themselves may not exist.
Purpose-built legal AI tools with integrated verification — Thomson Reuters CoCounsel, LexisNexis+ with Protégé, Harvey AI with grounding enabled — are designed to ground their output in real legal corpora. They are not infallible, but they include a verification layer that general-purpose tools do not.
If your firm uses any general-purpose AI in any step of document drafting, that tool cannot be the last step before submission. A separate verification run against Westlaw, LexisNexis, or Fastcase is required for every citation in every document before it leaves your firm.
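That separate verification run starts with a complete checklist of every citation in the draft. As a hypothetical illustration of that first sub-step, here is a minimal Python sketch that pulls reporter-style citations out of a draft so each one can be looked up by hand in Westlaw, LexisNexis, or Fastcase. The regex is a deliberately simplified stand-in for a full Bluebook parser, and the function and variable names are illustrative, not taken from any real legal tool:

```python
import re

# Simplified pattern for U.S. reporter citations such as
# "598 U.S. 594", "123 F.3d 456", or "45 F. Supp. 3d 101".
# Illustrative only -- a real tool would need a far more
# complete grammar of reporters and pincites.
CITATION_RE = re.compile(
    r"\b\d{1,4}\s+"
    r"(?:U\.S\.|S\. Ct\.|F\. Supp\.(?: 2d| 3d)?|F\.(?:2d|3d|4th)?)"
    r"\s+\d{1,4}\b"
)

def extract_citations(draft_text: str) -> list[str]:
    """Return unique reporter citations in order of first appearance,
    forming the manual-verification checklist for this draft."""
    seen = []
    for match in CITATION_RE.findall(draft_text):
        if match not in seen:
            seen.append(match)
    return seen

draft = (
    "Relief is supported by 598 U.S. 594 and 123 F.3d 456; "
    "see also 123 F.3d 456 and 45 F. Supp. 3d 101."
)
print(extract_citations(draft))
# → ['598 U.S. 594', '123 F.3d 456', '45 F. Supp. 3d 101']
```

The output is a checklist, not a verification: a human still has to confirm each entry exists in an authoritative database. The value of the script is that no citation in the draft escapes the checklist.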
Step 2 — Create a pre-filing verification log
For every filing where AI-assisted drafting was used, maintain a record showing that citation verification was completed. It does not have to be complex: matter number, filing date, list of citations checked, verified by whom, date of verification. A shared spreadsheet or document management note is sufficient.
This log has two functions. First, it forces the verification step to actually happen; you cannot fill in the log for a step you skipped. Second, it becomes your evidence of due diligence if the filing is ever challenged. The court isn't looking for perfection. It is looking for evidence that the verification step was real and systematic, not retroactively claimed.
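One possible shape for that log, sketched in Python: append one row per verified citation to a shared CSV file. The field names, file name, and matter details below are illustrative assumptions, not a prescribed format:

```python
import csv
from datetime import date
from pathlib import Path

# Columns mirror the minimum record described above: matter number,
# filing date, citation checked, who verified it, and when.
LOG_FIELDS = ["matter_number", "filing_date", "citation", "verified_by", "verified_on"]

def log_verification(log_path, matter_number, filing_date, citations, verified_by):
    """Append one row per checked citation to a shared CSV log,
    writing the header the first time the file is created."""
    path = Path(log_path)
    is_new = not path.exists()
    with path.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=LOG_FIELDS)
        if is_new:
            writer.writeheader()
        for citation in citations:
            writer.writerow({
                "matter_number": matter_number,
                "filing_date": filing_date,
                "citation": citation,
                "verified_by": verified_by,
                "verified_on": date.today().isoformat(),
            })

# Hypothetical usage before a filing goes out:
log_verification(
    "verification_log.csv",
    matter_number="2026-0412",
    filing_date="2026-04-20",
    citations=["598 U.S. 594", "123 F.3d 456"],
    verified_by="A. Associate",
)
```

The same record could just as easily live in a document management system note or a shared spreadsheet; the point is that the row exists, with a named person attached, before the filing leaves the firm.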
Step 3 — Check your court's AI disclosure standing orders
Over 200 U.S. federal courts have issued AI disclosure standing orders or local rules as of April 2026. The requirements vary: some require disclosure of which AI tools were used; others require a certification that AI-generated content was reviewed by the filing attorney. Some courts require disclosure on every filing; others only when AI was used in legal research or argument.
Check the standing orders for every court where your firm regularly files. This is a 10-minute research task per court. The disclosure requirements are not the same as the verification protocol — they are separate obligations. Running the verification protocol does not automatically satisfy a court's disclosure requirement. Both are required.
The Strategy Gap Is Bigger Than the Tool Gap
The S&C incident could be framed as a tool failure — the wrong AI tool used at the wrong step, without verification. But a Thomson Reuters survey of 1,500+ legal and professional services professionals released in early 2026 points to a deeper pattern.
Firms with a written AI strategy are three times more likely to achieve positive ROI from AI adoption than firms without one. Seventy-four percent of total AI economic value flows to the 20% of firms that have actually changed how they work — not just adopted tools.
The gap between those two groups is not budget or technology access. It is whether the firm sat down and wrote out how AI fits into its workflow, who owns each step, and what the verification and oversight requirements are.
The verification protocol is not separate from the written AI strategy. It is the core of it. A firm that writes down "here is how we use AI, here is who verifies the output, here is how we log it" has produced the most critical section of an AI strategy — and the section that courts, bar associations, and clients will ask for first.
One Step to Take This Week
If your firm uses AI in any drafting that produces client-facing work or court filings, do this before your next submission:
Write down — in a shared document — three things: what AI tools your firm currently uses in client work, what the verification step is for each output type, and who is named as responsible for that verification on each matter.
If you cannot fill in all three, you have found the gap. The S&C incident is not a cautionary tale about AI. It is a cautionary tale about assuming verification happens without making it explicit.
The Sixth Circuit's standard is now the governing principle: not whether you used AI, but whether you verified what you filed. That standard doesn't require a $116,000 sanction to teach. It requires a shared document and one named person.
The Crossing Report covers AI adoption strategy for professional services firm owners every week. Subscribe at crossing.one for the intelligence that helps you make the crossing from the old way of doing business to the new one.
Frequently Asked Questions
What happened in the Sullivan & Cromwell AI hallucination case?
On April 9, 2026, Sullivan & Cromwell filed an emergency motion in a Chapter 15 bankruptcy proceeding (Prince Global Holdings) in the Southern District of New York. The motion contained more than 40 errors, including AI-fabricated citations and hallucinated legal content, which were caught by opposing counsel at Boies Schiller Flexner. On April 18, 2026, Andrew Dietderich, S&C's co-head of restructuring, filed an emergency letter to the federal judge apologizing for the errors. The story drew immediate coverage from Above the Law, CNN Business, and Legal Cheek because of a specific irony: Sullivan & Cromwell's own website describes the firm's role advising OpenAI on 'the safe and ethical deployment of artificial intelligence.'
Does the Sixth Circuit ruling change what law firms must do to comply?
Yes. In Whiting v. City of Athens, Tennessee, the Sixth Circuit sanctioned two attorneys a combined $116,315.09 for submitting fabricated AI citations in appellate briefs — and crucially, the court did not require proof that AI generated the errors. The court required only that the attorneys had failed to personally verify the citations before filing. This is the new standard: 'Did you use AI?' is no longer the question. 'Did you verify what you filed?' is. That standard now applies across district courts, appellate courts, and bar discipline proceedings simultaneously.
Is a verification protocol different from an AI policy?
Yes, and the Sullivan & Cromwell case illustrates exactly why. An AI policy describes who can use which tools and under what conditions. Sullivan & Cromwell almost certainly had an AI policy — the firm formally advises OpenAI on responsible AI deployment. A verification protocol is different: it is a step-by-step workflow that runs on every filing, confirms that every citation exists in an authoritative legal database, logs who checked it, and assigns explicit responsibility before submission. The S&C incident happened because the verification step was absent from the workflow, not because the firm lacked an AI policy.
What verification tools can a small law firm use?
Purpose-built legal AI tools with integrated citation verification include Thomson Reuters CoCounsel, LexisNexis+ with Protégé, Westlaw Edge, and Harvey AI (with grounding enabled). These tools perform verification checks tied to authoritative legal corpora. General-purpose AI tools — ChatGPT, Claude standalone, Google Gemini — do not verify citations against real case databases. They generate text that looks like real citations. If your firm uses any general-purpose AI in the drafting process, a separate verification step against Westlaw, LexisNexis, or Fastcase is required before the document leaves your firm.
Does the Sullivan & Cromwell hallucination incident affect non-law professional services firms?
The direct legal exposure is specific to court filings, but the principle applies universally. Accounting, consulting, and staffing firms use AI to generate analysis, financial summaries, valuations, regulatory citations, and client reports. If that output contains inaccurate references to statutes, IRS guidance, or market data — and it reaches a client — the liability mechanism is the same: you are responsible for what you delivered. The California Bar's discipline of Sepideh Ardestani and the Sixth Circuit's verification standard set the template. Professional responsibility boards, accounting licensing bodies, and consulting firm clients are all watching the same enforcement wave unfold.
Get the weekly briefing
AI adoption intelligence for accounting, law, and consulting firms. Free to start.
Related Reading
- A Court Just Issued the Largest AI Sanctions in U.S. History — $110,200. Here's the Verification Protocol Every Law Firm Needs Before the Next Filing
- California Just Suspended a Lawyer for AI Hallucinations — The Bar Discipline Era Has Started
- A State Court Just Sanctioned a Lawyer for AI Hallucinations — The Era of State-Level AI Accountability Has Arrived
This is the kind of intelligence premium subscribers get every week.
Deep analysis, cross-sector patterns, and the frameworks that help professional services firms make the crossing.