California Just Suspended a Lawyer for AI Hallucinations — The Bar Discipline Era Has Started

April 19, 2026 · 7 min read · By The Crossing Report

On April 6, 2026, the State Bar Court of California approved a discipline agreement with attorney Sepideh Ardestani: probation and a brief license suspension, effective immediately.

Her violation: submitting AI-generated court filings containing citations to cases that do not exist.

Two more California attorneys — Omid Emile Khalifeh and Steven Thomas Romeyn — now face formal disciplinary charges for the same conduct in federal court and Orange County Superior Court, respectively.

This is not the first time an attorney has submitted AI-hallucinated citations. Judges have sanctioned lawyers for this in multiple jurisdictions since 2023. What makes April 2026 different is the mechanism: these are bar discipline cases, not court sanctions. The State Bar is now in the enforcement business on AI misuse.

That changes the exposure calculation for every attorney using AI.


Three Cases, One Pattern

The three California cases share a structure worth understanding:

Sepideh Ardestani filed pleadings in federal court in Sacramento in March 2025 that contained citations to cases the AI fabricated. The State Bar filed charges. On April 6, 2026, the State Bar Court approved a discipline agreement: probation and a brief license suspension. The suspension is in effect.

Omid Emile Khalifeh filed documents in federal court in April 2025 that included AI-hallucinated citations — cases for which a Westlaw search would have returned no results. Charges are pending.

Steven Thomas Romeyn filed documents in Orange County Superior Court in October 2025 with fabricated citations. Charges are pending.

Three attorneys. Three courts. Three case types. One pattern: AI generated the citations, the attorney submitted the filing without verifying the citations against a legal database, and the court discovered the fabrications.

The verification step — cross-referencing every AI-generated citation in Westlaw or LexisNexis — takes under 60 seconds per citation. That 60-second check did not happen. The discipline cases followed.

(Sources: California Courts Newsroom; Hoodline / Daily Journal; KTLA)


What the California Bar's Logic Means for Your Firm

The bar discipline framework matters because it establishes the professional responsibility standard explicitly:

"The AI generated it" is not a defense. If you signed it, you verified it.

California's State Bar is not saying that attorneys cannot use AI. It is saying that attorneys are responsible for what they file. AI is a tool, like a paralegal or a research service — and just like with those tools, the attorney's signature on a filing certifies that the contents are accurate.

This is not new law. It is existing professional responsibility standards applied to a new tool.

What is new is the enforcement action. Prior to April 2026, attorneys using AI faced sanctions from individual judges — an embarrassing outcome but typically a fine and a required disclaimer. Bar discipline is different. Bar discipline affects the license. Bar discipline is a matter of public record. Bar discipline follows the attorney to every future client who Googles their name.

The Ardestani case gives every state bar in the country a template for how to handle AI misuse complaints. Expect the next disciplinary action to come within 12 months, likely in a state with high concentrations of AI-using attorneys: New York, Texas, Florida, Illinois.


The Minimum Viable Verification Protocol

If you do not have a written AI verification protocol at your firm today, this is the week to create one.

The protocol does not need to be complex. It needs three things:

1. A named person and a named step.

Not "we review AI-generated content before it goes out." That's a policy with no accountability structure. The protocol needs to say: [Attorney name or role] reviews all AI-generated research, citations, and draft language before any document is filed or delivered to a client. One name. One step. Written down.

2. A verification requirement for citations.

Every AI-generated citation must be confirmed in Westlaw or LexisNexis before any court filing. Every one. Not spot-checked — confirmed. The Khalifeh filing contained citations that a Westlaw search would have immediately returned as non-existent. A 60-second search per citation is not a burden. It is the minimum.

3. Documentation that the protocol exists.

If a bar inquiry arrives, you will be asked what your firm's oversight process is for AI-generated work product. "We have a policy" is not the same as "here is our policy." The protocol should be a written document — one page, shared with every attorney at the firm — that you can produce on request.

Total time to draft: 20 minutes. The document you create this week is the evidence of professional responsibility you can show a bar investigator if you ever need to.


What This Means If You're Not a Law Firm

The California Bar cases apply specifically to attorneys and their professional responsibility under bar rules. But the underlying accountability principle applies to every licensed professional service.

The California Bar's reasoning is this: you are responsible for verifying the output of any tool you use in your professional work. The AI is a tool. The professional responsibility is yours.

For accounting firms: if AI generates financial analysis, tax research, or client deliverables that go out under a CPA's name, the CPA is responsible for the accuracy of that analysis. A CPA who sends AI-generated financial projections without reviewing the underlying calculations is in the same position as the attorneys who submitted AI-generated citations without verifying them.

For consulting firms: if AI drafts strategic recommendations or analytical reports that reach clients, the consultant is responsible for the accuracy of those recommendations. "The AI wrote the analysis" is not a defense when a client makes decisions based on faulty AI-generated projections.

For staffing firms: if AI scoring eliminates candidates without recruiter review, and that process results in adverse employment action, the staffing firm is responsible for the outcome of that process — including potential exposure under Illinois HB 3773, Texas TRAIGA, or similar employment AI laws.

The California Bar cases are the first formal enforcement action in professional services. They will not be the last, and they will not stay limited to law.


The ABA Context

The ABA published Formal Opinion 512 in 2024, which established that attorneys have professional responsibility obligations when using AI — including duties of competence, confidentiality, and supervision. Opinion 512 was guidance. The Ardestani case is enforcement.

The ABA's guidance and California's enforcement action together frame the post-April-2026 standard: attorneys using AI for legal work are expected to (1) understand how the AI tools they use work well enough to identify their failure modes, (2) supervise AI-generated work product with the same rigor as paralegal work, and (3) maintain client confidentiality when using AI tools that may process client data.

For small and mid-size law firms, "understand how the tool works" means, at minimum, knowing that AI hallucination is a known failure mode of every current language model. It does not require technical expertise. It requires that you have built a verification step into your workflow for the failure modes you know exist.

The California cases demonstrate exactly what happens when that verification step is missing.


Your Action This Week

If you own or manage a law firm:

Build the verification protocol today. Set aside 20 minutes. Draft one document:

  • The name of the person responsible for reviewing AI-generated content before it reaches a client or court
  • A requirement that every AI-generated citation be confirmed in Westlaw or LexisNexis before any filing
  • A rule that client data will not be input into AI tools without confirming the tool's data handling practices comply with your confidentiality obligations

Share it with your attorneys. Put it in your shared drive. Date it.

That document is your professional responsibility evidence. It is also the minimum credible response when a client asks how your firm uses AI.

If you use AI in any professional services capacity:

Review what your firm puts its name on that AI helped produce. Identify the step in your workflow where a credentialed human reviews the AI output before it reaches a client. If that step doesn't exist or isn't documented, it needs to exist — not eventually, this week.

The California Bar cases establish that the enforcement era for AI misuse in professional services has arrived. The firms that built the oversight layer into their workflow before the first enforcement action are the ones that won't be in the next bar association press release.


The Crossing Report covers AI adoption, compliance, and business strategy for professional services firm owners. Published weekly at crossing.one.

Frequently Asked Questions

What happened with the California Bar AI discipline cases?

In April 2026, California's State Bar filed formal disciplinary charges against three attorneys — Omid Emile Khalifeh, Steven Thomas Romeyn, and Sepideh Ardestani — for submitting AI-generated court filings containing citations to cases that do not exist. Ardestani has already agreed to probation and a brief license suspension, approved by the State Bar Court on April 6, 2026. Charges against Khalifeh and Romeyn are pending. All three cases involve AI-hallucinated citations: the AI fabricated case references, and the attorneys submitted them without verifying they were real.

What is an AI hallucination in legal filings?

An AI hallucination occurs when an AI system generates information — in this case, case citations — that appears accurate but is completely fabricated. When attorneys use AI to draft legal research or briefs without verifying each citation against a legal database like Westlaw or LexisNexis, they risk submitting documents that cite cases that simply don't exist. Courts can verify citations in seconds; bar discipline follows when non-existent citations are discovered.

What is the minimum verification protocol a law firm needs for AI-generated content?

The minimum viable verification protocol has three components: (1) a named person responsible for reviewing AI-generated content before it reaches a client or court — not 'we review it,' but a specific person and a specific step in the workflow; (2) a requirement that every AI-generated citation is confirmed in Westlaw or LexisNexis before any filing; and (3) written documentation of both requirements so the protocol can be demonstrated in the event of a bar inquiry. This document should take 20 minutes to draft and should live in your firm's shared drive where all attorneys can reference it.

Does this only affect lawyers, or do the lessons apply to other professional services firms?

The California Bar cases are specific to law firms, but the underlying accountability principle applies to every professional services firm. The California Bar's reasoning is: if you signed it, you verified it — the AI generating the content is not an excuse for delivering inaccurate work product. Accounting firms sending AI-generated financial analysis without CPA review, consulting firms delivering AI-drafted strategy without a senior advisor check, and staffing firms using AI-scoring to eliminate candidates without recruiter review all face the same professional responsibility exposure. The tool doesn't change who is accountable for the output.

Will other states follow California's approach to AI bar discipline?

Yes. State bars across the country were watching the California proceedings, and the Ardestani case gives them a template for how to handle AI misuse in legal filings. Bar associations in states with high concentrations of AI-using attorneys — New York, Texas, Florida, Illinois — are the most likely to file similar cases next. The ABA has published guidance on AI in legal practice (Formal Opinion 512, 2024), and the California precedent gives state bars a concrete enforcement model to follow. Attorneys should assume their state bar is developing similar enforcement capacity.
