California Just Made AI Ethics Mandatory for Every Lawyer — Six Rules Changing in 2026
On March 13, 2026, California's Committee on Professional Responsibility and Conduct (COPRAC) approved proposed amendments to six Rules of Professional Conduct governing the use of AI in legal practice. The 45-day public comment period closes May 4, 2026.
These are not guidelines. They are not advisory opinions. They are proposed changes to the binding rules that govern your license.
If you practice law in California — or run a firm that does — here's what changes and what you need to do before these rules take effect.
Why This Matters Beyond California
California has the largest state bar in the United States. When California formalizes AI ethics requirements in its Rules of Professional Conduct, other state bars follow. Florida and New York have already issued detailed AI guidance. California's amendment package is the most comprehensive yet: six rules, one coordinated framework, all binding conduct requirements once adopted.
The rest of the country is watching.
For law firm owners outside California: this is your preview. Plan accordingly.
The Six Rules — In Plain Language
Rule 1.1 — Competence: AI Literacy Is Now Required
Under the proposed amendment, the duty of competence now includes understanding how AI tools work in legal practice. You don't need to be a software engineer. But you do need to understand what the AI tools in your workflow do, what their limitations are, and when to trust their output.
The practical implication: if you're using AI for legal research or drafting without understanding how those tools construct their outputs, you may be practicing outside your competence. "I didn't know the AI could hallucinate" is not a defense.
What to do: Complete at least one structured AI competency training — your state bar's CLE offerings, Clio Academy, or Thomson Reuters' AI training resources — and document it. Make it part of your annual CLE review.
Rule 1.4 — Communication: Client Disclosure When AI Materially Affects the Work
The proposed Rule 1.4 amendment requires disclosure to clients when AI "materially affects" the scope, cost, or decision-making in a representation.
If AI reduces a 10-hour research task to 45 minutes, and you charge for 10 hours, that's a disclosure problem. If AI drafts the brief and you review it, that's a material change in how the work gets done — clients have the right to know.
The rule does not require disclosure for every AI tool in your practice. Grammarly, calendar software, and document management systems don't qualify. The trigger is materiality: does the AI change what you're doing, how long it takes, or what you charge?
What to do: Add an AI disclosure clause to your standard engagement letter now. A simple version: "Our firm uses AI-assisted tools for legal research, drafting, and document review. These tools are supervised by licensed attorneys who are responsible for all work product. You will be informed when AI materially affects the scope or cost of your representation."
Rule 1.6 — Confidentiality: Client Data Shared With AI Must Be Protected
Proposed Rule 1.6 amendments require lawyers to protect client information shared with AI systems with the same diligence as information shared with any other third party.
The practical question: do you know where the data you send to Claude, ChatGPT, or any AI research tool actually goes? Do your vendor's terms of service allow them to use your client data for training? Have you reviewed your AI vendors' data processing agreements?
If you can't answer those questions, you have a compliance gap under this rule — even before it takes effect.
What to do: Review the privacy policy and data processing terms for every AI tool your firm uses. Use enterprise or API-tier products that commit to not training on your data (Claude for Enterprise, Microsoft 365 Copilot with enterprise data protection, Thomson Reuters CoCounsel). Do not run client-identifying information through consumer-tier AI tools that have no data processing commitments.
Rule 3.3 — Candor to Tribunal: Verify Every AI-Generated Citation
This is the one with the most immediate professional risk. Proposed Rule 3.3 amendments require lawyers to independently verify AI-generated legal citations before submitting them in any court filing.
This isn't a new idea — bar associations across the country have been issuing guidance on this for two years. But California is now proposing to make it a binding rule of conduct, not just good practice.
The stakes are real. AI hallucinations in US court filings hit 487 instances in 2025 — more than 10x the count in 2024. Courts are sanctioning lawyers. The California courts are among those paying attention.
What to do: No AI-drafted brief, motion, or pleading leaves your firm without a citation verification step. Run every case cite through Westlaw or Lexis. Verify it exists, says what you claim, and is still good law. This applies to cases cited in passing, not just primary authority. One hallucinated cite in a federal filing has cost lawyers their cases, their clients, and their reputations.
Rules 5.1 and 5.3 — Supervision: Governance Policies Are Now Required
Proposed amendments to Rules 5.1 (Responsibilities of Partners, Managers, and Supervisory Lawyers) and 5.3 (Responsibilities Regarding Nonlawyer Assistance) require firm leaders to establish AI governance policies and ensure nonlawyer staff are trained on AI ethics.
For a solo or small firm owner, this means:
- A written AI use policy. Which tools are approved? For which workflows? What data can and cannot go into them?
- Documented training. Your paralegal, legal assistant, or law clerk who uses AI tools must be trained on the ethics requirements — disclosure, confidentiality protection, citation verification.
- Supervisory review. AI-generated work product does not leave the firm without attorney review and sign-off.
This is not bureaucracy. It's the basic governance infrastructure that every firm using AI should already have. The rule formalizes what good practice looks like.
What to do: Write a one-page AI use policy this week. It doesn't need to be 20 pages — it needs to cover: approved tools, prohibited uses (no client data in consumer AI), citation verification requirements, and disclosure obligations. Send it to everyone who uses AI in your office. File it somewhere you can produce it on request.
What the Comment Period Means for Your Firm
The 45-day public comment period closes May 4, 2026. This is when California bar members — including law firm owners — have the opportunity to submit comments on the proposed rules.
If you have concerns about specific provisions (the materiality standard in Rule 1.4, the scope of AI governance requirements in Rules 5.1/5.3), now is the time to say so through the formal process. The California State Bar's public comment archive is at calbar.ca.gov.
Whether or not you comment, start the compliance review now. These rules — or something very close to them — are going to be finalized. The firm that begins updating its engagement letters, governance policies, and training documentation today is not scrambling next year.
Your Action List for This Week
- Read the proposed amendments. Spend 30 minutes with the actual text at calbar.ca.gov. Know exactly what each of the six rules says.
- Audit your AI tools. List every AI tool in use at your firm, including what your staff uses. For each tool: is your client data protected? Is it covered by a data processing agreement?
- Update your engagement letter. Add an AI disclosure clause this week. Don't wait for the rules to take effect.
- Write a one-page AI use policy. Approved tools. Prohibited uses. Citation verification requirement. Distribute to staff.
- Schedule AI ethics training. For yourself and for any nonlawyer staff using AI. Document it.
May 4 is 32 days away. The firms that treat this as a compliance deadline to prepare for — rather than a future concern to watch — will be positioned correctly when these rules take effect.
Frequently Asked Questions
What are the California COPRAC AI ethics amendments?
On March 13, 2026, California's Committee on Professional Responsibility and Conduct (COPRAC) approved proposed amendments to six Rules of Professional Conduct governing AI use. The six rules are: Rule 1.1 (Competence — AI literacy now required), Rule 1.4 (Communication — mandatory client disclosure when AI materially affects the representation), Rule 1.6 (Confidentiality — data shared with AI tools must be protected), Rule 3.3 (Candor to Tribunal — AI-generated citations must be independently verified), and Rules 5.1 and 5.3 (Supervision — firm leaders must create AI governance policies and train nonlawyer staff). The 45-day public comment period closes May 4, 2026.
Does California's Rule 1.4 require me to tell clients I'm using AI?
Under the proposed Rule 1.4 amendment, disclosure is required when AI "materially affects" the scope, cost, or decision-making in a representation. If you use AI to draft a brief, conduct legal research, or generate a contract, and that use materially shapes the outcome or reduces the time involved, disclosure is required. The rule does not require disclosure for every use of AI — basic grammar tools or calendar software wouldn't trigger it. But if AI is doing work your client expects a lawyer to do, or if it changes what you charge, you need to tell them.
What does Rule 3.3 require for AI-generated citations?
Proposed Rule 3.3 (Candor to Tribunal) requires lawyers to independently verify AI-generated legal citations before submitting them in any filing. AI hallucinations in US court filings hit 487 instances in 2025 — more than 10x the 2024 count. You cannot submit a brief, motion, or pleading containing AI-generated case citations without personally verifying that each case exists, says what you claim, and is still good law. Use Westlaw or Lexis to check every AI-generated citation. This is not optional — it's now specifically required conduct under the proposed rules.
What do Rules 5.1 and 5.3 mean for my firm's AI use?
Proposed Rules 5.1 and 5.3 require supervising and managerial lawyers to establish firm-wide AI governance policies and ensure nonlawyer staff are trained on AI ethics requirements. For a 5-15 attorney firm, this means: (1) a written policy on which AI tools are approved for use and for which workflows, (2) a data handling policy covering what client information can be shared with which AI platforms, and (3) documented training for paralegals, legal assistants, and other staff who use AI tools. If you allow your paralegal to use ChatGPT for client-facing work without any governance policy in place, you are currently not in compliance with these proposed rules.
How is California's approach different from other states?
Florida and New York have detailed AI guidance for lawyers, but California's proposed package is the most comprehensive yet — it covers competence, disclosure, confidentiality, candor, and supervision in a single coordinated amendment package. Most other states have issued informal guidance or formal opinions on isolated issues (citation verification, confidentiality). California's rules, once finalized, will be binding conduct rules — not advisory opinions. Given California's size (the largest state bar in the US) and its influence on legal industry standards, these rules are likely to set the template for the next wave of state bar AI ethics regulation.
What should I do before the May 4 public comment deadline?
Three things: (1) Read the proposed amendments at calbar.ca.gov and consider whether your current practice complies with each of the six rules. (2) If you have concerns about implementation or scope, submit a public comment before May 4 — bar associations do read and respond to substantive practitioner comments. (3) Whether or not you comment, begin updating your engagement letters to include an AI disclosure clause now. These rules are going to be finalized in some form. Starting the compliance review today means you're not scrambling after adoption.
Related Reading
- Do Firms Need to Disclose AI Use? (2026)
- A State Court Just Sanctioned a Lawyer for AI Hallucinations — The Era of State-Level AI Accountability Has Arrived
- Washington Just Signed an AI Chatbot Law. Oregon and Georgia Are Next. Here's Your Compliance Checklist.
- 92% of Lawyers Use AI. Only 43% of Small Firms Have Any Policy. Here's the 30-Minute Fix.
- California Is Building a Paper Trail for AI Workplace Decisions — Here's What Firms With CA Staff Need to Know