New York's AI Bill Has a Fatal Flaw — And That's Bad News for Small Law Firms
Published March 14, 2026 · By The Crossing Report · 6 min read
Summary
New York Senate Bill 7263 would bar AI chatbots from providing "substantive" professional advice — including legal advice. Bloomberg Law published a pointed critique: the bill is poorly drafted, the key standard is undefined, and a broad private right of action creates plaintiff-friendly conditions. For small law firms, that analysis should not be reassuring. A vaguely drawn liability standard is worse than a clear one. This article explains the three compliance exposure scenarios firm owners need to understand — and three steps to reduce risk regardless of how the bill resolves.
The Counterintuitive Problem With a Badly Written Law
When Bloomberg Law publishes a piece titled "New York's Bid to Ban AI Chatbot Legal Advice Has Serious Flaws," the first instinct is relief. If the law is flawed, maybe it doesn't apply to us.
That's the wrong takeaway.
A well-written law with a clear standard is easier to comply with. You know what's required. You do it. You're protected.
A badly written law with an undefined standard creates the opposite: you can't be sure what it covers, the courts will define it over time through litigation, and plaintiffs' attorneys will test the outer boundaries. For small law firms using AI in client-facing contexts, that is the more dangerous environment.
NY SB 7263 would prohibit AI systems from providing "substantive responses" to requests for professional advice — including legal advice — without a licensed professional's involvement. Holland & Knight analysis identifies the compliance problem immediately: "substantive response" is not defined anywhere in the bill. The bill also includes a private right of action, meaning plaintiffs — not just regulators — can bring claims directly against firms.
The risk to a small law firm is not primarily that your AI will give clearly unauthorized legal advice. The risk is that the compliance boundary is unknown, and litigation costs are real even when you ultimately prevail.
Three Scenarios Where Your Firm May Have Exposure
Scenario 1: Your Website Has a Chatbot or Intake Form
Many small law firms use AI-powered chat tools to handle initial inquiries — answering FAQs, collecting potential client information, or providing basic "do I need a lawyer for this?" guidance.
Under NY SB 7263 as drafted, it is unclear whether automated responses to "can I contest this ticket without a lawyer?" constitute "substantive" legal advice. The bill's drafters likely intended to target pure-play AI legal advice companies. But the statutory language may reach law firm websites.
If your firm uses any AI-powered chatbot, intake tool, or automated FAQ response that touches legal questions — even obviously general ones — it is worth reviewing in light of this bill.
Scenario 2: AI-Drafted Client Communications
Your associate uses Claude to draft a client update email: "Based on the discovery timeline, your deposition is scheduled for April 8. Here's what to expect."
Is that "substantive" legal advice? It involves a legal proceeding and contains factual and procedural guidance. Under an undefined standard, a creative plaintiff's attorney could argue it is.
The safeguard is already best practice: every AI-generated client communication should be reviewed and sent by a licensed attorney, not dispatched autonomously. If your firm doesn't have that checkpoint built in, now is the time to add it.
Scenario 3: The Private Right of Action
NY SB 7263 includes a private right of action — meaning a client or third party who believes they received unauthorized AI-generated legal advice can sue directly. They don't have to file a bar complaint or wait for a regulator.
This is the feature of the bill that most concerns practitioners. Holland & Knight notes it could "generate significant litigation risk" for chatbot proprietors — and the analysis applies equally to law firms whose AI tools interface with clients in any capacity.
Even a meritless claim costs money to defend. For a 3-attorney firm, a single nuisance lawsuit over AI-generated content represents a serious financial and operational disruption.
Why This Isn't Just a New York Story
Even if NY SB 7263 fails, is amended, or is struck down on constitutional grounds, the legal principle it attempts to codify is gaining traction nationally.
The core proposition — that AI systems simulating professional judgment create unauthorized practice liability — is appearing across state legislatures:
- Illinois SB 3601 (Professional AI Oversight Act) — mandatory consumer disclosure when AI is used in professional services, including legal. Advancing in the Illinois legislature as of March 2026.
- NY A 6545 — a companion bill to SB 7263 focused specifically on chatbot impersonation of licensed professionals. Advancing separately.
- Washington state HB 1170 and HB 2225 — AI disclosure requirements for professional services, passed March 13, 2026.
A small law firm with clients in multiple states is operating in an environment where the compliance landscape is fragmenting. Waiting for New York's bill to resolve before taking action means starting from scratch in Illinois, Washington, or Colorado.
Three Steps to Reduce Exposure This Month
These steps apply regardless of how NY SB 7263 resolves — they address the underlying liability exposure in any jurisdiction moving toward AI professional services oversight.
1. Audit your client-facing AI touchpoints.
Walk through your firm's full client interaction journey: website, intake, client portal, automated responses, AI-drafted emails. List every point where AI-generated content reaches a client or prospective client before attorney review. That list is your compliance audit starting point.
Pay particular attention to website chatbots and automated intake responses — these are the clearest category of potential unauthorized practice exposure if AI is providing guidance without attorney review.
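For firms that build or configure their own intake chatbot, the audit principle can be enforced in code: advice-shaped questions get routed to human intake rather than answered automatically. The sketch below is illustrative only; the keyword list is a hypothetical heuristic, not a legal standard for what counts as "substantive."

```python
# Illustrative guardrail: route legal-advice-shaped questions to a human
# intake queue instead of letting an automated chatbot answer them.
# The signal phrases below are hypothetical examples, not a compliance test.

LEGAL_ADVICE_SIGNALS = (
    "should i sue", "can i contest", "is it legal", "do i have a case",
    "statute of limitations", "what are my rights",
)

def classify_inquiry(message: str) -> str:
    """Return 'attorney_review' for advice-shaped questions, 'faq' otherwise."""
    text = message.lower()
    if any(signal in text for signal in LEGAL_ADVICE_SIGNALS):
        return "attorney_review"
    return "faq"

def chatbot_reply(message: str) -> str:
    if classify_inquiry(message) == "attorney_review":
        # Never generate substantive guidance; hand off to a licensed attorney.
        return ("That question needs a licensed attorney. "
                "I've flagged it for our intake team to follow up with you.")
    return "Here is some general information about our firm and process."
```

A keyword heuristic will miss edge cases, which is exactly the point: when in doubt, the safe default is deferral to attorney review, not an automated answer.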
2. Add an attorney review checkpoint before AI content reaches clients.
Every AI-generated communication that goes to a client should pass through an attorney's eyes before it's sent. This is already ABA Opinion 512 best practice (competence and communication obligations), and it's the clearest defense against unauthorized practice claims.
For firms using AI to draft client emails: a one-line review policy — "AI drafts are reviewed by the responsible attorney before sending" — should be formalized in your internal AI use policy.
3. Update your engagement letter.
Your engagement letter is your first line of defense in any malpractice or professional liability claim. It should address:
- That your firm uses AI tools in delivering legal services
- That all AI-generated work product is reviewed and validated by a licensed attorney
- That clients may request more information about AI use at any time
The ABA Opinion 512 engagement letter clause guidance, combined with your state bar's requirements, gives you the language. If you don't have a template yet, start with the North Carolina Bar's published AI policy model at ncbar.org — it covers the core elements.
The Broader Signal
The regulatory environment for AI use in professional services is not settling into clarity. It is fragmenting into state-by-state requirements, court-by-court disclosure rules, and bar ethics opinions that apply different standards.
For a small law firm, that fragmentation is the compliance challenge — not any single poorly drafted bill. The firms that respond to this environment proactively — with documented policies, attorney review checkpoints, and updated engagement letters — will have the documentation they need when a question arises.
The firms that wait for clarity may find the first piece of clarity they get is a regulatory complaint or a lawsuit.
NY SB 7263 may or may not become law as written. The underlying liability question it raises is already here.
Related Reading
- ABA Opinion 512: What Your AI Engagement Letter Needs Now
- The 4th Circuit Sanctioned an Attorney for AI Filings — Here's the Compliance Checklist
- Texas TRAIGA: AI Compliance Checklist for Professional Services Firms
- Grammarly Got Sued for Fake Expert Reviews — The AI Impersonation Risk Every Firm Needs to Audit
- The FTC Just Defined AI Deception — What Professional Services Firms Must Do
- Oregon HB 4154: Your Clients Can Now Sue You Over Your AI Chatbot
- New Hampshire SB 640: AI Can't Provide Licensed Professional Services Without Meaningful Oversight
- Washington HB 1170: The AI Disclosure Law That Will Change What You Send to Clients
Frequently Asked Questions
What is New York Senate Bill 7263?
New York SB 7263 would prohibit AI chatbots from providing "substantive" professional advice — including legal advice — without a licensed human professional's involvement. The bill targets unauthorized practice of law enabled by AI tools. It has been criticized by Bloomberg Law and others for vague standards: "substantive response" is not defined, and a broad private right of action could expose firms using AI in any client-facing context to nuisance litigation.
Does NY SB 7263 affect law firms that use AI internally?
Potentially yes. The bill's "substantive response" standard is undefined, meaning a law firm's AI-powered intake form, client FAQ chatbot, or automated engagement response could arguably fall within its scope. Holland & Knight analysis notes the bill could extend beyond pure AI chatbot products to firms using AI to facilitate client communications. Until courts define "substantive," any client-facing AI use is theoretically in scope.
What should small law firms do about AI liability exposure in New York?
Three steps reduce exposure regardless of whether NY SB 7263 passes: (1) audit every client-facing AI touchpoint — website chatbots, intake forms, automated responses; (2) add an attorney review checkpoint before any AI-generated content reaches a client; (3) update your engagement letter to disclose AI use and clarify that all AI output is reviewed by a licensed attorney. The ABA Opinion 512 framework applies here: competence, communication, and confidentiality obligations cover AI use regardless of state legislation.
Is NY SB 7263 likely to pass?
Its passage is uncertain, but its influence is significant. Bloomberg Law's analysis identifies serious drafting flaws that may require amendment or defeat the bill as written. However, the legal principle the bill attempts to codify — that AI simulating professional judgment creates unauthorized practice liability — is gaining traction across states. Illinois SB 3601 and at least two other pending bills follow similar reasoning. Small law firms should not wait for the NY bill's outcome before addressing their AI compliance posture.
Which states have AI laws that affect small law firms in 2026?
As of March 2026: Texas (TRAIGA, effective January 1, 2026), Colorado (AI Act/SB24-205, effective June 30, 2026), and multiple states with new court-level AI disclosure requirements for filed documents. New York SB 7263 and A 6545 are advancing but not yet enacted. Illinois SB 3601 (Professional AI Oversight Act) and Washington state HB 1170/HB 2225 (passed March 13, 2026) are additional regulatory signals. The trend is toward mandatory disclosure and professional supervision requirements for AI use in client-facing legal work.