Texas Has an AI Law — And It's Already in Effect. Here's What Your Firm Needs to Do.

Published March 14, 2026 · By The Crossing Report · 7 min read


Summary

The Texas Responsible AI Governance Act (TRAIGA) has been enforceable since January 1, 2026. Most small professional services firms in Texas don't know it exists. The law applies to any business using AI in Texas — law firms, accounting firms, consulting firms, staffing agencies — and creates real compliance exposure if you haven't done the minimum. The good news: the minimum is a three-step checklist, not a six-month implementation project. And there's a statutory safe harbor available to any firm that documents alignment with the NIST AI Risk Management Framework.


The Law That Snuck Up on Texas Firms

Most conversations about AI regulation in professional services focus on Colorado, Illinois, or the EU. Texas doesn't get the same attention. That's a problem if you're a professional services firm owner in Dallas, Houston, Austin, or San Antonio.

The Texas Responsible AI Governance Act — TRAIGA — took effect January 1, 2026. It is already enforceable. The Texas Attorney General's office is the enforcement authority.

TRAIGA applies to any business that:

  • Operates an AI system in Texas, or
  • Provides products or services to Texas residents that involve an AI system

That's an expansive reach. A three-attorney family law firm in Fort Worth using ChatGPT for research drafts: covered. A ten-person accounting firm in Houston with Copilot for Microsoft 365 turned on: covered. A consulting firm in New York with Texas clients: likely covered.

The law did not create a carve-out for professional services. Lawyers, accountants, consultants, and staffing agencies are in scope.


What TRAIGA Actually Prohibits

TRAIGA's prohibitions are not aimed at typical professional services AI use. The law prohibits AI systems specifically designed to:

  • Incite or facilitate self-harm or suicide
  • Facilitate criminal activity
  • Generate child sexual abuse material (CSAM)
  • Enable AI-based social scoring by government entities

If your firm uses ChatGPT, Microsoft Copilot, Claude, CoCounsel, or QuickBooks AI — none of these tools are designed for any of those purposes. You are not going to fail a TRAIGA audit because your associate used Claude to draft a contract.

The compliance problem is not that your tools are doing something bad. The compliance problem is that you haven't documented that they aren't.

That's the gap TRAIGA creates for small professional services firms. Not deliberate misuse — the absence of documentation showing you've thought about it at all.


The NIST Safe Harbor — Your Most Practical Protection

Here's what most articles about TRAIGA don't emphasize enough: the law includes a statutory safe harbor.

If a TRAIGA violation is found at your firm, and you can demonstrate that your AI practices are aligned with the NIST AI Risk Management Framework (NIST AI RMF), you are protected from liability. Additionally, TRAIGA gives you 60 days from the date of an Attorney General notice to cure any violation before enforcement action proceeds.

This combination — documented NIST alignment plus a 60-day cure period — makes TRAIGA significantly less threatening for firms that do their minimum compliance work upfront.

The NIST AI RMF is organized around four functions: Govern, Map, Measure, Manage. For a small professional services firm, alignment doesn't mean hiring a compliance consultant or standing up a formal AI risk program. It means:

  • Govern: You have a named person responsible for AI oversight at your firm (could be a partner, the managing attorney, or the owner).
  • Map: You have a written inventory of AI tools your firm uses and the tasks they're used for.
  • Measure: You have a documented review of each tool against TRAIGA's prohibitions — confirming the vendor terms don't permit the harmful uses the law targets.
  • Manage: You have a process for reviewing new AI tools before staff adopts them.

One afternoon. Four pieces of documentation. That's what NIST alignment looks like for a 10-person firm.


The Three-Step Compliance Checklist

Step 1: AI Tool Inventory

Write down every AI tool your firm uses. Every tool — not just the ones you officially approved.

Start with what you know: your practice management software's AI features, any AI writing or research tools, Microsoft 365 Copilot, Google Workspace AI features, ChatGPT subscriptions. Then ask your staff. In most professional services firms, individual attorneys, accountants, or consultants have adopted AI tools on their own — tools the firm hasn't officially reviewed.

The 8am 2026 Legal Industry Report found that 70% of individual legal professionals use AI, but 43% of firms have no AI policy and 91% have no actively enforced policy. The practical implication: your staff is almost certainly using AI tools your firm hasn't reviewed. Those tools, used with client data, create both TRAIGA exposure and malpractice/confidentiality exposure.

Your inventory document needs:

  • Tool name
  • Vendor
  • What tasks it's used for at your firm
  • Who uses it (role, not necessarily name)
  • Whether it's been officially reviewed
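The inventory above is just structured data, and keeping it in a machine-readable form makes the annual update trivial. Here is a minimal sketch in Python that writes the inventory to a CSV and flags unreviewed tools; the specific tools and entries are illustrative placeholders, not recommendations.

```python
import csv

# Illustrative sketch of a TRAIGA-style AI tool inventory.
# The tools and entries below are hypothetical examples.
FIELDS = ["tool", "vendor", "tasks", "used_by", "officially_reviewed"]

inventory = [
    {"tool": "ChatGPT", "vendor": "OpenAI",
     "tasks": "research drafts", "used_by": "associates",
     "officially_reviewed": "yes"},
    {"tool": "Copilot for Microsoft 365", "vendor": "Microsoft",
     "tasks": "document drafting, email summaries", "used_by": "all staff",
     "officially_reviewed": "no"},
]

# Write the inventory to a dated, shareable CSV file.
with open("ai_tool_inventory.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(inventory)

# Flag tools that still need an official review (Step 2).
unreviewed = [row["tool"] for row in inventory
              if row["officially_reviewed"] != "yes"]
print("Needs review:", unreviewed)
```

A spreadsheet works just as well; the point is that the five fields stay consistent from one review cycle to the next.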

Step 2: Vendor Terms Review

For each tool on your inventory, confirm the vendor's terms of service don't permit the uses TRAIGA prohibits.

You are not looking for the tool's full policy framework. You are looking for one specific thing: does this tool's design or terms permit it to facilitate self-harm, criminal activity, or the generation of harmful content?

The answer for every major professional AI tool — ChatGPT, Claude, Microsoft Copilot, Google Gemini, CoCounsel, Westlaw AI, QuickBooks AI — is no. These tools explicitly prohibit those uses in their terms.

Document this review. Write a one-paragraph memo: "We reviewed the terms of service for [Tool Name] on [Date]. The vendor's terms prohibit use of the platform to [paraphrase relevant prohibitions]. We determined this tool is compliant with TRAIGA's core prohibitions."

One paragraph per tool. Signed by whoever conducted the review. Dated.
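Because every memo follows the same template, you can generate them mechanically. A minimal sketch, assuming hypothetical inputs (the tool, reviewer, and prohibition wording below are examples, not legal language):

```python
from datetime import date

# Hypothetical generator for the one-paragraph vendor-terms review memo.
MEMO_TEMPLATE = (
    "We reviewed the terms of service for {tool} on {review_date}. "
    "The vendor's terms prohibit use of the platform to {prohibitions}. "
    "We determined this tool is compliant with TRAIGA's core prohibitions.\n"
    "Reviewed by: {reviewer}\n"
)

def vendor_review_memo(tool, prohibitions, reviewer, review_date=None):
    """Return a dated, signed one-paragraph review memo for one tool."""
    review_date = review_date or date.today().isoformat()
    return MEMO_TEMPLATE.format(tool=tool, prohibitions=prohibitions,
                                reviewer=reviewer, review_date=review_date)

memo = vendor_review_memo(
    tool="Claude",
    prohibitions="facilitate self-harm, criminal activity, or CSAM generation",
    reviewer="Managing Partner",
    review_date="2026-03-14",
)
print(memo)
```

However you produce it, the memo only counts if it names the tool, the date, and the reviewer.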

Step 3: Document the Review

Collect the inventory and the vendor review memos in a single file — a folder in your document management system, a shared Google Doc, whatever fits your practice.

Date the file. Name the person responsible for maintaining it. Set a calendar reminder to update it annually or when you adopt a new AI tool.

That's your TRAIGA compliance documentation. It demonstrates that your firm has governed its AI use — which is exactly what the NIST AI RMF safe harbor requires.


How TRAIGA Fits the Larger Regulatory Picture

TRAIGA is not an isolated law. It's part of an accelerating wave of state AI regulation that professional services firm owners are going to have to track.

  • Illinois HB 3773 (effective January 1, 2026): AI in employment decisions. Staffing firms and any firm that uses AI-assisted screening for hiring are covered.
  • Colorado Artificial Intelligence Act (effective February 1, 2026): AI in "consequential decisions" — hiring, lending, insurance, healthcare, housing. Consulting firms that advise on these processes should review.
  • Washington HB 1170 + HB 2225 (passed March 13, 2026): AI disclosure and chatbot safety requirements. Signed into law last week.
  • Illinois SB 3601 (Professional AI Oversight Act, advancing): Would require mandatory consumer disclosure for AI use in professional services. Watch this one.

The Transparency Coalition (transparencycoalition.ai) publishes a weekly legislative tracker. If your firm operates across multiple states, bookmark it.

The pattern across all of these laws is the same: transparency and documentation. Regulators are not (yet) telling you which AI tools to use or prohibiting AI from professional services. They're saying: know what you're using, document that you've reviewed it, and be able to show your work.

The firms that will have a compliance problem in 2027 and 2028 are the ones that treat each law as an isolated event to react to. The firms that will be fine are the ones that build a minimal AI governance practice now — inventory, vendor review, documentation, designated owner — that applies across all these frameworks at once.


The Action Item This Week

If your firm is in Texas — or serves Texas clients:

  1. Build your AI tool inventory. Every tool in use. Ask your staff. Expect surprises.

  2. Review vendor terms for each tool. One paragraph per tool documenting that review. Should take 30-60 minutes for most firms.

  3. File the documentation. A dated folder with the inventory and review memos. Name the person responsible.

That's three steps. One afternoon. And you have the NIST AI RMF safe harbor in your back pocket if the AG ever comes knocking.

If you operate in Colorado or Illinois as well, the same documentation serves as your foundation for compliance there. One governance practice. Multiple laws covered.


The Crossing Report covers the transition to AI for professional services firm owners — accounting, law, consulting, staffing, and marketing agencies. Subscribe here for weekly insights on what's changing and exactly what to do next.


Frequently Asked Questions

What is the Texas Responsible AI Governance Act (TRAIGA)?

TRAIGA is a Texas state law that took effect January 1, 2026. It applies to any business that operates an AI system in Texas or whose products and services are used by Texas residents. The law prohibits AI systems designed to incite self-harm, facilitate criminal activity, or produce certain harmful content. It also restricts AI-based government social scoring. For professional services firms, the practical compliance obligation is: know which AI tools your firm uses, verify those tools don't violate TRAIGA's prohibitions, and document that review.

Does TRAIGA apply to a law firm, accounting firm, or consulting firm in Texas?

Yes. TRAIGA applies broadly to any business operating in Texas or serving Texas residents. A 10-attorney law firm in Houston using ChatGPT, Copilot, or CoCounsel is covered. An accounting firm in Dallas using QuickBooks AI or Copilot for Microsoft 365 is covered. A consulting firm serving Texas clients from out of state is likely covered. The law doesn't carve out professional services.

What is the NIST AI Risk Management Framework safe harbor under TRAIGA?

TRAIGA provides a safe harbor for firms that align their AI use to the NIST AI Risk Management Framework (NIST AI RMF). If a TRAIGA violation is discovered at your firm and you can demonstrate alignment with the NIST AI RMF, you're protected from liability — and you have 60 days from the date of an Attorney General notice to cure any violation. The NIST AI RMF is publicly available at nist.gov/itl/ai-risk-management-framework. For a small firm, alignment doesn't mean a full enterprise implementation — it means documenting your AI risk practices against the framework's four core functions: Govern, Map, Measure, Manage.

What's the minimum viable TRAIGA compliance action for a small professional services firm?

Three steps: (1) Inventory which AI tools your firm uses — all of them, including tools individual staff have adopted on their own. (2) Review each vendor's terms of service to confirm the tool doesn't violate TRAIGA's prohibitions (incitement of self-harm, criminal activity facilitation, or CSAM). Major enterprise tools from Microsoft, Google, Anthropic, and OpenAI will not. (3) Document that review in writing — a one-page memo noting what tools you use, that you reviewed the vendor terms, and the date of review. That's your baseline.

What happens if a firm is found to have violated TRAIGA?

The Texas Attorney General is the enforcement authority. If your firm receives a notice of violation, TRAIGA gives you 60 days to cure. The safe harbor protects firms that have documented NIST AI RMF alignment — demonstrating that your AI practices were governed, even if a specific violation occurred, significantly reduces your legal exposure. The law's primary targets are AI systems that are purposefully designed for harmful use, not firms that have inadvertently deployed a non-compliant vendor tool.

Is Texas TRAIGA similar to the Colorado AI Act or Illinois HB 3773?

They address related concerns but work differently. Colorado's AI Act (effective February 1, 2026) focuses on algorithmic discrimination in consequential decisions — hiring, lending, insurance. Illinois HB 3773 (effective January 1, 2026) addresses AI in employment decisions specifically. TRAIGA focuses on prohibiting specific harmful AI uses and requires firms to be able to demonstrate their AI tools don't facilitate harm. If your firm operates in multiple states, you may need to track all three. The Transparency Coalition (transparencycoalition.ai) publishes a weekly legislative tracker.
