You Deployed AI Six Months Ago. Do You Know If It's Working?
Published March 15, 2026 · By The Crossing Report
Here's the question most professional services firm owners aren't asking about their AI tools: Is it working?
Not "are we using it?" — 40% of firms have deployed AI at the organizational level in 2026, up from 22% in 2025. The usage question is largely answered. The question that separates the firms compounding their advantage from the ones buying tools and hoping for the best is measurement.
The Thomson Reuters 2026 AI in Professional Services Report surveyed 1,514 professionals across 27 countries. One finding stands apart from the adoption numbers: only 18% of professional services firms formally measure whether their AI tools are delivering ROI. Another 40% of respondents don't know whether their organization even tracks it.
That means 82% of firms with deployed AI are flying blind.
Summary
The firms winning with AI in 2026 aren't just using more tools — they're closing the feedback loop. Five specific metrics tell you whether AI is working at a 5–50 person professional services firm. The measurement process takes 30 minutes per month once you've established baselines. Here's what to track and how.
Why Most Firms Don't Measure (And Why That's Costing Them)
The instinct to skip measurement is understandable. You deployed a tool, it seems to be helping, your team likes it well enough — why add overhead to track it formally?
Two reasons that matter:
First, "seems to be helping" is not a defensible position when you're making the next technology decision. The firms that measured their first AI workflow are the ones making better second and third decisions — because they know which tool produced results and which one looked good in a demo but didn't change anything. Without a baseline, every new tool decision is a guess.
Second, the client-facing metrics are where the real business case lives — and most firms are measuring the wrong thing. The Thomson Reuters data found that of firms that do measure AI ROI, 77% focus on cost savings and 64% track employee usage. Fewer than 30% track client-facing metrics: client satisfaction, client retention, or new business influenced by AI-enabled services.
For a professional services firm, which is fundamentally a relationship business, the metric that matters most is whether clients noticed. An accounting firm that cut tax prep time by 40% and didn't improve client communication or satisfaction is capturing cost savings but missing the business case. The same firm that cut prep time AND started delivering cleaner summaries and faster responses has a story to tell — and a result that justifies expanding.
The Five Metrics Worth Tracking
These are the five measurements that tell a clear story for a 5–50 person professional services firm. You don't need all five to start — pick the two that match the workflows you've deployed.
1. Time per matter or engagement
What it measures: Are AI-assisted workflows actually shorter than they were before?
How to baseline it: Pull the average time per matter type from your billing or project management system for the last 90 days. Log it. After 30 days of AI-assisted workflow, compare.
What good looks like: Firms using AI meeting summarization consistently report 30–60 minutes recovered per week in post-call documentation time. Accounting firms using AI in tax prep report 50–70% reductions in preparation time for standard returns. If you're not seeing a measurable reduction within 30 days, the workflow setup — not the tool — likely needs adjustment.
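The before/after comparison is simple enough to sketch in a few lines. The matter times below are hypothetical placeholders; pull your real numbers from your billing or project management system.

```python
# Hypothetical sketch: compare average time per matter before and after
# an AI-assisted workflow. The hour totals are illustrative only.

def avg_hours(matters):
    """Average hours per matter from a list of per-matter hour totals."""
    return sum(matters) / len(matters)

# 90-day baseline pulled from your billing system (hours per matter)
baseline_matters = [6.5, 7.0, 5.5, 8.0, 6.0]
# Same matter type, 30 days after deploying the AI workflow
assisted_matters = [4.5, 5.0, 4.0, 5.5, 4.0]

baseline = avg_hours(baseline_matters)
assisted = avg_hours(assisted_matters)
reduction_pct = (baseline - assisted) / baseline * 100

print(f"Baseline: {baseline:.1f} h/matter")       # 6.6 h/matter
print(f"AI-assisted: {assisted:.1f} h/matter")    # 4.6 h/matter
print(f"Reduction: {reduction_pct:.0f}%")         # 30%
```

A spreadsheet does the same job; the point is that the comparison is two averages and a percentage, not an analytics project.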
2. Revenue per client
What it measures: Are you capturing more value from the same client base — either by recovering previously unbilled time or by handling more volume per client?
How to baseline it: Average monthly revenue per active client over the last quarter. The biggest lever is billing capture: AI time-tracking tools that surface missed time entries have recovered an average of 5 unbilled hours per week for practitioners who adopt them. At $300/hour, that's $78,000 annually that was previously walking out the door.
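The arithmetic behind that annual figure, spelled out (the rate and recovered hours match the example above; substitute your own numbers):

```python
# Billing-capture arithmetic from the example above.
recovered_hours_per_week = 5   # unbilled time surfaced by AI time tracking
billable_rate = 300            # dollars per hour
weeks_per_year = 52

annual_recovered_revenue = recovered_hours_per_week * billable_rate * weeks_per_year
print(f"${annual_recovered_revenue:,}")  # $78,000
```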
What good looks like: Revenue per client should be flat or growing for the same scope of work. If you're doing more work for the same revenue after deploying AI, that's a signal that captured efficiency is benefiting the client but not the firm.
3. Client satisfaction score
What it measures: Did your service quality improve from the client's perspective — response times, communication clarity, deliverable quality?
How to baseline it: If you don't have a formal CSAT process, ask. A one-question post-engagement survey ("On a scale of 1–10, how satisfied are you with our service this month?") sent after project completion gives you a baseline. The goal isn't precision — it's a directional read over time.
What good looks like: Firms that deploy AI for client communication drafting (follow-up emails, engagement summaries, client updates) consistently see improvement in response rates and client-reported satisfaction. The clearest signal is clients who notice and mention it — "your team has been much more responsive lately."
4. Admin vs. client-facing time ratio
What it measures: Is the time your team spends on internal administration (meeting notes, billing entries, document formatting, email drafting) shrinking as a percentage of total hours?
How to baseline it: For one week, have each team member log whether each hour was client-facing work or internal administration. The ratio is your baseline. Repeat the same exercise 60 days after deploying AI workflows.
What good looks like: Professional services firms that systematically deploy AI across meeting notes, billing, and communication typically see admin time fall 20–30% within 60 days. That recovered capacity either goes to client work (revenue-positive) or helps staff work fewer hours on the same output (retention-positive). Both outcomes are measurable.
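One way to compute the ratio from a week of logged hours, as a minimal sketch (the hour totals are hypothetical):

```python
# Hypothetical sketch: admin time as a share of total logged hours,
# compared at baseline and again 60 days after deployment.

def admin_share(client_hours, admin_hours):
    """Admin time as a percentage of total logged hours."""
    return admin_hours / (client_hours + admin_hours) * 100

baseline = admin_share(client_hours=30, admin_hours=10)  # week before deployment
after_60d = admin_share(client_hours=32, admin_hours=8)  # week at the 60-day mark

print(f"Baseline admin share: {baseline:.0f}%")               # 25%
print(f"After 60 days: {after_60d:.0f}%")                     # 20%
print(f"Admin hours change: {(8 - 10) / 10 * 100:.0f}%")      # -20%
```

In this illustration, admin hours fell 20% (10 to 8) while client-facing hours grew, which is the revenue-positive version of the outcome described above.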
5. New matters or clients per month
What it measures: Is recovered capacity translating into growth?
How to baseline it: Track new client engagements or new matters opened per month over the last two quarters. After deploying AI workflows that free time, this metric should eventually increase — but it's the lagging indicator, not the leading one. Don't expect to see it move in the first 30 days.
What good looks like: A 5-person consulting firm that recovers 15 hours per week of capacity can take on one additional client engagement per month without adding headcount. A solo attorney who frees 30 minutes per day can respond to more new inquiries — and the data shows that response rate to new inquiries is one of the highest-leverage growth drivers in professional services (only 33% of law firms currently respond to new client inquiries at all).
The 30-Minute Monthly Measurement Process
Once your baselines are set, the monthly review is straightforward. Run it on the first business day of the month.
The four questions:
- For each active AI workflow: What was the metric last month versus baseline? (Time, revenue, satisfaction — whichever you chose per workflow.)
- What changed in the tool or workflow since last month? (New features, team adoption shifts, changes in how it's being used.)
- Which direction is the trend moving: improving, flat, or declining?
- One decision for this month: Continue as-is, adjust the workflow, or try a different tool for this use case.
That's it. Four questions, one decision, 30 minutes. The decision is the output — not a report, not a presentation.
What to Do If You Haven't Been Measuring
If you deployed AI tools in the last 12 months without establishing baselines, you've lost the ability to compare before/after precisely. That doesn't mean you can't start.
Start now with proxy baselines: what you estimate your metrics were before AI, based on memory and rough data. They won't be precise, but they're directional — and directionality is enough to make better decisions.
The more important step: establish formal baselines before the next tool decision. If you're evaluating a new AI workflow or tool in the next 30 days, set the baseline first. Log the current state of the metric you expect the tool to affect. Then deploy. Then compare at 30 days.
This is the discipline that separates the 18% of firms getting clear ROI from the 82% that are adopting and hoping. The measurement isn't complex. The commitment to do it before you need the number is.
Start Here This Week
Pick one workflow you've deployed AI in during the last six months. Identify one metric that workflow was supposed to affect — time, revenue, satisfaction, or capacity. Pull the current state of that metric. Compare it to where it was before deployment (or where you estimate it was).
Write the number down.
That's the beginning of the feedback loop that turns AI from an expense into a compounding advantage.
The Crossing Report covers what professional services firm owners need to know about AI — weekly, without the hype. Subscribe free.
Frequently Asked Questions
How do I know if my AI tools are actually helping my firm?
Start with one metric per workflow and compare before and after. For meeting summarization: how many hours per week do you spend on post-call notes now versus six months ago? For billing capture: compare average captured hours per week this quarter versus last year. For document drafting: what's the time from intake to first draft now versus before? Pick one workflow, establish a baseline, run AI-assisted for 30 days, then measure. The Thomson Reuters 2026 AI in Professional Services Report found only 18% of firms formally measure AI ROI — meaning 82% are flying blind on whether their tools are working.
What are the most important metrics for measuring AI ROI in a professional services firm?
Five metrics worth tracking for a 5–50 person professional services firm: (1) Time per matter or engagement — is it shorter? By how much? (2) Revenue per client — are you capturing more value from the same clients? (3) Client satisfaction — are clients responding better to communications and deliverables? (4) Staff hours on admin versus client-facing work — is the ratio improving? (5) New matters or clients per month — is capacity freed up translating into growth? Most firms that do measure focus only on internal efficiency (time saved, cost reduced), but fewer than 30% track client-facing metrics. For a relationship-based business, 'did our clients notice?' is more important than 'did we save internal time?'
How long does it take to measure AI ROI for a small firm?
About 30 minutes per month, once your tracking is set up. The setup — establishing baselines on two or three metrics before you change anything — takes one session of 60–90 minutes. After that, the monthly review is comparing this period to your baseline. The most common reason small firms don't measure is that they expect it to be complex. It doesn't require special software, or a spreadsheet any more sophisticated than what you already use for client reporting.
What mistakes do professional services firms make when measuring AI ROI?
Three common mistakes: (1) Measuring only cost savings and internal efficiency, ignoring client-facing outcomes. A law firm might save 8 hours per matter and not notice that client satisfaction scores improved 20% — which is the more defensible business result. (2) Not establishing a baseline before deploying AI. Without a 'before' number, you can't calculate an 'after.' (3) Waiting too long to measure. At 90 days of AI use, the tools are a habit and you've lost the ability to isolate their impact. Measure at 30 days, then 60, then quarterly.
Should every AI workflow be measured the same way?
No — match the metric to the workflow's purpose. For a workflow designed to save time (meeting notes, billing capture): measure time. For a workflow designed to improve quality (contract review, document drafting): measure error rate or revision cycles. For a workflow designed to improve client response (communication drafting, follow-up automation): measure response rates or client satisfaction. Using time savings as the only metric for a quality-improvement workflow will undercount the value and lead you to underinvest.