Why Your AI Tools Are Making You Slower — and What to Do About It
Published: April 29, 2026 | By: The Crossing Report
Here's a statement that will feel familiar if you've been using AI tools for the past year or two: your AI tools are working, and your firm is slower.
Both things are true at the same time. The AI is doing its job. The efficiency gains aren't materializing. If you've been wondering why, two data stories published this week name the mechanism — and they come from opposite ends of the professional services world.
The first is from legal AI. Ed Walters, VP of Legal Innovation at Clio, writing in Artificial Lawyer on April 29, 2026, introduced a term that puts words to something partners have been experiencing for 18 months: the verification tax. The second is from the staffing industry, where firms deployed AI hiring tools at scale starting in 2021 — and watched time-to-hire go up, not down.
Same mechanism. Different firms. One diagnosis.
The Verification Tax: What Clio's Research Found
The verification tax is what happens when partners spend more time checking AI-generated output than they saved by generating it faster.
AI drafts faster. The drafting time drops. But then something else has to happen: a senior attorney — someone with judgment and accountability — has to verify that the AI-drafted document is correct before it goes to a client. And that verification step turns out to take roughly as long as the drafting step used to.
In some cases, it takes longer. Because when you drafted the document yourself, you knew what was in it. When the AI drafts it, you have to read it as if you didn't write it, which means reading it more carefully, checking citations you didn't put in, confirming reasoning you didn't develop. The AI compresses your drafting time. It doesn't compress your verification responsibility.
The result is a law firm that bought AI drafting tools, watched its associates use them, and found that partner review time increased to compensate. Net time saved: near zero. Net cost added: the tool license, the training time, and the partner hours now spent reviewing AI output instead of reviewing associate drafts.
Walters coined the term "verification tax" to describe this dynamic because it functions exactly like a tax: it's a structural cost that accrues alongside every AI efficiency gain, whether or not you plan for it.
The Staffing AI Paradox: Same Problem, Different Pipeline
A world away from the legal sector, staffing companies deployed AI at a faster rate than almost any other professional services industry. Automated screening, AI-scored assessments, intelligent shortlisting: by 2024, 37% of organizations had integrated AI into some part of their hiring process.
Time-to-hire went up 24%.
The average time-to-hire was 33 days in 2021. By 2024 it was 41 days. During the same period, AI investment in hiring surged. The tools are working — AI screening genuinely processes more candidates faster than manual review. So why is hiring slower?
Because the bottleneck was never resume screening.
AI made screening faster, then handed the candidate off to a scheduling tool (separate system), which handed off to a video interview platform (another separate system), which handed off to a scoring platform (another separate system), which handed off to a hiring manager review queue (usually email). At each handoff, context has to be reconstructed. Candidates wait. Forty-two percent of candidates dropped out specifically because scheduling after AI screening took too long.
Hires per recruiter declined from approximately 7 per quarter in 2021 to 5.4 per quarter in 2024 — a 23% drop during the AI investment surge. The average organization now runs 26 HR tech modules, up from 10 in 2020, with 50% overlap in functionality.
More tools. More stages. More handoffs. Slower firm.
This is the staffing AI paradox: AI made one stage of the pipeline faster and made the whole pipeline slower by adding friction at every handoff between stages.
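As a sanity check, the headline percentages follow directly from the raw figures cited above:

```python
# Figures from the article: time-to-hire 33 -> 41 days (2021 -> 2024),
# hires per recruiter 7 -> 5.4 per quarter over the same period.
rise = (41 - 33) / 33   # time-to-hire increase
drop = (7 - 5.4) / 7    # hires-per-recruiter decline

print(f"{rise:.0%}")  # 24%
print(f"{drop:.0%}")  # 23%
```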
Why Local Efficiency Doesn't Add Up to Firm Efficiency
The mechanism in both cases is the same, and it applies equally to accounting firms, consulting practices, and marketing agencies.
Here's how to think about it:
Local efficiency means one task in your workflow gets done faster. AI drafts faster. AI screens faster. AI processes faster. This is real — these efficiency gains are measurable at the task level.
Firm efficiency means the time from "client engagement begins" to "deliverable delivered" goes down. This is what you actually care about. This is what determines your capacity, your margins, and how many clients you can serve.
Local efficiency becomes firm efficiency only when it flows forward without friction. If AI compresses drafting time but creates a verification bottleneck, or compresses screening time but creates a scheduling bottleneck, local efficiency is captured by the bottleneck. The firm runs at the speed of its slowest stage — and adding AI to every stage except the slowest one doesn't change that.
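A toy model makes the arithmetic concrete. All stage times below are hypothetical, chosen only to illustrate the pattern, not drawn from any firm's data:

```python
# Hypothetical hours per deliverable for a four-stage workflow.
before = {"intake": 2, "drafting": 10, "review": 12, "delivery": 1}

# AI compresses drafting 5x but leaves the review stage untouched.
after_ai = dict(before, drafting=2)

print(sum(before.values()))    # 25 hours end to end
print(sum(after_ai.values()))  # 17 -- but only if review time holds steady

# The verification tax: review expands to absorb the drafting savings.
with_tax = dict(after_ai, review=20)
print(sum(with_tax.values()))  # 25 -- the local gain never reaches the firm
```

The point of the sketch is the last line: speeding up a non-bottleneck stage changes the task-level numbers without changing the end-to-end number.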
The accounting version: AI processes returns faster, but review checklists got longer because the AI can make errors senior staff weren't making before. The bottleneck was never data entry. It was review.
The consulting version: AI generates research faster, but senior consultants now spend more time validating AI research than they used to spend doing research. The bottleneck was never information retrieval. It was judgment.
The staffing version: AI screens candidates faster, but time-to-hire went up because the bottleneck was never resume review. It was coordination and scheduling.
Same pattern. Different firms. One diagnostic question: Where in your workflow does work actually stop before it moves forward?
Three Firm Types, One Diagnostic
Before you buy another AI tool, run this diagnostic on your current workflow. It takes about an hour.
Map your workflow stages end to end. Start with the moment client work enters your process and end with the moment the deliverable leaves. List every stage in between. For most professional services firms, this is 6–12 stages.
Identify every handoff. A handoff is any moment when work passes from one person, system, or tool to the next. Mark every handoff on your list.
For each handoff, answer two questions: How long does work sit at this handoff? Does the recipient have to reconstruct context that the sender had?
Find your bottleneck. It's almost always one stage — the stage where the most work piles up and waits. For most firms, the bottleneck is not in the AI-assisted stages. It's in the human-judgment stages downstream.
Before adding a tool, ask whether the bottleneck is a tool problem or a workflow problem. Verification tax and handoff friction are workflow problems. More AI tools don't solve them. Fewer, better-connected tools solve them — or, more often, workflow changes that reduce handoff friction solve them without new tools at all.
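The diagnostic above can be captured in a few lines. The handoff names and wait times here are invented for illustration; the only real logic is the rule from step four, that the bottleneck is the handoff where work waits longest:

```python
# Hypothetical workflow map: (handoff, hours work sits waiting,
# does the recipient have to reconstruct context?).
handoffs = [
    ("intake -> drafting",    4, False),
    ("drafting -> review",   30, True),   # partner review queue
    ("review -> delivery",    2, False),
]

# The bottleneck is where the most work piles up and waits.
bottleneck = max(handoffs, key=lambda h: h[1])
print(bottleneck[0])  # drafting -> review
```

In this sketch the AI-assisted stage (drafting) is not the problem; the human-judgment stage downstream of it is, which matches the pattern the diagnostic is designed to surface.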
The Integration Audit: Five Steps
If your diagnostic reveals the bottleneck is at a handoff — work stops moving because of friction between systems or stages — here's how to audit your integration layer.
Step 1: List every system involved in your end-to-end workflow. Include your AI tools, your project management system, your document storage, your communication platforms, and any client-facing portals.
Step 2: For each pair of adjacent systems, note whether context transfers automatically or requires a human to reconstruct it. "The AI screening result automatically populates the scheduling tool" = low friction. "Someone has to copy the AI summary into the scheduling tool before it can be sent to the hiring manager" = high friction.
Step 3: Rank your handoffs from highest to lowest friction. The highest-friction handoffs are your candidates for elimination or automation.
Step 4: For each high-friction handoff, determine whether the friction comes from a missing integration (the two tools don't talk to each other), a process gap (no one decided who's responsible for this step), or a verification requirement (someone has to check something before it moves forward).
Step 5: Address verification requirements last — they exist for a reason. Address missing integrations first, process gaps second. Most firms find they can eliminate 2–3 high-friction handoffs without buying a single new tool.
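The five steps reduce to a small ranking exercise. The audit rows below are hypothetical examples of what Step 4's classification might produce; the ordering logic encodes Step 5's rule (integrations first, process gaps second, verification requirements deliberately last):

```python
# Hypothetical audit of a hiring pipeline: (handoff, friction 1-5, cause).
audit = [
    ("screening -> scheduling",     5, "missing integration"),
    ("scheduling -> interview",     3, "process gap"),
    ("interview -> manager review", 4, "verification requirement"),
]

# Fix order per Step 5; within each cause, tackle higher friction first.
fix_order = {"missing integration": 0,
             "process gap": 1,
             "verification requirement": 2}

plan = sorted(audit, key=lambda row: (fix_order[row[2]], -row[1]))
for handoff, friction, cause in plan:
    print(f"{handoff}: {cause} (friction {friction})")
```

Note that the highest-ranked fix is a missing integration, not a new tool, which is the point of the audit.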
What "Fewer Tools, Better Connected" Actually Looks Like
The firms that are actually getting faster from AI have something in common that surprised researchers: they have fewer AI tools than their peers.
Not more. Fewer.
The reason isn't that they're behind on AI adoption. The reason is that they stopped buying point solutions — tools that do one thing in isolation — and started asking whether their existing tools could be connected to pass work forward without friction.
A law firm that uses a single AI platform for research, drafting, and review — even if each component is slightly less capable than the best standalone tool for each function — moves faster than a firm with the best research tool, the best drafting tool, and the best review tool that don't talk to each other.
A staffing firm that runs screening, scheduling, and candidate communication through a connected system — even a less sophisticated one — hires faster than a firm running three best-in-class tools that require human handoffs between every stage.
This is not an argument against capability. It's an argument against fragmentation. The verification tax isn't solved by adding a verification tool. It's solved by asking: which of our current tools already has the verification capability we need, and can we route work through that tool instead of adding another one?
For most professional services firms, the answer is yes — and the audit takes an afternoon, not a procurement cycle.
For a deeper look at the operational risk when AI agents get things wrong silently — the companion problem to verification tax — read our piece on the ICLR Reasoning Trap.
Subscribe to The Crossing Report for weekly intelligence on the AI transformation reshaping professional services.
Frequently Asked Questions
What is the verification tax in AI?
The verification tax is the overhead cost firms pay when partners or senior staff spend more time checking and validating AI-generated output than they saved by using AI in the first place. The term was coined by Ed Walters, VP of Legal Innovation at Clio, in April 2026, drawing on research into AI adoption patterns at law and professional services firms. The core finding: AI compresses drafting time, but verification time increases to compensate, often at the same rate or faster. Net efficiency gain: near zero or negative.
Why did time-to-hire increase even as AI adoption surged?
Despite 37% of organizations integrating AI into their hiring processes, average time-to-hire increased 24% since 2021 — from 33 days to 41 days. The cause is not AI screening being slow; AI screening is faster. The cause is handoff friction between 3–7 disconnected tools in the hiring pipeline. AI screens a candidate faster, but then the candidate sits in a scheduling queue. Forty-two percent of candidates drop out specifically because scheduling after AI screening took too long. Local efficiency at one stage was consumed by friction at the next.
How does the staffing AI paradox apply to professional services firms?
The staffing AI paradox is a direct parallel to the verification tax in legal and accounting contexts. In both cases, AI tools produce local efficiency gains — faster screening, faster drafting — that are erased by downstream bottlenecks. For professional services firms, the lesson is the same: adding AI tools to one part of your workflow without addressing handoffs, integration gaps, and verification responsibilities at other stages doesn't produce firm-level efficiency. It produces firm-level frustration.
What is the difference between point solution AI and integrated AI in a professional services firm?
A point solution is an AI tool that solves one specific task in isolation — a contract drafting assistant, an AI screening tool, an AI invoice processor. Point solutions create local efficiency: that one task gets done faster. Integrated AI connects multiple workflow stages so that efficiency at one stage flows through to the next. The firms that are actually getting faster from AI have fewer tools, not more — and those tools are sequenced to pass context forward rather than requiring humans to reconstruct it at each handoff.
How do I run an AI pipeline integration audit for my firm?
Five steps: First, map every workflow stage that touches AI — list the task, the tool, and what happens when that task is complete. Second, identify every handoff between stages — specifically, where one person or tool hands output to the next. Third, for each handoff, note how long it takes and whether a human is required to reconstruct context. Fourth, identify your firm's bottleneck — the one stage that delays everything downstream. Fifth, before buying another tool, ask whether fixing that bottleneck requires a new tool or a workflow change (usually the latter).