Your Prospects Are Asking These 4 AI Questions Before They Sign With You
Published March 15, 2026 · By The Crossing Report · 6 min read
Summary
A consistent theme ran through the Legalweek 2026 panels (March 9-12, New York): corporate legal clients now require outside counsel to demonstrate AI capabilities in RFPs and initial pitches — not just in pricing conversations. As one panel put it, per Law.com's reporting: "AI literacy is now table stakes in procurement." The same shift is migrating to accounting and consulting. Here are the four questions clients are asking, and how to build your answers before your next competitive pitch.
What Changed at Legalweek 2026
Legalweek, the biggest legal technology conference of the year, runs in New York every March. The 2026 edition (March 9-12) featured the usual vendors and demonstrations, but a different theme emerged from the client-side panels.
Multiple sessions featuring corporate legal department leaders converged on one point: AI capability is now a selection criterion, not a differentiator.
The framing from one panel, reported by Law.com: "AI literacy is now table stakes in procurement."
This is a specific and meaningful statement. "Table stakes" in procurement means that firms without AI capability don't make the shortlist — not that they lose head-to-head on points. Firms that can't answer AI questions clearly aren't being evaluated; they're being filtered out before the evaluation begins.
The questions corporate legal teams are now asking in RFPs and pitches are not abstract. They're operational.
The Four Questions
Question 1: Which AI tools do you use, and for what tasks?
This is a capability inventory question. Clients aren't asking whether you've heard of ChatGPT — they're asking whether you have AI integrated into your actual workflow and which specific tools handle which specific tasks.
The wrong answer: "We're exploring AI options" or "We use AI as appropriate."
A good answer: "We use CoCounsel for legal research on complex matters where the output needs to be citation-ready. We use Claude for first-draft document review and summarization of large document sets. We don't use AI for final work product that goes to a court without full attorney review."
The specificity is the signal. Firms that can name tools and use cases have built real practice. Firms that speak in generalities haven't.
Question 2: What is your AI policy?
This question is about governance, not just capability. Corporate legal and procurement teams have their own AI policies — increasingly, their vendor policies require outside firms to have compatible ones.
What they want to know: Is there a documented internal policy on which tasks require human review? Is that policy firm-wide or just individual practitioner discretion?
The wrong answer: "We trust our attorneys to use AI responsibly."
A good answer: "Our policy requires human review of all AI-generated content before it reaches a client. For research: a licensed attorney verifies citations. For drafts: the supervising attorney reviews the full document, not just the AI summary. We have a written policy we can share."
A written policy — even a one-page document — answers this question definitively. The firms that have thought to write one are ahead of those that haven't.
Question 3: Who reviews AI output before it reaches us?
This is the accountability question. It's related to policy but more specific: not "what's the policy" but "who actually does this."
Corporate clients have seen enough high-profile AI errors in legal filings (the 4th Circuit reprimand this month being the most recent) that they want to know the human name and title attached to AI oversight, not just the abstract policy.
The wrong answer: Any answer that doesn't name a role or protocol.
A good answer: "Every AI-generated document is reviewed by [role] before delivery. For [specific task type], the reviewing attorney is always senior to the matter level. Our engagement letters specify that we use AI assistance with human oversight — clients know before we start."
This answer also handles the disclosure question that many legal ethics authorities now require.
Question 4: How do you handle data security with AI tools?
This question has two sub-components: Where does client data go when it enters an AI tool? And who else can access it?
The minimum acceptable answer requires knowing the data handling policies of every AI tool in your workflow — not the general privacy policy, but specifically whether your client data is used to train models, whether it's retained, and whether it's accessible to the vendor.
The wrong answer: "We use leading AI tools with strong security."
A good answer: "We use enterprise-tier subscriptions for all AI tools that process client data. [Tool A] does not retain or train on client data — we use the [enterprise/business tier] which has a data processing agreement. [Tool B] is used only for non-client-specific research. We have a data handling addendum we can provide."
This level of specificity is achievable in an afternoon for any firm that has a small number of AI tools in use. It requires knowing which tier you're subscribed to and reading the relevant data handling documentation.
Why This Applies Beyond Law Firms
The Legalweek 2026 data reflects corporate legal procurement. But the pattern is not law-specific.
Accounting firms are beginning to see similar questions from enterprise clients about AI use in tax analysis, audit preparation, and financial reporting. Consulting firms are fielding questions about AI use in research synthesis and deliverable preparation. The common thread: enterprise clients have their own AI governance requirements and they want assurance that their advisors' AI practices are compatible.
For firms serving small-to-mid-size business clients, this shift is 12-24 months away from becoming a procurement standard. But the positioning value is available now: a small law firm or accounting firm that can answer these four questions confidently in a first meeting with a prospective client is differentiating from competitors who can't — not because clients demanded it, but because the firm was ready.
Building Your AI Positioning in a Day
Most professional services firms that have been using AI tools for 6+ months have more to say than they've articulated. The gap is documentation, not practice.
Step 1 — Inventory (60 minutes). List every AI tool currently in use at your firm. For each: what task it performs, which subscription tier you're on, and whether client data enters the tool.
Step 2 — Review protocol (30 minutes). For each tool, document who reviews AI output before it goes to a client and what they're checking. This is your human oversight protocol.
Step 3 — Policy document (60 minutes). Write a one-page AI use policy from the inventory and review protocol. Include: which tools are approved, which tasks require human review, and how client data is handled. Have a partner sign it.
Step 4 — Pitch language (30 minutes). Write two paragraphs for use in RFP responses and pitches: one describing what AI tools you use and for what, one describing your oversight and data security approach.
That's your AI positioning. It's honest, specific, and answerable. It's what the firms winning competitive evaluations have ready.
Your Move This Week
Pull up your last three client pitches or proposals. Count how many times AI capability was mentioned — either by you or by the client.
If the answer is zero: that's about to change. Your next pitch is the right time to introduce it proactively, before a prospect asks and you're caught without an answer.
Draft your two-paragraph AI positioning statement this week. If you have one or two AI tools in active use, you have enough to write it. If you don't have any — your first step is picking one tool, for one workflow, and starting.
The firms that can answer these four questions confidently aren't just winning procurement evaluations. They're demonstrating the kind of operational thoughtfulness that retains clients in a market where the alternative is a process company that can do the same work for less.
The Crossing Report covers AI adoption for professional services firm owners every Monday. Subscribe here.
Frequently Asked Questions
Are clients really asking about AI capability in RFPs and pitches?
Yes — this shift was confirmed by multiple panelists at Legalweek 2026 (March 9-12, New York), the largest legal technology conference in the US. The specific framing from Legalweek panelists: "AI literacy is now table stakes in procurement." Corporate legal departments are asking directly about AI use in research, document review, and drafting — not as a nice-to-have, but as selection criteria. The same shift is beginning in accounting and consulting: enterprise clients increasingly want to know whether their advisors are using AI and how they're managing it responsibly.
What specific questions are clients asking about AI in the hiring process?
Based on Legalweek 2026 reporting and Law.com coverage, corporate clients are asking four categories of questions: (1) Which specific AI tools do you use in legal research, document review, or drafting? (2) What is your firm's policy on AI use — which tasks require human review? (3) Who reviews AI output before it reaches us as a client? (4) How do you handle data security when using AI on client matters? Firms that can answer these questions concisely and confidently are winning the comparison. Firms that dodge or express uncertainty are being filtered out.
Does this apply to small professional services firms, or only large firms?
The Legalweek 2026 data reflects corporate legal department procurement practices, which most directly affect law firms serving corporate clients. But the pattern is migrating to accounting and consulting: enterprise clients of accounting firms are asking whether their accountants use AI for tax analysis and reporting, and consulting clients increasingly want to know whether their advisors can use AI to accelerate research synthesis. For small firms that serve small-to-mid-size businesses rather than enterprises, this shift is on a 12-24 month horizon. But positioning now costs little, and the firms that build clear AI practice answers first will win competitive comparisons as those clients begin asking.
What if our firm is still figuring out our AI approach — how do we answer these questions honestly?
The honest answer is better than the evasive one. If you're in early stages, something like: "We use [specific tool] for [specific task], and we're building out our policy framework this quarter." That's a confident, factual answer. What procurement teams are screening against is confusion or avoidance — firms that clearly haven't thought about it. Having a partial, honest AI story is far better than deflecting the question. If you have any AI tool in active use on client work, you have an answer. Name it. Describe the human review step. That's your starting position.
How do we build an AI positioning answer for our firm quickly?
Start by auditing what's already in use. Most professional services firms that have been using any AI tool for 6+ months have an informal practice that hasn't been articulated. Document: which tools are in active use, for which tasks, with what review protocol, and what data security measures apply (the tool's own privacy policy, whether client data stays in the system). That documentation, put into two paragraphs, is your AI positioning statement. It doesn't need to be impressive — it needs to be specific. Specific is what differentiates you from firms that answer "we're exploring AI options."