Why We Publish Our Numbers
ChatGPT advertising is new enough that most firms want proof before committing budget. We get it. Every PI firm that has spent $200 on a lead that never picked up the phone has earned the right to be skeptical about new channels. Every firm that has been promised "page one rankings" by an SEO vendor and seen nothing after six months has earned the right to ask for data before writing a check.
This page is where we share performance data from our pilot campaigns, updated as results come in. No cherry-picked numbers. No vanity metrics. Just the metrics that actually matter to a law firm owner: how many leads came in, what they cost, how many became signed cases, and how that compares to what the same firm spends on Google Ads.
Most ad agencies hide behind vague claims. "We increased leads by 300%." Increased from what? Three leads to nine? That does not help you make a budget decision. We show the raw numbers because if the channel works, the data speaks for itself. And if it does not work for a particular case type or market, we would rather tell you that upfront than take your money and hope you do not notice.
Transparency is not a marketing tactic for us. It is the only way to build trust in a channel this new. You are not going to find a decade of ChatGPT advertising case studies online. There is no industry benchmark report from WordStream or HubSpot. The data has to come from the agencies running campaigns right now, and we are one of the few doing it with a dedicated focus on personal injury.
What We Track
Every campaign we manage for a law firm is instrumented with full-funnel tracking. Here is exactly what we measure and report on:
- Impressions: How many times your ad appeared inside relevant ChatGPT conversations. This tells you the total reach of your campaign within your target geography and case type categories.
- Click-through rate (CTR): The percentage of users who saw your ad and engaged with it, either by clicking to your landing page or initiating a "Chat with Sponsor" interaction. Early CTR data from ChatGPT ads is encouraging compared to traditional display, because the ad appears in a high-attention context rather than a sidebar.
- Leads generated: Total form submissions plus phone calls, deduplicated. We do not count the same person twice. A lead is a real person who took action after seeing your ad.
- Cost per lead (CPL): Total ad spend divided by total leads. This is the number most firms care about, and it is the number we optimize against. We break this out by case type when volume allows.
- Signed cases: Leads that became paying clients. This requires integration with your intake process, which is why we connect directly to your CRM. We track the full journey from ChatGPT impression to signed retainer.
- Cost per signed case: The metric that actually matters for a PI firm. Total spend divided by signed cases. This is the number you compare against your Google Ads cost per signed case to evaluate channel performance.
- ROI vs. alternative channels: We pull the same firm's Google Ads CPL and cost per signed case for direct comparison. Same firm, same market, same case types, different channel. That is the only fair comparison.
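The arithmetic behind these metrics is simple division, but it helps to see it laid out. Here is a minimal sketch using entirely hypothetical figures (none of these numbers are real campaign data):

```python
# Hypothetical campaign figures for illustration only -- not real client data.
total_spend = 12_000    # ad spend over the reporting period, in dollars
leads = 60              # deduplicated form submissions plus phone calls
signed_cases = 6        # leads that became signed retainers

cpl = total_spend / leads                         # cost per lead
cost_per_signed_case = total_spend / signed_cases # the metric that matters most

# Compare against the same firm's Google Ads numbers (also hypothetical):
google_cpl = 350
google_cost_per_case = 4_500

print(f"CPL: ${cpl:,.0f} vs Google ${google_cpl:,}")
print(f"Cost per signed case: ${cost_per_signed_case:,.0f} "
      f"vs Google ${google_cost_per_case:,}")
```

The comparison only means something when both channels are measured the same way for the same firm, which is why the dashboard pulls both sets of numbers side by side.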
All of this data is available to you in real time through a shared dashboard. We do not send you a PDF once a month and hope you do not ask questions. You can log in and see exactly what your campaign is doing at any point. For details on what each campaign package includes, see our pricing page.
Campaign Framework
Each case study on this page follows the same structure so you can compare across campaigns and markets:
The Challenge
What the firm was spending on other channels before engaging Answer Ads. Their monthly Google Ads budget, average cost per lead, pain points with existing vendors, and what prompted them to try ChatGPT advertising. We include context on market competitiveness and case type focus.
The Strategy
How we structured the ChatGPT campaign for this firm. Which case types we targeted, how we built the landing page, what intake integrations we configured, and the specific conversational context categories we focused on. We include targeting rationale so other firms can understand whether the approach applies to their market.
The Results
Hard numbers. Impressions, leads, cost per lead, signed cases, cost per signed case, and the direct comparison to the same firm's Google Ads performance over the same period. We include the campaign duration and total spend so you can evaluate the results in proper context.
Key Takeaways
What worked, what we would do differently, and what the results suggest about ChatGPT advertising for that particular case type and market. We do not just report the numbers, we interpret them so the next firm considering this channel can make a more informed decision.
Case study data is being added as our pilot campaigns generate enough volume for statistically meaningful results. Running a campaign for two weeks and reporting on twelve leads would not tell you anything useful. We wait until we have enough data to draw real conclusions.
Pilot Program
Every firm we work with starts with a 90-day pilot. During the pilot period, our management fee is waived entirely. You pay only the ad spend that goes directly to OpenAI. This structure exists for one reason: it eliminates the risk of trying a new channel.
You are not signing a twelve-month contract with a firm that has never run a ChatGPT campaign. You are running a controlled test with transparent tracking, real data, and zero management fees for the first three months. If the numbers work, we continue and the management fee kicks in. If the numbers do not work for your market or case type, you walk away having spent only on the ads themselves.
We can do this because we are confident in the channel. The mechanics of how ChatGPT ads work are structurally favorable for PI firms. The intent is real. The competition is almost nonexistent. The unit economics should be favorable. But "should be" is not good enough when you are the one writing the check, which is why the pilot model exists.
Pilot slots are limited. We operate on a one-firm-per-market exclusivity model, which means once a firm in your metro claims the pilot slot, it is off the table for competing firms. We are not running twenty pilots in Dallas simultaneously. We run one, prove the channel, and build from there.
What Early Data Suggests
While individual case study results will be published as campaigns mature, here is what the early directional data is showing across our pilot campaigns:
- Intent quality is high. Users who engage with car accident and injury-related ads in ChatGPT are describing real situations. These are not tire-kickers or people researching for a school paper. They are people who had something happen to them and are actively processing what to do next.
- CPL is competitive with Google Ads. Early cost per lead numbers are tracking at or below what the same firms pay on Google for equivalent case types. This is expected given the lower competition on the platform but will need more volume to validate.
- Conversion from lead to consultation is promising. Because the user has already described their situation in detail within the ChatGPT conversation, intake teams report that the initial call is more substantive. The lead arrives with context, not just a name and phone number.
- No click fraud. Unlike Google Ads, where PI firms routinely deal with competitors clicking their ads and bot traffic, ChatGPT's ad model does not have the same vulnerability. The CPM model means you pay for impressions, not clicks, which eliminates the click fraud problem entirely.
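To make the CPM point concrete, here is a short sketch of impression-based billing. The rate is an assumption for illustration, not OpenAI's actual pricing:

```python
# Illustrative CPM math -- the rate is hypothetical, not a published price.
cpm_rate = 25.0       # assumed cost per 1,000 impressions, in dollars
impressions = 200_000

# Under CPM billing, spend depends only on impressions served.
spend = impressions / 1_000 * cpm_rate

# A competitor repeatedly clicking the ad changes nothing on the invoice,
# because clicks are not the billable event.
fraudulent_clicks = 5_000
spend_after_click_fraud = spend  # unchanged under CPM billing

print(f"Spend: ${spend:,.0f}")
```

This is the structural reason click fraud does not translate into wasted budget the way it can on a cost-per-click channel.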
These are directional signals, not definitive conclusions. We will update this page with specific campaign numbers as soon as we have enough data to report responsibly. If you want to see results specific to your case type and market, the fastest way is to run your own pilot.
Want to Be Our Next Case Study?
Pilot slots are limited. Get your firm's data on the board.
Book a Call → 15-minute intro · No commitment