One brand, three different SEO strategies — depending on which provider you're trying to win.
Empirical finding from running this across multiple sites: the three major AI surfaces ground their answers on three meaningfully different source types. What wins on one barely shows up on the others. If you only measure one, you're playing a third of the actual game.
OpenAI
Web-grounded responses via the Responses API + web_search_preview tool.
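Under the hood, the capture looks roughly like this. A minimal sketch using the OpenAI Python SDK; the model name and query are illustrative, and the citation-parsing loop assumes the documented url_citation annotation shape:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.responses.create(
    model="gpt-4o",                          # illustrative model choice
    tools=[{"type": "web_search_preview"}],  # enable web grounding
    input="best boutique hotels in Lisbon",  # one of your priority queries
)

print(response.output_text)  # the free-form answer text we scan for mentions

# Cited sources arrive as url_citation annotations on the message output
for item in response.output:
    if item.type == "message":
        for part in item.content:
            for annotation in getattr(part, "annotations", []) or []:
                if annotation.type == "url_citation":
                    print(annotation.url, annotation.title)
```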
Gemini
Google's Gemini models with Google Search grounding enabled.
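The equivalent Gemini capture, sketched with the google-genai SDK. The model name and query are illustrative, and grounding metadata is only present when the model actually searched:

```python
from google import genai
from google.genai import types

client = genai.Client()  # reads GEMINI_API_KEY from the environment

response = client.models.generate_content(
    model="gemini-2.0-flash",                   # illustrative model choice
    contents="best boutique hotels in Lisbon",  # one of your priority queries
    config=types.GenerateContentConfig(
        tools=[types.Tool(google_search=types.GoogleSearch())],  # Search grounding
    ),
)

print(response.text)  # the answer text we scan for mentions

# Grounding sources, when present
metadata = response.candidates[0].grounding_metadata
if metadata and metadata.grounding_chunks:
    for chunk in metadata.grounding_chunks:
        print(chunk.web.uri, chunk.web.title)
```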
Google AI Overview
The AI summary at the top of Google search results. Captured via SerpApi.
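And the AI Overview capture, sketched against SerpApi's Google Search API using the google-search-results Python package. Field names follow SerpApi's documented ai_overview payload; some queries only return the overview via a follow-up page_token request, which this sketch skips:

```python
from serpapi import GoogleSearch

results = GoogleSearch({
    "engine": "google",
    "q": "best boutique hotels in Lisbon",  # one of your priority queries
    "api_key": "YOUR_SERPAPI_KEY",
}).get_dict()

ai_overview = results.get("ai_overview")
if ai_overview:
    # The overview's answer text, block by block
    for block in ai_overview.get("text_blocks", []):
        print(block.get("snippet", ""))
    # The sources the overview cites
    for ref in ai_overview.get("references", []):
        print(ref.get("link"), ref.get("title"))
```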
Three position metrics, captured per provider, every scan.
Knowing whether you "appear" isn't enough. The mention has to be in the right shape — early enough in the answer to be read, recommended explicitly enough to convert, cited authoritatively enough to drive click-through. We measure all three.
Text position
Where your brand appears in the model's free-form answer text. The earlier the mention, the more likely it gets read; sometimes the only mention you get is the one in the first sentence.
Recommendation rank
If the model produces a recommendation list (top picks, finalists, suggestions), where do you sit? Position 1 is dramatically more valuable than position 5.
Citation rank
In the model's cited sources, where does your domain appear? First citation gets ~70% of the click-through. Citation #4 effectively gets none.
Each scan also captures the full response text and the complete list of cited sources, so you can see exactly what the model said about you and what it grounded on. No more guessing.
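From each captured response, the three metrics reduce to something like the helper below. Illustrative only: the function and its inputs are hypothetical names, and real brand matching and recommendation-list extraction are fuzzier than simple substring checks:

```python
def position_metrics(brand: str, domain: str, answer_text: str,
                     recommendations: list[str], citations: list[str]) -> dict:
    """Compute the three position metrics for one provider response."""
    # Text position: character offset of the first brand mention, or None if absent
    offset = answer_text.lower().find(brand.lower())
    text_position = offset if offset >= 0 else None

    # Recommendation rank: 1-based slot in the model's recommendation list
    recommendation_rank = next(
        (i + 1 for i, item in enumerate(recommendations) if brand.lower() in item.lower()),
        None,
    )

    # Citation rank: 1-based slot of your domain among the cited sources
    citation_rank = next(
        (i + 1 for i, url in enumerate(citations) if domain in url),
        None,
    )

    return {
        "text_position": text_position,
        "recommendation_rank": recommendation_rank,
        "citation_rank": citation_rank,
    }
```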
Define the queries. Schedule the scans. Get the reports.
1. Query design
We work with you to define a high-priority query set — the questions a real customer would ask an AI about your business. Typically 10–25 queries: identity-level ("who is X"), comparative ("X vs Y"), intent-driven ("best X in Z"), and edge cases that might attract fabrication.
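In practice the query set is just structured data. A hypothetical example (brand, competitor, and query wording are all placeholders) with the four categories described above:

```python
# Hypothetical query set for a placeholder brand; categories mirror the ones above.
QUERY_SET = {
    "identity":    ["who is Acme Studio", "is Acme Studio a legitimate company"],
    "comparative": ["Acme Studio vs BigAgency", "Acme Studio alternatives"],
    "intent":      ["best web design studio in Austin"],
    "edge_case":   ["does Acme Studio offer 24/7 phone support"],  # fabrication bait
}
```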
2. Scheduled scans
The detector runs weekly, by default Monday morning in your timezone. Each scan runs every query through all three providers, captures responses and cited sources, computes the three position metrics, and writes the deltas (new mentions, lost mentions, position changes) to a per-site database.
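The delta logic is deliberately simple. A sketch, assuming each scan stores the metric dictionary from the helper above plus a "mentioned" flag; the function name is illustrative:

```python
def scan_delta(previous: dict | None, current: dict) -> list[str]:
    """Summarise what changed for one query/provider pair between two scans."""
    changes = []
    was_mentioned = bool(previous and previous.get("mentioned"))
    if current.get("mentioned") and not was_mentioned:
        changes.append("new mention")
    if was_mentioned and not current.get("mentioned"):
        changes.append("lost mention")
    if previous:
        for metric in ("text_position", "recommendation_rank", "citation_rank"):
            if previous.get(metric) != current.get(metric):
                changes.append(f"{metric}: {previous.get(metric)} -> {current.get(metric)}")
    return changes
```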
3. Weekly delivered report
Friday morning you get a markdown report by email. Top of the report: anything that changed this week, with links to the actual provider responses. Below that: a summary of where you stand across all three providers on all your priority queries. Trend chart over the last 12 weeks. Recommended next interventions.
4. Recommended interventions
For each issue identified — fabrication risk, missed citation, dropped rank — the report recommends a specific content intervention. Most are short (an article, a page update, a Q&A). Some are structural (schema.org work, sameAs additions). All are scoped concretely so you can decide whether to do them yourself, hand them to us, or ignore.
Typical indexing latency for new long-form content is 48–72 hours. Publish the recommended fix on Monday and you can see the model citing it by Wednesday or Thursday, verified by the next scan, with the deltas visible in the following Friday's report.
Per site. Per month. No setup fee.
Cancel any time. Historical scan data stays available for 90 days post-cancellation in case you want to re-attach later. No setup fee on any tier.
Either you've got a site we built, or you don't. Both work.
The monitoring is a standalone service — it watches whatever site you point it at. You don't need to be a rebuild client to use it. Around half of monitoring clients are running on WordPress, Shopify, Squarespace, or hand-rolled stacks; the monitoring is platform-agnostic.
Where it gets more powerful is when we built the site too. Site we built + monitoring = closed loop: we see the issue surface, we make the content change, we watch the model pick it up within 48–72 hours, we report the lift. That's the loop the Miabella case study documents.