Service · Door 02 · The moat

Continuous AI visibility. Three providers. Weekly reports.

Most agencies still sell SEO as the same thing it was in 2018 — links, keywords, blue-link rankings. The actual game has moved. AI summaries above the blue links are now where customer education happens, and the rules are different on each provider. We measure all three. Continuously.

Cadence: Weekly scans + reports
Providers: OpenAI · Gemini · AI Overview
Starting: £149 / month
Standalone? Yes, or attached to any site

01 · The three games

One brand, three different SEO strategies — depending on which provider you're trying to win.

Empirical finding from running this across multiple sites: the three major AI surfaces ground their answers on three meaningfully different source types. What wins on one barely shows up on the others. If you only measure one, you're playing a third of the actual game.

Provider 01 · OpenAI

Web-grounded responses via the Responses API + web_search_preview tool.

Grounds heavily on: Google Maps Place URLs, vendor directories, complete business profiles. Looks like vendor-style optimisation.

Provider 02 · Gemini

Google's Gemini models with Google Search grounding enabled.

Grounds heavily on: direct site URLs, long-form content, community sites (Reddit, Quora), industry directories. Rewards content depth.

Provider 03 · Google AI Overview

The AI summary at the top of Google search results. Captured via SerpApi.

Grounds heavily on: adjacent-intent surfaces — review pages, planning blogs, partner-network mentions. Cross-vendor referral work matters here.
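
For the technically curious, here is roughly what one scan query looks like against each surface. A minimal sketch using the OpenAI and Google Gen AI Python SDKs and SerpApi; the model names and response parsing are illustrative assumptions, not a spec of our pipeline.

```python
# Minimal sketch: one query against each of the three AI surfaces.
# Model names and parsing details are illustrative, not our production code.
import os
from openai import OpenAI
from google import genai
from google.genai import types
from serpapi import GoogleSearch

QUERY = "best wedding venue in Bath"

# 1. OpenAI: Responses API with the web_search_preview tool.
openai_client = OpenAI()  # reads OPENAI_API_KEY
oa = openai_client.responses.create(
    model="gpt-4o",
    tools=[{"type": "web_search_preview"}],
    input=QUERY,
)
oa_text = oa.output_text  # the free-form answer
# URL citations arrive as annotations on the message output items.
oa_citations = [
    ann.url
    for item in oa.output if item.type == "message"
    for part in item.content
    for ann in getattr(part, "annotations", [])
    if ann.type == "url_citation"
]

# 2. Gemini with Google Search grounding enabled.
gem_client = genai.Client()  # reads GEMINI_API_KEY
gm = gem_client.models.generate_content(
    model="gemini-2.0-flash",
    contents=QUERY,
    config=types.GenerateContentConfig(
        tools=[types.Tool(google_search=types.GoogleSearch())]
    ),
)
gm_text = gm.text
meta = gm.candidates[0].grounding_metadata
gm_citations = [chunk.web.uri for chunk in (meta.grounding_chunks or [])]

# 3. Google AI Overview, captured from a live SERP via SerpApi.
serp = GoogleSearch({"q": QUERY, "engine": "google",
                     "api_key": os.environ["SERPAPI_KEY"]})
results = serp.get_dict()
# May be absent (no Overview shown) or need a follow-up request on some SERPs.
ai_overview = results.get("ai_overview", {})
```
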
02 · What gets measured

Three position metrics, captured per provider, every scan.

Knowing whether you "appear" isn't enough. The mention has to be in the right shape — early enough in the answer to be read, recommended explicitly enough to convert, cited authoritatively enough to drive click-through. We measure all three.

M1 · Text position

Where your brand appears in the model's free-form answer text. Earlier means more likely to be read; sometimes the first sentence carries the only mention you get.

M2 · Recommendation rank

If the model produces a recommendation list (top picks, finalists, suggestions), where do you sit? Position 1 is dramatically more valuable than position 5.

M3 · Citation rank

In the model's cited sources, where does your domain appear? The first citation gets ~70% of the click-through; citation #4 effectively gets none.

Each scan also captures the full response text and the complete list of cited sources, so you can see exactly what the model said about you and what it grounded on. No more guessing.
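
Under the hood, the three metrics reduce to positional lookups over the captured answer text, recommendation list, and citation list. A minimal sketch; the helper names are hypothetical:

```python
# Sketch of the three position metrics over one captured response.
# Function names are hypothetical; the logic is the point.
from urllib.parse import urlparse

def text_position(answer: str, brand: str) -> int | None:
    """Character offset of the first brand mention in the answer (None = absent)."""
    idx = answer.lower().find(brand.lower())
    return idx if idx >= 0 else None

def recommendation_rank(picks: list[str], brand: str) -> int | None:
    """1-based rank of the brand in an extracted recommendation list."""
    for rank, pick in enumerate(picks, start=1):
        if brand.lower() in pick.lower():
            return rank
    return None

def citation_rank(cited_urls: list[str], domain: str) -> int | None:
    """1-based position of the brand's domain among the cited sources."""
    for rank, url in enumerate(cited_urls, start=1):
        if urlparse(url).netloc.endswith(domain):
            return rank
    return None

# Example: a response that mentions the brand late and cites it second.
answer = "For venues near Bath, many couples shortlist Miabella for its gardens."
print(text_position(answer, "Miabella"))                         # 45
print(recommendation_rank(["Venue A", "Miabella"], "Miabella"))  # 2
print(citation_rank(["https://venuea.co.uk/",
                     "https://miabella.co.uk/about"], "miabella.co.uk"))  # 2
```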

03 · How it works

Define the queries. Schedule the scans. Get the reports.

1. Query design

We work with you to define a high-priority query set — the questions a real customer would ask an AI about your business. Typically 10–25 queries: identity-level ("who is X"), comparative ("X vs Y"), intent-driven ("best X in Z"), and edge cases that might attract fabrication.
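
For illustration, a query set for a venue business might look like this (the queries and rival name are hypothetical):

```python
# Hypothetical query set; the categories mirror the ones described above.
QUERIES = [
    {"q": "who is Miabella",                     "kind": "identity"},
    {"q": "Miabella vs Venue B for weddings",    "kind": "comparative"},
    {"q": "best wedding venue in Bath",          "kind": "intent"},
    {"q": "does Miabella allow outside caterers", "kind": "edge-case"},
]
```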

2. Scheduled scans

The detector runs weekly — by default Monday morning, your timezone. Each scan runs every query through all three providers, captures responses and cited sources, computes the three position metrics, and writes the deltas (new mentions, lost mentions, position changes) to a per-site database.
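
Mechanically, a scan is a nested loop with a diff against the previous scan. A simplified sketch, assuming the per-provider capture shown earlier and a per-site SQLite database; the schema and names are stand-ins:

```python
# Simplified scan loop: every query x every provider, metrics + deltas to SQLite.
# run_provider() stands in for the per-provider capture shown earlier.
import sqlite3, datetime

def scan(site_db: str, queries: list[str], providers: dict) -> None:
    db = sqlite3.connect(site_db)
    db.execute("""CREATE TABLE IF NOT EXISTS scans (
        ts TEXT, query TEXT, provider TEXT,
        text_pos INTEGER, rec_rank INTEGER, cite_rank INTEGER,
        response TEXT, citations TEXT)""")
    ts = datetime.datetime.now(datetime.timezone.utc).isoformat()
    for query in queries:
        for name, run_provider in providers.items():
            result = run_provider(query)  # captures text + cited sources
            prev = db.execute(
                "SELECT cite_rank FROM scans WHERE query=? AND provider=? "
                "ORDER BY ts DESC LIMIT 1", (query, name)).fetchone()
            db.execute(
                "INSERT INTO scans VALUES (?,?,?,?,?,?,?,?)",
                (ts, query, name, result["text_pos"], result["rec_rank"],
                 result["cite_rank"], result["text"], ",".join(result["citations"])))
            # Delta logic (new mention / lost mention / rank change) keys off prev.
            if prev and prev[0] != result["cite_rank"]:
                print(f"{name} · {query!r}: citation rank {prev[0]} -> {result['cite_rank']}")
    db.commit()
```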

3. Weekly delivered report

Friday morning you get a markdown report by email. Top of the report: anything that changed this week, with links to the actual provider responses. Below that: a summary of where you stand across all three providers on all your priority queries. Trend chart over the last 12 weeks. Recommended next interventions.
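
The report body itself is plain markdown; a skeleton of how its sections might be assembled (function and field names are hypothetical):

```python
def render_report(week: str, deltas: list[str], standings: str,
                  trend: str, actions: list[str]) -> str:
    """Assemble the weekly markdown report; sections mirror the description above."""
    lines = [f"# AI visibility report · week of {week}", "", "## Changed this week"]
    lines += [f"- {d}" for d in deltas] or ["- No changes this week"]
    lines += ["", "## Where you stand", standings]
    lines += ["", "## 12-week trend", trend]
    lines += ["", "## Recommended interventions"]
    lines += [f"- {a}" for a in actions]
    return "\n".join(lines)
```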

4. Recommended interventions

For each issue identified — fabrication risk, missed citation, dropped rank — the report recommends a specific content intervention. Most are short (an article, a page update, a Q&A). Some are structural (schema.org work, sameAs additions). All are scoped concretely so you can decide whether to do them yourself, hand them to us, or ignore.
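
For a sense of what the structural work looks like: a sameAs addition is a small JSON-LD block tying your entity to the profiles the models already ground on. A generic example with placeholder URLs:

```python
# Generic schema.org JSON-LD with sameAs links; all URLs are placeholders.
import json

schema = {
    "@context": "https://schema.org",
    "@type": "LocalBusiness",
    "name": "Your Business",
    "url": "https://example.com",
    "sameAs": [
        "https://www.google.com/maps/place/...",  # the Maps Place URL OpenAI grounds on
        "https://www.example-directory.com/your-business",
    ],
}
print(f'<script type="application/ld+json">{json.dumps(schema, indent=2)}</script>')
```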

Indexing latency for new long-form content is 48–72 hours. Publish the recommended fix on Monday, see the model citing it on Wednesday or Thursday — verified by the next scan, with deltas visible in the following Friday's report.

04 · Pricing

Per site. Per month. No setup fee.

Single site · up to 25 priority queries
£149 / mo
One brand or one location. Most clients start here.

Three sites · multi-brand or multi-location
£349 / mo
Group of brands, franchise/portfolio, or one client with multiple regional sites.

Custom · 5+ sites or bespoke surfaces
From £750 / mo
Agencies white-labelling, multi-language, or rare AI surfaces (Perplexity, You.com, Bing Copilot).

Cancel any time. Historical scan data stays available for 90 days post-cancellation in case you want to re-attach later. No setup fee on any tier.

05 · Standalone or attached

Either you've got a site we built, or you don't. Both work.

The monitoring is a standalone service — it watches whatever site you point it at. You don't need to be a rebuild client to use it. Around half of monitoring clients are running on WordPress, Shopify, Squarespace, or hand-rolled stacks; the monitoring is platform-agnostic.

Where it gets more powerful is when we built the site too. Site we built + monitoring = closed loop: we see the issue surface, we make the content change, we watch the model pick it up within 48–72 hours, we report the lift. That's the loop the Miabella case study documents.

Start watching what AI says about your business.

Most monitoring engagements start with a one-off Quickstart (£950, two weeks) to set the baseline, then roll into the £149/mo monitoring from week 3. If you'd rather skip the audit and just start scanning, we can do that too — get in touch.