Glossary: AI visibility & GEO
The core terms around brand measurement in AI answers, defined briefly. Each entry links out to the full methodology or the matching help-center chapter.
AI Mention Rate
Share of AI answers in the last 30 days in which the brand was named, across all tracked queries, providers and daily runs.
AI Mention Rate is the raw hit rate: across all queries, all providers and all daily runs in the last 30 days, how often did the brand appear in the AI answer?
It is the more accessible sibling of AI Prominence — without weighting for position, provider breadth, sentiment or recommendation strength. A brand can have a high mention rate (cited often) but a moderate AI Prominence (always at the end of the answer, neutral tone, never the top pick). The two metrics complement each other.
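As arithmetic, the metric is a plain ratio. A minimal sketch in Python, assuming a hypothetical answer log (the real pipeline and field names are Achtung.app internal):

```python
# Hypothetical log: one record per (query, provider, daily run) in the last 30 days.
answers = [
    {"query": "best crm for startups", "provider": "chatgpt", "brand_mentioned": True},
    {"query": "best crm for startups", "provider": "gemini", "brand_mentioned": False},
    # ... every tracked query x provider x daily run
]

def mention_rate(answers: list[dict]) -> float:
    """Raw hit rate: share of all answer runs that named the brand."""
    if not answers:
        return 0.0
    return sum(1 for a in answers if a["brand_mentioned"]) / len(answers)
```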
More on this:
Help: your dashboard →
AI Overview (Google)
The AI-generated answer card above Google's classic search results. It often cites third-party sources directly and reduces clicks to the organic results.
Google's AI Overview (previously "Search Generative Experience", SGE) is the AI answer Google has been rolling out above the classic search results since 2024–2025. Unlike on ChatGPT or Perplexity, the user sees both: the AI answer and the search-results list. In practice the AI answer often resolves the question and the click into the result list never happens.
For brands this has two consequences. First, previously reliable click metrics from Google Search Console become unreliable (see "AI Overview Signal"). Second, the choice of cited third-party sources directly determines which brands appear in the AI Overview — often those are different sources from the ones that rank in the classic top-10.
More on this:
Help: managing brands →
AI Overview Signal
Queries with high impressions but unusually low clicks in Google Search Console. A hint that an AI-generated answer is capturing demand before it reaches the site.
When a query racks up impressions in GSC but barely any clicks, that is a textbook symptom of Google's AI Overview: the answer sits above the organic results, the user gets it without clicking through.
Achtung.app scans GSC data for this pattern and surfaces the affected queries as AI Overview Signals. They are the high-priority targets for GEO work: places where organic visibility looks strong on paper but the demand in practice drains into an AI answer card.
The signal is only visible when Google Search Console is connected to the brand.
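The detection itself is a simple filter over GSC rows. A sketch of the pattern, with illustrative thresholds (Achtung.app's actual cutoffs are not public):

```python
def ai_overview_signals(gsc_rows, min_impressions=500, max_ctr=0.01):
    """Yield queries with many impressions but almost no clicks.

    gsc_rows: iterable of {"query": str, "impressions": int, "clicks": int}.
    The thresholds here are assumptions for illustration only.
    """
    for row in gsc_rows:
        ctr = row["clicks"] / row["impressions"] if row["impressions"] else 0.0
        if row["impressions"] >= min_impressions and ctr <= max_ctr:
            yield row["query"]
```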
More on this:
Help: managing brands →
AI Prominence
The single 0–100 score that summarises a brand's AI visibility. It blends six weighted signals from the tracked AI providers' answers and is recomputed daily.
AI Prominence is Achtung.app's headline metric. Six components feed in: Citation Frequency (20%), Citation Breadth across providers (20%), Recommendation Strength (25%), Position Score (15%) reflecting how early the brand appears in the answer, Sentiment of mentions (10%), and Trend Momentum over recent weeks (10%).
A score of 60 or above means Top-Promi (top-tier): the brand is cited reliably, on multiple AI providers, prominently and positively. Low scores are not a negative judgement by the AI — they simply mean the brand is not actively recommended out of the model, which is exactly the gap GEO is meant to close.
AI Prominence is derived from brand-free queries (see "Organic AI Visibility"), not from explicit brand mentions.
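The published weights make the blend easy to state. A sketch of the weighted sum, assuming each component is itself normalised to 0–100 (how the individual components are computed is not public):

```python
# Published component weights; they sum to 1.0.
WEIGHTS = {
    "citation_frequency": 0.20,
    "citation_breadth": 0.20,
    "recommendation_strength": 0.25,
    "position_score": 0.15,
    "sentiment": 0.10,
    "trend_momentum": 0.10,
}

def ai_prominence(components: dict[str, float]) -> float:
    """Blend six 0-100 component scores into the single 0-100 score."""
    return sum(WEIGHTS[name] * components[name] for name in WEIGHTS)
```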
More on this:
Methodology: scoring →
Citation Source
A third-party site an AI model cites or links as evidence when answering a tracked query — industry publications, review platforms, directories, media outlets.
In search-grounded answers the cited sources are a central lever. Whoever shows up there shows up in the AI answer, often as a linked citation. Achtung.app collects every third-party site AI models pull in for a brand's queries and ranks them by frequency, provider breadth and opportunity.
The opportunity badge (High / Medium / Low) flags the sources that promise the biggest leverage: cited often, not yet occupied by the brand. Press outreach, guest posts, or directory listings on high-opportunity sources are among the most direct ways to lift AI visibility.
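The badge logic can be pictured as a two-factor rule: how often a source is cited, and whether the brand already appears there. An illustrative sketch (the real thresholds are Achtung.app internal):

```python
def opportunity_badge(source: dict) -> str:
    """Often cited + brand absent = the biggest leverage.

    source: {"citations": int, "providers": set[str], "brand_present": bool}.
    Thresholds are assumptions for illustration.
    """
    if source["brand_present"]:
        return "Low"
    if source["citations"] >= 10 and len(source["providers"]) >= 2:
        return "High"
    return "Medium"
```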
More on this:
Help: citation sources →
Consistency Score
How reliably a brand is cited across multiple daily runs on a single AI provider (0% to 100%).
Each tracked keyword is queried multiple times per AI provider per day. AI models are stochastic — the same question does not always produce the same answer. The Consistency Score measures how often the brand turns up across those repetitions: 100% means cited every run, 50% means cited half the time.
High consistency means the brand is stably anchored in the model. Low consistency means the model knows the brand but does not reliably surface it — the typical state of brands on the edge of the recommendation corridor. The score is exposed per provider because consistency between ChatGPT, Gemini and Perplexity often diverges sharply.
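As a formula the score is simply hits over repetitions, computed per provider. A minimal sketch:

```python
def consistency_score(runs: list[bool]) -> float:
    """Share of one provider's daily runs that cited the brand (0.0-1.0)."""
    return sum(runs) / len(runs) if runs else 0.0

# Example: cited in 3 of 4 runs on one provider -> 0.75, shown as 75%.
consistency_score([True, False, True, True])
```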
More on this:
Methodology: repeated runs →
GEO (Generative Engine Optimisation)
The practice of improving a brand's presence in AI-generated answers. Analogous to SEO, but for answer engines instead of search-result lists.
GEO stands for Generative Engine Optimisation. Where classic SEO optimises for ranking high in a list of links, GEO optimises for being named, cited or recommended inside the answer itself.
That shifts the levers from keyword rankings towards three things: being reachable for AI live-retrieval crawlers, being recognisable in the models' training data, and being structurally answerable for the questions real users ask. Classic SEO signals (content, authority, technical hygiene) still matter, but their weight and exact mechanics differ from pure SERP optimisation.
Achtung.app measures GEO without training it: the tool tells brands where they stand in AI answers and which third-party sources the AI currently prefers. What brands do with that — content, PR, technical fixes — stays their decision.
More on this:
What is GEO? →
GEO Gap
A tracked query where the AI answered without naming the brand. Achtung.app attaches an AI-generated explanation of *why*.
A GEO gap is the most concrete call-to-action in the system. Instead of just saying "your score is 42", it says: "On this query the AI named competitors A, B and C, but not you, because…".
Gap reasons are standalone LLM analyses, not simple pattern matching. Typical reasons: no study or documentation on the topic discoverable on the site; a niche not unambiguously attributable to the brand; a local focus that doesn't surface in the cited sources; competitors publishing concrete numbers or comparison tables that do appear in AI answers.
GEO gaps feed the weekly content suggestions: each suggestion names the cited third-party sources, the gap reason, and the structural element to publish (FAQ schema, comparison table, dated benchmark).
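Put together, a gap is a small structured record. A hypothetical shape (field names are assumptions for illustration, not Achtung.app's actual schema):

```python
from dataclasses import dataclass

@dataclass
class GeoGap:
    """Illustrative shape of one GEO gap record."""
    query: str                    # the brand-free query that was asked
    competitors_named: list[str]  # brands the AI did name
    reason: str                   # standalone LLM analysis of the omission
    cited_sources: list[str]      # third-party sources the answer leaned on
    suggested_element: str        # e.g. "FAQ schema" or "comparison table"
```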
More on this:
Help: GEO gaps →
Joint Prominence Delta
Difference between a brand's web-search prominence and its AI prominence. Positive = stronger in classic web search; negative = stronger in AI answers.
The Joint Prominence Delta answers the question "does this brand belong more to classic web search or to AI answers?" on a normalised scale. Both prominence values are scaled to 0–1, then the formula subtracts the AI value from the web-search value.
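A sketch of that formula, assuming both metrics live on the same 0–100 scale before normalisation (the exact scaling is not public):

```python
def joint_prominence_delta(web_prominence: float, ai_prominence: float,
                           scale_max: float = 100.0) -> float:
    """Positive: stronger in classic web search. Negative: stronger in AI answers."""
    return web_prominence / scale_max - ai_prominence / scale_max
```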
A strongly negative delta means: the AI models recommend the brand disproportionately, even though it barely shows up in Google's result list (typical for rising SaaS players with good PR motion). A strongly positive delta means: the brand dominates classic web search but barely appears in AI answers (typical for entrenched incumbents with thin GEO work).
In industry reports the delta appears as the "Δ Joint Prom." column and as the sort key in the same-brand-asymmetry chart.
More on this:
Industry reports →
Live-Retrieval Crawler
Crawlers like OAI-SearchBot, PerplexityBot, ChatGPT-User or Claude-User that fetch pages on demand to generate live answers in ChatGPT, Perplexity or Claude.
The AI world has two crawler families with very different functions. Live-retrieval crawlers grab pages now, because a user has just asked a question whose answer needs current information. Blocking them (in robots.txt or via an edge rule at the firewall) directly removes the brand from ChatGPT, Perplexity and Claude answers.
Current live-retrieval crawlers include OAI-SearchBot, ChatGPT-User, PerplexityBot, Perplexity-User, Claude-User and Claude-SearchBot. Achtung.app's weekly website-hygiene audit checks that all of them are allowed and that no silent edge block intervenes.
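The robots.txt half of that check can be reproduced with the Python standard library. A sketch (note that it cannot see silent edge blocks, which the audit also looks for):

```python
from urllib.robotparser import RobotFileParser

LIVE_RETRIEVAL_BOTS = [
    "OAI-SearchBot", "ChatGPT-User", "PerplexityBot",
    "Perplexity-User", "Claude-User", "Claude-SearchBot",
]

def blocked_live_bots(site_url: str) -> list[str]:
    """Return the live-retrieval crawlers that robots.txt keeps out."""
    rp = RobotFileParser()
    rp.set_url(site_url.rstrip("/") + "/robots.txt")
    rp.read()  # fetches and parses the live robots.txt
    return [bot for bot in LIVE_RETRIEVAL_BOTS if not rp.can_fetch(bot, site_url)]
```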
Training crawlers (see "Training-Corpus Crawler" below) are something else, and the system deliberately does not grade them.
More on this:
Help: website hygiene →
llms.txt
A text file at a site's root (`/llms.txt`) that gives LLMs a machine-readable summary of the site's contents and key facts. Analogous to robots.txt, but for AI models.
llms.txt is a recent, not-yet-standardised proposal: a compact Markdown file at the host root that tells an AI model in one place what the site is about, what topics it covers, and which sub-paths hold the canonical sources.
Adoption by major AI providers is thin as of 2026, and the demonstrable effect is unclear. Still, the file costs nothing and signals a deliberate AI strategy. Achtung.app's website-hygiene audit checks its presence as an "important" check (not critical), and Achtung.app itself ships a substantial /llms.txt.
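For orientation, a minimal, hypothetical example of what such a file can look like (the draft convention is an H1, a one-line summary, then link lists; example.com stands in for a real host):

```markdown
# Example GmbH

> Example GmbH builds project-management software for small agencies.

## Docs

- [Product overview](https://example.com/product): what the tool does
- [Pricing](https://example.com/pricing): current plans and limits

## Company

- [About](https://example.com/about): team, history, contact
```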
More on this:
Help: website hygiene →
Organic AI Visibility
Whether an AI model names a brand without being asked about that brand directly — i.e. on its own initiative.
Classic brand analysis tends to ask AI models directly: "What do you know about Brand X?" That method measures nothing useful, because the AI will talk about any brand you name regardless of how visible the brand actually is.
Achtung.app inverts the question. The system runs daily brand-free queries from the brand's market ("What are the best karate schools in Berlin?", "Which tools help with project management?"). The query never contains the brand name or the domain. The AI either names the brand on its own — or it doesn't.
The result is a harder but more honest metric: organic AI visibility mirrors what a real customer experiences when asking the AI for advice.
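The check at the end of each blind query is deliberately simple: did the answer text surface the brand or its domain at all? A naive sketch (real matching also has to handle spelling variants):

```python
import re

def names_brand(answer_text: str, brand: str, domain: str) -> bool:
    """True if a brand-free query's answer mentions the brand on its own."""
    pattern = rf"\b{re.escape(brand)}\b|{re.escape(domain)}"
    return re.search(pattern, answer_text, re.IGNORECASE) is not None
```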
More on this:
Methodology: blind queries →
Prominence Tier
Four-tier label of the AI Prominence: Randfigur (0–19), Namhaft (20–39), Promi (40–59), Top-Promi (60–100). Cutoffs at 20, 40 and 60 points.
The bare 0-to-100 number needs an anchor. The Prominence Tier provides it: four tiers borrowed from German celebrity language ("Promi" = celebrity) that hold on to the idea that AI visibility is about recognition inside a market.
| Tier | Range | Meaning |
| --- | --- | --- |
| Randfigur | 0–19 | Practically absent from AI answers |
| Namhaft | 20–39 | Named occasionally, usually late in the answer |
| Promi | 40–59 | Cited reliably, often recommended |
| Top-Promi | 60–100 | Among the first brands the model reaches for |
The tiers are a UI aid; the actual value remains the 0–100 number.
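The mapping from score to tier is a plain threshold lookup, straight from the table above:

```python
def prominence_tier(score: float) -> str:
    """Map a 0-100 AI Prominence score onto its display tier."""
    if score >= 60:
        return "Top-Promi"
    if score >= 40:
        return "Promi"
    if score >= 20:
        return "Namhaft"
    return "Randfigur"
```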
More on this:
Methodology: scoring →
Search-grounded (live web search per query)
An answer mode in which the AI model runs a live web search before answering and returns the cited sources transparently. The prerequisite for reproducible AI-visibility measurement.
Search-grounded means: the AI model does not answer from its training snapshot alone. It pulls live web content and cites it auditably in the answer. ChatGPT (with web browsing enabled), Gemini, Perplexity and Claude all offer this mode; Achtung.app currently tracks three of them: ChatGPT, Gemini and Perplexity.
The mode is the foundation of serious AI-visibility measurement for three reasons: first, it mirrors what real users see; second, the cited sources are auditable; third, pure training-data answers systematically favour older brands that appeared in more crawls (closed-book bias).
In English product copy the term usually appears with the inline gloss "live web search per query".
More on this:
Methodology: search-grounded →
Training-Corpus Crawler
Crawlers like GPTBot, ClaudeBot, Google-Extended or CCBot that collect content for pretraining future model versions — not for live answers.
Training crawlers feed the pretraining of future model generations. Allowing them gives your content a chance to land as general knowledge in the next version of Anthropic's, OpenAI's or Google's models. Blocking them forgoes that lane but does not affect current visibility in live answers.
Allowing or blocking is therefore an editorial decision, not a hygiene flaw. Achtung.app reports the verdicts for GPTBot, Google-Extended, ClaudeBot, CCBot and Applebot-Extended in a dedicated row group inside the hygiene matrix, but it does not grade them and they never move the "AI-ready" badge.
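One way the editorial decision can look in robots.txt, keeping the two crawler families apart (illustrative, not a recommendation):

```text
# Live-retrieval crawlers stay open: blocking these removes the brand
# from current ChatGPT, Perplexity and Claude answers.
User-agent: OAI-SearchBot
Allow: /

User-agent: PerplexityBot
Allow: /

# Training-corpus crawlers: an editorial choice, not a hygiene flaw.
# This example opts out of pretraining collection.
User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /
```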
More on this:
Help: website hygiene →