
How to Track Your Restaurant's AI Search Visibility (2026): Tools, Manual Methods, and What to Measure

83% of restaurants are invisible in ChatGPT (Local Falcon, 2026). Here is the operator-grade playbook for measuring whether ChatGPT, Perplexity, AI Overviews, Claude, and Copilot actually cite your restaurant: 7 metrics, 13 query buckets, 12 tools, and a free GA4 setup that catches the 70% of AI traffic that hides as Direct.

PA
Pankaj Avhad
May 6, 2026 · 15 min read

TLDR

Per Local Falcon's February 2026 study (189,905 ChatGPT results), only 17% of restaurants ever appear in ChatGPT recommendations. The other 83% are completely invisible. Most operators have no way of knowing which side they sit on. This post is a measurement playbook: the 7 metrics that matter (AI Overview presence, citation rate, mention rate, share of voice, sentiment, source URL, AI referral traffic), 13 restaurant-specific query buckets, 12 tools at every price point (HubSpot AI Search Grader and Bing Webmaster Tools AI Performance are free), the manual ChatGPT, Perplexity, AI Overviews, Copilot, Claude, and Gemini check (works without a single subscription), and the GA4 channel-grouping fix that catches the 70.6% of AI traffic currently misclassified as Direct (SearchSignal, 446,405-visit dataset). Cross-links to the optimization playbook, not a duplicate of it.

The 83% problem

In February 2026, Local Falcon analyzed 189,905 ChatGPT recommendation results across US restaurants. The headline number: only 17% of restaurants ever appear in ChatGPT recommendations at all. The other 83% are completely invisible.

Most operators do not know which side they sit on. The diner who asks ChatGPT "best Italian in Austin tonight" gets back a list of 4 to 7 restaurants. If your name is not on that list, you do not get the table. There is no second page in AI search: there is the answer the AI gives, and then the diner moves on.

Optimization is one half of the problem. The other half is measurement. You cannot improve a number you are not tracking, and AI search is the channel where most restaurants do not even know what their number is. We already published the optimization playbook for ChatGPT, Perplexity, Claude, Google AI Overviews, and Bing Copilot. This post is the measurement companion. It walks through the 7 metrics that matter, the 13 restaurant-specific query buckets to track, the 12 tools at every price point, the manual no-tool method that works on every major AI engine, and the GA4 channel-grouping fix that catches the 70% of AI traffic currently hiding as Direct.

Operator running a manual AI-visibility check on a laptop, the no-tool baseline most restaurants skip

The 7 metrics that actually matter

Most "AI visibility" articles list every metric a tool can produce. That is not useful for an operator with 14 things on their plate. Here are the 7 that map to actual decisions:

| Metric | What it measures | How to compute |
| --- | --- | --- |
| **AI Overview presence rate** | Percentage of your tracked queries that trigger an AI Overview at all | Triggering queries divided by total queries tested |
| **Citation rate** | Percentage of queries where the engine cites your domain as a source link | Cited queries divided by total queries |
| **Mention rate** | Percentage of queries where your brand is named in the answer (with or without a source link) | Mentions divided by total queries |
| **Share of voice** | Your brand mentions divided by all brand mentions across the same prompt set, times 100 | Computed across a competitor set you define |
| **Citation sentiment** | Tone of the language the AI uses to describe you (positive, neutral, negative) | Manual scoring or NLP; returned automatically by HubSpot AEO Grader |
| **Source attribution** | Which of your URLs gets cited (homepage, menu, /about, blog post) | Track URL per citation; available natively in Bing Webmaster Tools AI Performance |
| **AI referral traffic** | Sessions arriving at your site from AI hostnames | GA4 custom channel group (see section 5) |

Two metrics matter more than the others when you are starting out: citation rate and AI referral traffic. Citation rate tells you whether AI engines see you as a credible source. AI referral traffic tells you whether that visibility actually puts a customer at your door. Mention rate is a softer signal; some answers list 6 names without linking to any of them, and a name in a list without a source link converts at a small fraction of a name with a source link.
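As a concrete illustration, the first four metrics reduce to simple ratios over your tracked query results. Here is a minimal Python sketch; the field names (`brand_mentioned`, `cited_as_source`, `competitor_mentions`) and the sample queries are illustrative, not taken from any tool's API.

```python
from dataclasses import dataclass

@dataclass
class QueryResult:
    """One tracked query on one platform (field names are illustrative)."""
    query: str
    brand_mentioned: bool     # your restaurant is named anywhere in the answer
    cited_as_source: bool     # the engine links to your domain as a source
    competitor_mentions: int  # other restaurants named in the same answer

def visibility_metrics(results: list[QueryResult]) -> dict[str, float]:
    n = len(results)
    our_mentions = sum(r.brand_mentioned for r in results)
    all_mentions = our_mentions + sum(r.competitor_mentions for r in results)
    return {
        "citation_rate": sum(r.cited_as_source for r in results) / n,
        "mention_rate": our_mentions / n,
        # share of voice: our mentions over all brand mentions in the prompt set
        "share_of_voice": our_mentions / all_mentions if all_mentions else 0.0,
    }

week = [
    QueryResult("best italian restaurants in austin", True, True, 4),
    QueryResult("date night restaurant austin", True, False, 5),
    QueryResult("gluten-free pizza austin", False, False, 6),
    QueryResult("is [your restaurant] good", True, True, 0),
]
print(visibility_metrics(week))  # citation_rate 0.5, mention_rate 0.75
```

Run against a week of spreadsheet rows, this gives you the same top-line numbers a paid dashboard would, just without the automation.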

The 13 restaurant query buckets to track

Operators sometimes try to track every keyword they care about and end up with a 200-row tracker that nobody updates. The realistic shape is 8 to 10 queries pulled from these 13 buckets, synthesized from AthenaHQ's restaurant playbook, Birdeye, and Reelo:

1. Brand-name queries. "Is [Your Restaurant] good?" and "[Your Restaurant] menu prices."

2. Cuisine plus city. "Best Italian restaurants in Austin," "top sushi in West Village."

3. Occasion queries. "Date night restaurant in [city]," "anniversary dinner [city]," "business lunch spots [city]."

4. Dietary queries. "Best vegan restaurant near me," "gluten-free pizza in [city]," "halal restaurants in [city]."

5. Comparison queries. "[Your restaurant] vs [Competitor]," "Joe's Pizza vs Joe's Pizzeria."

6. Meal-occasion queries. "Best brunch in [city]," "late-night food [neighborhood]," "weekend breakfast [city]."

7. Neighborhood queries. "Where to eat in [specific neighborhood]," "best restaurants near [landmark]."

8. Group / family queries. "Family-friendly restaurants [city]," "good for groups of 8 in [city]," "kid-friendly [cuisine]."

9. Ambiance queries. "Romantic restaurant [city]," "outdoor dining [city]," "quiet restaurant for a meeting [city]."

10. Price-tier queries. "Cheap eats in [city]," "fine dining [city]," "best $20 lunch [city]."

11. Delivery queries. "Best [cuisine] delivery [city]," "restaurants that deliver to [neighborhood]."

12. Specific dish queries. "Best margherita pizza in [city]," "best ramen [city]," "best burger [neighborhood]."

13. Trip / visitor queries. "Must-eat restaurants for a weekend in [city]," "where locals eat in [city]."

For a single-location operator, pick 1 query from each of buckets 1, 2, 3, 4, 6, 7, 9, and 12. That is 8 queries that span brand recognition, intent, occasion, dietary niche, neighborhood, ambience, and dish-level signals. If you have a clear competitor, add 1 from bucket 5. If you are in a tourist market, add 1 from bucket 13.

Stack of cited sources, the visual analog of the URLs an AI engine attributes back to your domain

The manual measurement method (works on a $0 budget)

Before paying for any tool, run a manual baseline. The work below covers ChatGPT, Perplexity, Google AI Overviews, Bing Copilot, Claude, and Google Gemini. It takes about 90 minutes the first time and about 30 minutes once it is routine.

Universal preconditions

  • Use a clean browser profile (incognito or private window) per platform to avoid personalization bleed from your own browsing history.
  • Set device location explicitly. Most AI assistants now infer location from IP. If you want to test markets you do not physically sit in, pair with a VPN.
  • Run the same query 3 times per platform. AI answers are non-deterministic. Take the modal result, not a single observation.
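The "modal result" step is just a majority vote across repeated runs of the same prompt. A minimal sketch:

```python
from collections import Counter

def modal_outcome(runs: list[bool]) -> bool:
    """Majority vote across repeated runs of the same prompt.

    AI answers are non-deterministic, so a single run is noise;
    the modal (most common) outcome is the value you record.
    """
    return Counter(runs).most_common(1)[0][0]

# Three runs of "best sushi in [city]" on one platform:
# mentioned, not mentioned, mentioned -> record a mention
print(modal_outcome([True, False, True]))  # True
```

The same vote applies to any recorded field: was the brand mentioned, was it cited, was it first in the list.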

ChatGPT (chatgpt.com)

1. Sign out, or click the speech-bubble icon top-right to start a Temporary Chat. OpenAI confirms that Temporary Chats are not saved and not used for training, which gives you a clean slate.

2. Toggle the Search tool ON in the composer. This forces live web retrieval. Without it, you may get cached training-data answers that are months out of date.

3. ChatGPT now uses your IP for "near me" queries. Use a VPN if you want to simulate a different city.

4. Anonymous testing is valid because ChatGPT search is available to free users while logged out. Logged-in Plus users see slightly different sources.

Perplexity (perplexity.ai)

1. Test with Pro Search OFF first to capture the free baseline (which is what most consumers see), then with Pro Search ON for a more aggressive comparison.

2. Perplexity has an explicit Location selector in profile settings. Set it to the city you want to simulate; you do not need a VPN.

3. Capture the source list shown beneath the answer. Perplexity links sources in 77% or more of responses, the highest cite rate of any AI engine. If Perplexity does not link to you, you have a structured-data or authority problem, not a relevance problem.

Google AI Overviews

1. Use valentin.app (free, no signup). Construct a Google search URL with the gl=us and uule location parameters. This returns localized SERPs without a VPN. It is the cleanest way to simulate "what does a diner in [your city] see right now?"

2. Trigger AI Overviews by using natural-language conversational queries instead of short keyword strings. Per Semrush's 10M-keyword study, AI Overviews appear on roughly 16% of queries (down from a July 2025 peak of 24.61%). Conversational, multi-clause queries trigger them more reliably.

3. Note whether your domain appears in the "supporting links" carousel under the AI Overview, even if the answer text does not name you. That carousel is real source attribution and counts.
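If you would rather script the valentin.app-style URL than click through its UI, the sketch below builds a localized Google search URL with the gl and uule parameters. The uule encoding shown (a fixed `w+CAIQICI` prefix, a length-key character, then Google's canonical location name) is the widely shared reverse-engineered scheme, not an official Google API, so treat it as an assumption and confirm the SERP actually localizes.

```python
from urllib.parse import urlencode, quote

# Widely shared (unofficial) length-key alphabet for the uule parameter.
_UULE_KEY = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789-_"

def localized_google_url(query: str, canonical_location: str, gl: str = "us") -> str:
    """Build a Google search URL pinned to a location, no VPN needed.

    canonical_location must be Google's canonical name, e.g.
    "Austin,Texas,United States". The uule scheme is reverse-engineered,
    not an official API -- verify the SERP localizes before trusting it.
    """
    uule = "w+CAIQICI" + _UULE_KEY[len(canonical_location)] + canonical_location
    return "https://www.google.com/search?" + urlencode(
        {"q": query, "gl": gl, "hl": "en", "uule": uule}, quote_via=quote
    )

url = localized_google_url(
    "what are the best date night restaurants in austin",
    "Austin,Texas,United States",
)
print(url)
```

Open the printed URL in a clean browser profile and record whether an AI Overview appears and whether your domain is in its supporting links.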

Bing Copilot (copilot.microsoft.com)

1. Open Edge in InPrivate mode, or any browser in incognito.

2. Run your queries directly in Copilot's chat interface.

3. The bigger move is verifying your domain in Bing Webmaster Tools (free) and turning on the AI Performance report. This is the only free first-party AI citation report from a major engine in 2026 (more on this in section 6).

Claude (claude.ai)

1. Free Claude has web fetch (Claude-User), but only when you explicitly ask Claude to read a URL. Claude's standalone search uses Claude-SearchBot, per Anthropic's documentation.

2. To force search behavior, ask Claude something like: "Search the web for the best Korean BBQ restaurants in [your city] in 2026." This pushes Claude to retrieve live results instead of relying on training data.

3. Claude's citation behavior is more conservative than ChatGPT or Perplexity. If you appear here, you are likely on solid first-party content with clean schema markup.

Google Gemini

1. Test in gemini.google.com signed-out (limited features) and signed-in.

2. For Google AI Mode (the conversational search interface inside Google.com), use the dedicated tab. AI Mode and Gemini draw from different sources, so track both. Gemini's Gen AI traffic share grew from 5.7% to 21.5% year-over-year per The Digital Bloom, making it the fastest grower of any AI engine.

The 12 tools landscape (Q2 2026 pricing, in three tiers)

The space added at least 5 new entrants in the last 6 months. Here is the field, organized by what an operator can actually afford.

Free and freemium

| Tool | Coverage | Pricing | Notes |
| --- | --- | --- | --- |
| **HubSpot AI Search Grader** | GPT-5.2, Perplexity, Gemini | Free, no credit card | Returns a 100-point score across sentiment, presence quality, brand recognition, share of voice, and market competition. Best place to start. |
| **Mangools AI Search Grader** | 8 models (ChatGPT, Perplexity, Claude, Gemini, Grok, Mistral, DeepSeek, Llama 4) | Free; paid AI Search Watcher from $15.60/mo | The lowest-priced paid tier in the entire space. Worth the upgrade if free runs out. |
| **Gumshoe AI** | 11 AI models | Free tier (3 reports), then $0.10 per conversation pay-as-you-go | Persona-based methodology. No per-seat fees. |
| **Bing Webmaster Tools AI Performance** | Microsoft Copilot, Bing AI summaries | Free (public preview Feb 2026) | Microsoft's official first-party citation report. Tracks Total Citations, Avg Cited Pages, Grounding Queries, and Page-level Data. |

Operator-friendly paid ($29 to $200 per month)

| Tool | Coverage | Pricing | Notes |
| --- | --- | --- | --- |
| **Otterly AI** | ChatGPT, AI Overviews, Perplexity, Claude, Copilot (Gemini and AI Mode are add-ons) | Lite $29, Standard $189, Premium $489, Pro $989 | Cheapest entry in the AI-first category. |
| **LLMrefs** | ChatGPT, Claude, Gemini, AI Overviews, Perplexity, Grok | $79/mo single plan, 50 keywords, 20+ countries | Simple keyword-cited model. |
| **Peec AI** | ChatGPT, Perplexity, AI Overviews | Starter €89, Pro €199, Enterprise €499 | Country-aware, 25 to 300+ prompts. |
| **Semrush AI Visibility Toolkit** | ChatGPT, Gemini, Perplexity, AI Overviews, AI Mode | $99/mo standalone; included in Semrush One ($199 to $549/mo) | Add 50 prompts for $60/mo. |
| **SE Ranking AI Visibility Tracker** | ChatGPT, AI Overviews, Perplexity, Gemini, plus 2 more (6 total) | Essential $65, Pro $119 (20% annual discount) | Bundled with full SEO suite. |

Restaurant and local-business specific

| Tool | Coverage | Pricing | Notes |
| --- | --- | --- | --- |
| **Local Falcon AI Visibility** | ChatGPT, AI Overviews, AI Mode, Gemini, Grok | Credit packs $24.99 to $199.99/mo | Geo-grid heatmap. Local-first. 100 free credits on signup. **Recommended for restaurants.** |
| **AthenaHQ** | Up to 8 LLMs (ChatGPT, Perplexity, AI Overviews, AI Mode, Gemini) | From $295/mo | Has restaurant-vertical pages tracking breakfast / brunch / dinner / occasion query buckets. |

Enterprise / agency tier (likely overkill for indie operators)

| Tool | Coverage | Pricing | Notes |
| --- | --- | --- | --- |
| **Profound** | ChatGPT, Perplexity, AI Overviews, Copilot, Gemini, Grok, Meta AI, DeepSeek | Starter $99 (50 prompts, ChatGPT only), Lite $499, Growth $399 to $5,000+ | Just raised a $35M Series B from Sequoia. Sales-led. |
| **Ahrefs Brand Radar** | AI Overviews, AI Mode, ChatGPT, Perplexity, Gemini, Copilot | $199/mo per index add-on (requires Ahrefs from $129/mo); full coverage runs $828 to $1,148/mo | Draws from 250M+ real prompts. |
| **Bluefish AI** (formerly ZipTie) | All major LLMs | Quote-based, custom only | Fortune 500 focus. |

For a single-location operator at $40K to $400K monthly, the recommended stack is: HubSpot AI Search Grader (free) + Bing Webmaster Tools AI Performance (free) + Local Falcon credit pack ($24.99/mo) + manual tracking spreadsheet. Total monthly cost: $24.99. Total coverage: ChatGPT, Perplexity, Google AI Overviews, AI Mode, Gemini, Bing Copilot, with geo-grid heatmaps for local intent.

GA4 setup for AI referral traffic (the channel-ordering trick)

GA4's default channel grouping does not separate AI traffic. AI sessions get bucketed under Referral or Direct, which makes them invisible at the channel level. The fix is a one-time custom channel group plus the channel-ordering step almost no operator has done.

Step-by-step

1. Open GA4 > Admin > Data Display > Channel Groups. Click Create new channel group (or edit Default).

2. Add a new channel and label it AI Search (or AI Tools).

3. Set the condition: Source matches regex with the pattern below.

```text
(chatgpt\.com|chat\.openai\.com|claude\.ai|gemini\.google\.com|perplexity\.ai|copilot\.microsoft\.com|deepseek\.com|meta\.ai|grok\.com)
```

4. Drag the new AI Search channel ABOVE Referral in the channel ordering. GA4 evaluates channels top-down, and if AI Search sits below Referral, AI sessions get bucketed as generic referrals before the AI rule ever fires. This is the step most setup guides skip and the single biggest reason operators say their AI tracking "isn't showing anything."

5. Save the channel group.
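Before relying on the new channel, it is worth sanity-checking that the regex alternation actually covers the hostnames you expect GA4 to report as session source. A quick Python sketch (GA4 applies its own matching semantics; this just confirms the pattern itself is sound):

```python
import re

# The same alternation used in the GA4 channel condition above.
AI_SOURCE_PATTERN = re.compile(
    r"chatgpt\.com|chat\.openai\.com|claude\.ai|gemini\.google\.com|"
    r"perplexity\.ai|copilot\.microsoft\.com|deepseek\.com|meta\.ai|grok\.com"
)

def is_ai_source(source: str) -> bool:
    """True if a GA4 session source should land in the AI Search channel."""
    return bool(AI_SOURCE_PATTERN.search(source))

for source in ["chatgpt.com", "copilot.microsoft.com", "google", "(direct)"]:
    print(source, "->", is_ai_source(source))
```

If a new AI engine starts sending you traffic, add its hostname to the alternation in both places: this check and the GA4 condition.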

Custom audience for AI traffic

In Admin > Audiences > New audience, set the condition to Session source matches the same regex pattern. Use this audience inside GA4 Explorations to compare AI-arrival behavior (pageviews per session, conversion rate, average session duration) against organic search and direct.

The dark-traffic problem

A SearchSignal analysis of 446,405 AI-driven visits found that 70.6% of AI traffic arrives without referrer headers and gets misclassified as Direct in GA4. Three structural causes:

1. Free ChatGPT users do not send referrer data. Logged-in Plus users sometimes do, but most consumers are on the free tier.

2. Mobile app traffic from the ChatGPT and Claude apps strips referrer headers entirely and shows up as Direct.

3. Many AI links go through redirects that obscure the source domain.

The counterintuitive finding from the same dataset: AI Direct traffic converts at 10.21% versus 2.46% for ordinary Direct. That is a 4.1x premium on what most operators dismiss as un-attributed noise. Treat any unexplained spike in Direct traffic as a probable AI signal, especially if it correlates with a content publish, a Google Business Profile update, or a Reddit mention.

ChatGPT now appends utm_source=chatgpt.com to many outbound links, which helps. You can also self-tag AI-friendly destinations on your own site by appending utm_source=ai-test to URLs you share in answers, comparison content, and llms.txt-style aggregator files, so you can verify on the back end which surfaces AI engines actually fetched.
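Self-tagging is a one-liner with the standard library. A sketch that appends utm_source to any URL while preserving existing query parameters (the ai-test value is the example tag from above; the domain is illustrative):

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

def tag_for_ai(url: str, source: str = "ai-test") -> str:
    """Append utm_source so back-end analytics can confirm AI fetches."""
    parts = urlsplit(url)
    query = dict(parse_qsl(parts.query))  # keep any existing parameters
    query["utm_source"] = source
    return urlunsplit(parts._replace(query=urlencode(query)))

print(tag_for_ai("https://example-restaurant.com/menu"))
# https://example-restaurant.com/menu?utm_source=ai-test
```

Use the tagged URLs anywhere you expect AI engines to pick them up, then filter for utm_source=ai-test in GA4 to see which surfaces were actually fetched.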

Bing Webmaster Tools AI Performance: the free first-party report

Bing Webmaster Tools added an AI Performance report in February 2026. It is the only free first-party AI citation report from a major engine in 2026. Microsoft's documentation describes it as a view of "how publisher content appears across Microsoft Copilot, AI-generated summaries in Bing, and select partner integrations."

The report tracks four things:

1. Total Citations. Number of times your domain was cited as a source in AI-generated answers across Copilot, Bing summaries, and partner integrations.

2. Avg Cited Pages. Which of your URLs gets cited (homepage vs. menu vs. blog post vs. about page).

3. Grounding Queries. "Key phrases the AI used when retrieving content that was referenced in AI-generated answers." This is the closest thing in 2026 to a Google Search Console queries report for AI.

4. Page-level Data. Which specific URLs are getting cited and at what rate.

To turn it on: verify your domain in Bing Webmaster Tools (free, takes about 5 minutes), then look for the AI Performance tab. There is no integration setup. The report populates automatically.

This is the single highest-leverage free thing in this entire post. If you do nothing else from this article, do this.

The tracking spreadsheet template

Manual tracking only works if you write it down. Here is the column structure synthesized from Search Engine Land's DIY framework and Am I Cited:

```text
Date | Platform | Query | Query Bucket | Was Brand Mentioned (Y/N) |
Position in Answer (1st mention / mid / buried) | Cited as Source URL (Y/N) |
Which page or URL cited | Competitors mentioned | Sentiment (+ / 0 / -) |
Notes on response | Screenshot link
```

12 columns. 8 to 10 queries. 3 platforms (ChatGPT, AI Overviews, and Perplexity make the cleanest starter set). 24 to 30 rows per weekly tracking session. About 30 minutes once it is routine.
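If you prefer to generate the tracker programmatically rather than hand-build it in Sheets, here is a minimal sketch that renders the 12 columns as CSV (labels abbreviated slightly; adjust to taste):

```python
import csv
import io

COLUMNS = [
    "Date", "Platform", "Query", "Query Bucket", "Was Brand Mentioned (Y/N)",
    "Position in Answer", "Cited as Source URL (Y/N)", "Which Page or URL Cited",
    "Competitors Mentioned", "Sentiment (+/0/-)", "Notes", "Screenshot Link",
]

def render_tracker(rows: list[dict]) -> str:
    """Render tracking rows as CSV text; save to disk or paste into Sheets."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=COLUMNS)  # missing keys become ""
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

print(render_tracker([{
    "Date": "2026-05-04", "Platform": "ChatGPT",
    "Query": "best italian restaurants in austin", "Query Bucket": "cuisine+city",
    "Was Brand Mentioned (Y/N)": "Y", "Position in Answer": "mid",
    "Cited as Source URL (Y/N)": "N", "Competitors Mentioned": "3",
    "Sentiment (+/0/-)": "0", "Notes": "named but not linked",
}]))
```

Append 24 to 30 rows per weekly session and the file becomes your week-over-week trend line for free.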

Frequency and what good looks like for restaurants

Cadence

  • Weekly or bi-weekly is realistic for manual tracking. Am I Cited recommends this cadence for operators without API automation.
  • Daily is only feasible with full API automation and is not necessary for an independent restaurant.
  • Monthly is too slow. AI Overview rates have swung from 6.49% to 24.61% to 15.69% inside a single year per Semrush's 10M-keyword study. Monthly checkpoints miss volatility windows.

Restaurant-specific benchmarks

A general "60% citation rate is strong" benchmark is calibrated to B2B SaaS and is unrealistic for local restaurants. Per Local Falcon's restaurant-specific data, here is the operator-grade scale:

  • Floor (we have a problem). Brand never cited across 10 or more tracked queries in 4 weeks. 83% of restaurants currently sit here.
  • Baseline (alive). Brand mentioned in 1 to 2 of 10 tracked queries on at least one platform.
  • Competitive. Brand mentioned in 3 to 5 of 10 tracked queries across at least 2 platforms, with at least one source-link to your domain.
  • Strong. Brand cited as a source URL on 30% or more of tracked queries on the dominant platform for your local market (currently ChatGPT, with 64.5% of Gen AI website traffic share in January 2026 per The Digital Bloom).
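The scale above reduces to a small decision function. A sketch with thresholds taken directly from the four tiers (function and argument names are illustrative):

```python
def visibility_tier(mentions_of_10: int, platforms_with_mention: int,
                    source_links: int, citation_rate_top_platform: float) -> str:
    """Map 10-query tracking results onto the restaurant benchmark scale."""
    if citation_rate_top_platform >= 0.30:      # strong: cited on 30%+ of
        return "strong"                         # queries on the top platform
    if (mentions_of_10 >= 3 and platforms_with_mention >= 2
            and source_links >= 1):             # competitive: 3-5 of 10, 2+
        return "competitive"                    # platforms, 1+ source link
    if mentions_of_10 >= 1:                     # baseline: 1-2 of 10 anywhere
        return "baseline"
    return "floor"                              # floor: never cited

print(visibility_tier(0, 0, 0, 0.0))   # floor
print(visibility_tier(2, 1, 0, 0.0))   # baseline
print(visibility_tier(4, 2, 1, 0.10))  # competitive
```

Re-run it on each weekly baseline and you have an honest one-word answer to "are we getting anywhere?"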

Where to spend optimization time first

Channel share for restaurants is concentrated:

  • ChatGPT: ~64.5% of Gen AI website traffic (January 2026, down from 86.7% a year prior).
  • Gemini: 21.5%, up from 5.7% YoY. Fastest grower; prioritize over Perplexity for restaurant intent.
  • Perplexity: smaller share but cites sources 77%+ of the time, which makes it the best platform for "did my page get cited" auditing.
  • Bing Copilot: smaller share, but free first-party data via Bing Webmaster Tools makes it the second-easiest engine to track.

If you are going to optimize for one engine first, optimize for ChatGPT. If you are going to optimize for two, add Gemini, not Perplexity.

Bottom line

You cannot improve a number you are not tracking, and AI search is the channel where most restaurants do not even know what their number is. The operator-grade move for Q2 2026 is:

1. Run the manual baseline this week. 8 queries, 3 platforms, 1 spreadsheet, about 90 minutes.

2. Verify your domain in Bing Webmaster Tools and turn on AI Performance. 5 minutes, free, the only first-party data you have.

3. Fix your GA4 channel group and drag AI Search above Referral. 15 minutes.

4. Add Local Falcon at $24.99/mo for the geo-grid heatmap on local queries. Optional after week 4.

5. Re-run the baseline weekly. Compare week over week. Track which optimization changes (schema updates, GBP photos, llms.txt edits, review velocity) move which metric.

If you do that for 8 weeks and your citation rate is still zero across 10 queries, the optimization layer is the problem, not the measurement layer. Read the optimization playbook for ChatGPT, Perplexity, Claude, Google AI Overviews, and Bing Copilot, implement the schema and llms.txt sections, then come back and re-measure.

DirectOrders ships restaurant-specific JSON-LD schema (Restaurant, Menu, MenuItem, AggregateRating where applicable), an llms.txt file at the domain root, and an AI-readable menu API on every restaurant we onboard. That handles the optimization side. The measurement side is on the operator. This post is the playbook.

Frequently Asked Questions

How do I manually check whether ChatGPT recommends my restaurant?

Open chatgpt.com, click the speech-bubble icon top-right to start a Temporary Chat (OpenAI confirms these are not saved or used for training), toggle the Search tool ON in the composer to force live web retrieval, and run your highest-intent restaurant queries: 'best [your cuisine] in [your city]', 'restaurants near [your nearest landmark]', and 'is [your restaurant name] good?'. Run each query 3 times because AI answers are non-deterministic. Record whether your name appears, whether ChatGPT links to your domain as a source, and which competitors get mentioned. Use a clean browser profile or VPN if you want to test from outside your home city, because ChatGPT now infers location from your IP for 'near me' queries.


Topics:

ai-search, ai-visibility, aeo, geo, chatgpt, perplexity, google-ai-overviews, bing-copilot, claude, ga4, restaurant-marketing, restaurant-seo

Ready to grow your direct orders?

See how DirectOrders can help your restaurant keep more revenue and own your customer relationships.