We Tested 1,000 Prompts Across 5 AI Engines. Here's What Gets Cited.
Original research - 1,000 commercial-intent prompts across ChatGPT, Perplexity, Gemini, Claude, and Copilot. Analysis of which brands get cited, how often, and why.
We ran 1,000 commercial-intent prompts across five AI engines - ChatGPT, Perplexity, Gemini, Claude, and Copilot - and analyzed which brands get cited, how often, and what signals correlate with citation frequency. This is the largest cross-engine citation analysis we are aware of, and the findings challenge several common assumptions about AI search optimization.
Research Methodology
Prompt design: 1,000 prompts across 10 B2B software categories (project management, CRM, developer tools, cybersecurity, cloud infrastructure, data analytics, HR tech, marketing automation, accounting software, and communication tools). Each prompt was a natural-language buying query: “what’s the best [category] for [use case],” “compare [brand] vs [brand],” and “recommend a [category] for [company type].”
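A template-times-category-times-use-case grid like the one described can be expanded programmatically. The sketch below is illustrative only - the template strings, categories, and use cases shown are a hypothetical subset, not the study's actual inputs:

```python
from itertools import product

# Hypothetical subset of the prompt grid (illustrative names, not the
# study's actual templates or categories).
CATEGORIES = ["project management", "CRM", "developer tools"]
USE_CASES = ["remote teams", "startups", "enterprise compliance"]
TEMPLATES = [
    "what's the best {category} software for {use_case}",
    "recommend a {category} tool for {use_case}",
]

def generate_prompts(templates, categories, use_cases):
    """Expand every template against every category/use-case pair."""
    return [
        t.format(category=c, use_case=u)
        for t, c, u in product(templates, categories, use_cases)
    ]

prompts = generate_prompts(TEMPLATES, CATEGORIES, USE_CASES)
print(len(prompts))  # 2 templates x 3 categories x 3 use cases = 18
```

Scaling the real grid to 10 categories and enough use-case variants yields the 1,000-prompt set.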
Engines tested: ChatGPT (GPT-4o), Perplexity (Pro), Gemini (1.5 Pro), Claude (3.5 Sonnet), and Microsoft Copilot. All tests were run in March 2026.

Data collected: For each response, we documented every brand mentioned, whether the brand was explicitly recommended, whether citations/sources were provided, and the sentiment of the description (positive, neutral, negative, or inaccurate).
Key Findings
Finding 1: The Top 3 Brands Capture 68% of Citations
In each category, the 3 most-cited brands captured an average of 68% of all citations. The #4-#10 brands shared the remaining 32%. Brands outside the top 10 were virtually never cited.
Implication: AI search is even more winner-take-most than Google search. There is no “page 2” in AI recommendations. You are either in the top 3 or you are invisible.
Finding 2: Entity Authority Is the #1 Predictor of Citation
We scored each brand on five dimensions and correlated them with citation frequency:
| Signal | Correlation with Citation Frequency |
|---|---|
| Entity authority (Wikipedia, directories, publications) | 0.84 |
| Content structure (structured data, extractable facts) | 0.71 |
| Domain authority (Moz DA) | 0.63 |
| Content recency (last update date) | 0.58 |
| Social proof (reviews, awards, community presence) | 0.52 |
Entity authority - measured by Wikipedia/Wikidata presence, industry directory listings, and authoritative publication mentions - was the strongest predictor. Domain authority, while still relevant, was a weaker signal than expected.
Finding 3: Perplexity Cites 3x More Sources Than ChatGPT
Citation behavior varies dramatically by engine:
| Engine | Avg. Sources Cited per Response | Citation Style |
|---|---|---|
| Perplexity | 6.2 | Explicit inline citations with links |
| Copilot | 3.8 | Footnote-style citations |
| Gemini | 2.1 | Occasional source references |
| ChatGPT | 1.4 | Rarely cites sources explicitly |
| Claude | 0.8 | Almost never cites external sources |
Implication: Perplexity is the most citation-rich engine - and the best testing ground for GEO effectiveness. If you can get cited by Perplexity, you are generating visible, trackable results.
Finding 4: Structured Data Doubles Citation Likelihood
Brands with comprehensive structured data (Organization + Product/SoftwareApplication + FAQ schemas) were cited 2.1x more often than brands with no structured data or only basic Organization schema.
The most impactful schema types for citation:
- Product / SoftwareApplication - explicitly defines what your product is
- FAQ - provides extractable question-answer pairs
- Organization with sameAs - connects your entity to external references
- Review / AggregateRating - provides social proof signals
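A minimal JSON-LD sketch combining these schema types might look like the following. All names, prices, ratings, and URLs here are placeholders - swap in your own values and verified external references:

```json
{
  "@context": "https://schema.org",
  "@type": "SoftwareApplication",
  "name": "ExampleApp",
  "applicationCategory": "BusinessApplication",
  "operatingSystem": "Web",
  "offers": {
    "@type": "Offer",
    "price": "29.00",
    "priceCurrency": "USD"
  },
  "aggregateRating": {
    "@type": "AggregateRating",
    "ratingValue": "4.6",
    "ratingCount": "212"
  },
  "publisher": {
    "@type": "Organization",
    "name": "Example Inc.",
    "sameAs": [
      "https://en.wikipedia.org/wiki/Example",
      "https://www.wikidata.org/wiki/Q0000000"
    ]
  }
}
```

The sameAs links are what tie the Organization entity back to the external references that drive entity authority (Finding 2).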
Finding 5: Content Freshness Matters More Than Content Volume
Brands that updated their key pages within the last 90 days were cited 43% more often than brands with the same content quality but older last-modified dates. However, publishing more content (higher total page count) showed no correlation with citation frequency.
Implication: Update your existing authoritative pages frequently. Don’t publish more - publish better and keep it fresh.
Finding 6: 23% of AI Descriptions Contain Inaccuracies
Across all responses, 23% contained at least one factual inaccuracy about the brands mentioned - wrong features, outdated pricing, incorrect company descriptions, or misattributed capabilities. This rate was consistent across all five engines.
Implication: Brand accuracy monitoring is essential. If you're not checking what AI engines say about you, there's roughly a 1-in-4 chance they're saying something wrong.
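One simple way to monitor accuracy is to keep a fact sheet of verifiable brand claims and diff it against what an engine says. This is a minimal sketch assuming you have already extracted claims from a response (fact names and values here are hypothetical; the extraction step varies by engine):

```python
# Ground-truth facts about your brand, maintained by you.
FACT_SHEET = {
    "starting price": "$29/mo",
    "free tier": "yes",
    "SOC 2 certified": "yes",
}

# Claims extracted from a (hypothetical) AI engine response.
ai_claims = {
    "starting price": "$49/mo",  # outdated pricing
    "free tier": "yes",
    "SOC 2 certified": "yes",
}

def find_inaccuracies(truth, claims):
    """Return facts where the AI's claim disagrees with the fact sheet."""
    return {
        fact: (truth[fact], claims[fact])
        for fact in truth
        if fact in claims and claims[fact] != truth[fact]
    }

issues = find_inaccuracies(FACT_SHEET, ai_claims)
print(issues)  # flags the outdated price claim
```

Run a check like this monthly per engine and route any mismatches into your content-update queue.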
Per-Engine Insights
ChatGPT favors well-known brands with strong entity authority. It rarely cites sources but frequently recommends specific products by name. Smaller brands need exceptional entity authority to appear.
Perplexity is the most democratic engine - it cites a wider range of sources and is more responsive to recent content. It is the best engine for newer brands to gain visibility.
Gemini is heavily influenced by Google Search signals. Brands that rank well on Google tend to appear in Gemini recommendations. This makes Gemini the engine where SEO and GEO overlap most.
Claude provides the most balanced, neutral responses but is the hardest engine to get explicit citations from. It tends to describe product categories rather than recommend specific brands.
Copilot leverages the Bing index and favors structured data heavily. Brands with comprehensive schema markup perform disproportionately well on Copilot.
What This Means for Your GEO Strategy
- Invest in entity authority first. It is the strongest predictor of citation across all engines.
- Implement structured data immediately. It is the highest-impact, lowest-effort GEO action.
- Keep your content fresh. Update key pages every 90 days, even if the changes are minor.
- Monitor accuracy. Check what AI engines say about you monthly. Fix inaccuracies proactively.
- Optimize for Perplexity first. It is the most responsive engine and provides the most visible citation results.
This research will be updated quarterly as AI engine behavior evolves.
Book a free GEO strategy call to see how your brand performs across these same queries.
Get Recommended by AI.
Book a free 30-minute GEO strategy call. We check what ChatGPT, Perplexity, and Gemini say about your product right now - and show you how to improve it.
Talk to an Expert