
May 12, 2026

3 Research-Backed Tactics That Boost AI Citations by 40% (And the Free Skill That Applies Them For You)



TL;DR: Princeton, Georgia Tech, and the Allen Institute tested every common "GEO" tactic across 10,000 queries. Most did nothing. Three moved the needle — adding statistics (+40%), adding quotations (+37%), and citing credible sources (+30%). I packaged them into a free GEO skill you can drop into Claude Code and run against any page on your site. Here's the research, the tactics, and the download.

Search is splitting in two.

One half still goes to Google. The other half is being routed through ChatGPT, Claude, Gemini, Perplexity, and Google's own AI Overviews — and the rules for showing up there are not the rules you learned for SEO.

Three numbers that should be on every CMO's whiteboard:

  • 58% collapse in Position 1 CTR for keywords with AI Overviews between Dec 2023 and Dec 2025 (Ahrefs)

  • 37% of consumers now start searches with AI instead of Google

  • 60% of AI Overview citations come from URLs that don't rank in the top 20 organic results (AirOps, 2026)

That last one is the punchline. Ranking does not guarantee recommendation. AI models use parametric knowledge, fan-out queries, and abstraction layers that sit on top of retrieval. You can be invisible in Google and the most-cited source in ChatGPT for the same query. The reverse is also true.

So the question stops being "can we rank?" and becomes "when a buyer asks an AI for help, do we show up in the answer?"

That's GEO. Generative Engine Optimization. And unlike SEO, where the playbook took 15 years to mature, GEO already has peer-reviewed research telling us exactly what works.

The Research

In 2024, a team from Princeton, Georgia Tech, the Allen Institute for AI, and IIT Delhi published the first large-scale GEO study at KDD (the leading data mining conference). They tested nine optimisation tactics across 10,000 search queries and measured the change in visibility inside generative AI answers.

Most tactics did nothing. Keyword stuffing, fluency edits, easier-language rewrites — flat or negative impact. The kind of work most "AI SEO" tools still optimise for.

Three tactics outperformed everything else:

  • Adding statistics: +40% visibility uplift

  • Adding quotations: +37%

  • Citing credible sources: +30%

These aren't marginal gains. A 40% increase in visibility inside generative engines is the difference between being mentioned in the answer and not existing in the conversation.

The good news: all three are content-level changes. No technical migration. No new tech stack. They run on whatever CMS you already have.

The bad news: most teams aren't doing any of them at scale, and the ones that are apply them inconsistently — which is why citation share is still up for grabs.

Tactic 1: Statistics Addition (+40%)

Add specific, sourced numbers to your content.

AI models are trained to weight content that contains verifiable, cited data more heavily when constructing answers. Vague claims ("many companies struggle with...") get filtered. Specific claims with sources ("64% of B2B marketers report...") get surfaced.

What "Strong" looks like:

  • Hard numbers, not ranges or hedges

  • Source attribution inline (not just at the bottom)

  • Recent data (models bias toward recency — more on that in a moment)

  • Numbers that frame the problem, not just the solution

Before: AI search is changing how people find brands.

After: 37% of consumers now start searches with AI instead of Google, and AI referral traffic converts at 23x the rate of traditional organic (Seer Interactive / Ahrefs / Passionfruit, 2026).

The "after" version isn't longer because it's padded. It's longer because it's earning a citation. A model summarising "how is AI changing search behaviour?" can lift that sentence directly into an answer. The original can't.

Tactic 2: Quotation Addition (+37%)

Quote named experts, with attribution.

Generative models treat expert quotes the same way journalists do: as authoritative anchors that prove the claim is not just the author's opinion. Pages with attributed quotes get cited at a measurably higher rate than pages without them.

What "Strong" looks like:

  • Real people with real titles at real companies

  • Quotes that say something specific, not throat-clearing

  • Attribution in the sentence, not in a footnote

Before: Most marketing teams are underinvested in AI search optimisation.

After: "60% of AI Overview citations come from URLs that don't rank in the top 20 — most of our customers are leaving citation share on the table because they're still optimising for ranking, not recommendation" (AirOps, 2026 State of AI Search report).

The quote does two things at once: it transfers authority from a recognised source to your page, and it gives the model a clean, attributable unit to cite.

Tactic 3: Citing Credible Sources (+30%)

Reference authoritative third-party research, not just internal claims.

Models score content partly on how well it situates itself in a verifiable knowledge graph. A page that says "we believe X" is one signal. A page that says "we believe X, and here's the Stanford / Princeton / Gartner research that supports it" is several signals, all reinforcing each other.

What "Strong" looks like:

  • Linked sources to original research (not aggregator blogs)

  • Mix of academic, industry analyst, and primary data

  • Sources cited where the claim is made, not bundled at the end

Before: Updating content regularly improves how often AI tools cite it.

After: Pages updated within the last 30 days make up 76.4% of ChatGPT's most-cited content, and pages not refreshed quarterly are 3x more likely to lose citations (AirOps, 2026).

Notice the compounding effect: this single sentence applies all three tactics. Statistics. An implicit expert source. Cited credible research. This is what a GEO-optimised paragraph actually looks like.
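The three tactics are mechanical enough to sanity-check before publishing. Below is a rough, illustrative heuristic — not the skill itself, which reads context with a model — that counts percentage-style statistics, substantial quotations, and parenthetical source citations in a draft. The function name, thresholds, and sample draft are all assumptions for the sake of the sketch.

```python
import re

def geo_tactic_score(text: str) -> dict:
    """Rough per-draft check for the three KDD tactics.

    Heuristics only: counts percentage/multiplier claims, quoted
    passages long enough to be real quotes, and parenthetical source
    attributions containing a year. Flags obviously thin drafts.
    """
    stats = re.findall(r"\b\d+(?:\.\d+)?%|\b\d+x\b", text)            # "40%", "3x"
    quotes = re.findall(r"\u201c[^\u201d]{20,}\u201d|\"[^\"]{20,}\"", text)  # curly or straight quotes
    sources = re.findall(r"\(([^()]*\b(?:19|20)\d{2}[^()]*)\)", text)  # "(AirOps, 2026)"
    return {
        "statistics": len(stats),
        "quotations": len(quotes),
        "cited_sources": len(sources),
        # Illustrative thresholds: 3 stats, 1 quote, 2 sources per page
        "passes": len(stats) >= 3 and len(quotes) >= 1 and len(sources) >= 2,
    }

draft = (
    "37% of consumers now start searches with AI instead of Google "
    "(Ahrefs, 2026). Pages updated within 30 days make up 76.4% of "
    "ChatGPT's most-cited content (AirOps, 2026), and \"most customers "
    "are leaving citation share on the table\" per the same report. AI "
    "referral traffic converts at 23x the rate of organic (Seer, 2026)."
)
print(geo_tactic_score(draft))
```

A draft that fails this kind of check almost certainly reads as "vague claim, no anchor" to a generative engine too.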

What Most Teams Get Wrong

Two patterns I see repeatedly when I audit content for AI visibility:

They optimise for the wrong page types. Teams pour GEO effort into thought leadership posts and ignore commercial pages. The data says the opposite. "Best of" guides, comparison pages, product reviews, and category pages drive 56% of all AI citations (G2 / Radix). If your homepage and product pages don't follow GEO principles, the thought leadership work is decorative.

They forget offsite signals entirely. 85% of brand mentions in AI answers come from third-party sources (AirOps, 2026). Review platforms (G2, Trustpilot, Capterra) have a 3x higher chance of ChatGPT citation than owned content. Reddit, YouTube transcripts, and consistent author bios across the web all feed the model's picture of who you are. A site-only GEO strategy is a half strategy.

The GEO skill below accounts for both.

The Free Skill

I packaged the GEO playbook — including the three KDD tactics, the offsite signal checklist, the page-type priority order, and the freshness rules — into a Claude Code skill you can run against any URL or content draft.

It does three things:

  1. Audits content you give it against every GEO principle, scores the gaps, and suggests specific rewrites with statistics, quotes, and sources you can add

  2. Briefs new content from scratch using GEO principles, so the first draft is already optimised for AI citation

  3. Audits your brand's AI visibility by guiding you through prompt research — checking what ChatGPT, Claude, and Perplexity currently say about your brand and category, and identifying the recommendation gaps

No paid tools. No integrations. One markdown file, dropped into your .claude/commands/ folder, runs inside Claude Code (free tier available).
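For context on what "one markdown file" means: Claude Code custom commands are plain markdown files whose body becomes the prompt, with `$ARGUMENTS` standing in for whatever you pass to the command. The skeleton below is illustrative only — the filename and wording are my own, not the actual download:

```markdown
<!-- .claude/commands/geo-audit.md — illustrative skeleton, not the real skill -->
Audit the content at $ARGUMENTS against the three KDD GEO tactics:

1. Statistics: flag vague claims; suggest specific, sourced numbers.
2. Quotations: flag unattributed assertions; suggest expert quotes.
3. Credible sources: flag internal-only claims; suggest third-party research.

Then check page-type priority (commercial pages first) and list
offsite signal gaps (review platforms, Reddit, YouTube, author bios).
```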

Download the GEO skill →

The full skill includes the research citations, the priority page-type rules, the offsite signal checklist, and the workflow logic so the audit adapts to whether you give it a URL, a draft, or a brand question.

When to Run It

Three moments where the skill earns its keep:

  • Before publishing any new commercial page. Run the audit on the final draft. If statistics, quotes, or credible sources are thin, fix them before it goes live. Publishing a commercial page that ignores all three tactics is leaving a 40% visibility uplift on the table.

  • When refreshing your existing content library. Pages updated within 30 days make up the majority of ChatGPT citations. Pick your top 10 commercial pages, run each one through the audit, refresh them, and re-publish. Recency plus the three tactics compounds.

  • When briefing writers or agencies. Share the GEO principles before the first draft. If the writer knows that "Strong" means at least three cited statistics, one attributed quote, and two credible sources per page, the first draft comes back closer to citable. Stop retrofitting GEO after publication. Build it in from the brief.

If You Need More Than a Skill

The audit tells you what's missing. Fixing it at scale — refreshing the content library, building the offsite signal strategy, embedding GEO into the editorial workflow, and tying it to pipeline — is a different problem.

That's what I do. I embed directly with marketing teams to build organic growth engines that capture AI recommendation share before competitors close the gap. First deliverables in 7 days. I work across insurance, investment banking, fintech, SaaS, eCommerce, and wellness.

If you want a second pair of eyes on where your brand currently sits in AI answers and what it would take to move, book a 15-minute pipeline review.