The Sources AI Trusts Are Changing Overnight: Semrush Just Proved It!

When people think about visibility, they still think in terms of rankings: position one, page one, blue links. However, that’s not how LLMs decide what to show. They don’t rank pages; they assemble answers. And to do that, they rely on the domains they trust most.

That’s why citations matter. They’re the clearest signal of which sources an AI model sees as credible enough to shape its response.

The recent Semrush study makes this impossible to ignore. Domains that showed up consistently in AI answers one month dropped sharply the next. Others rose quickly. Entire trust patterns shifted in weeks, all based on what LLMs found reliable, useful, or authoritative for different topics.

In other words, AI isn’t just pulling information. It’s actively choosing whose voice gets included.

This blog breaks down what that means, what the study uncovered, and how brands can use these insights to strengthen their presence inside AI-generated answers.

Key Takeaways

  • AI engines choose brands through citations, not rankings. Visibility now depends on which domains a model trusts enough to pull into its answers.
  • Trust is volatile and topic-specific; the Semrush study shows Reddit and Wikipedia collapsing while publishers surge, proving authority is recalibrated weekly.
  • Each AI platform has its own trust graph: ChatGPT, Perplexity, and Google AI Mode cite different ecosystems, making visibility inherently multi-platform.
  • Interpretability drives citations: models reward structured, well-referenced, context-rich content over generic volume or domain age.
  • Brands must map and influence their category’s trusted sources by building presence across high-signal domains and optimising content for how AI constructs answers, not how Google ranks pages.

A Quick Look at the Study

Between July and October, Semrush analysed hundreds of thousands of AI-generated answers across ChatGPT, Perplexity, and Google AI Mode. Instead of looking at rankings, they focused on which domains these AI systems cited most often when constructing responses.

Each week, they tracked the top 25 cited domains per platform, revealing how often specific websites appeared as sources inside AI answers. Over 13 weeks, this created a clear pattern of which domains AI trusted, when that trust shifted, and how each platform built its own reference ecosystem.

The result is a real-time view of how AI models form their understanding of credible information and which domains influence the answers users see.
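The weekly tracking described above boils down to counting which domains appear in a batch of cited URLs and ranking them by share. A minimal sketch of that step, using hypothetical collected data (the function name and sample URLs are illustrative, not from the study):

```python
from collections import Counter
from urllib.parse import urlparse

def weekly_top_domains(citation_urls, top_n=25):
    """Rank the domains cited most often in one week's AI answers.

    `citation_urls` is a hypothetical flat list of URLs extracted from
    AI-generated answers collected during a single week.
    """
    domains = Counter(
        urlparse(url).netloc.removeprefix("www.") for url in citation_urls
    )
    total = sum(domains.values())
    # Return (domain, share-of-citations) pairs, most-cited first.
    return [(d, round(c / total, 3)) for d, c in domains.most_common(top_n)]

# Toy sample; a real run would use thousands of cited URLs per week.
week_urls = [
    "https://www.reddit.com/r/seo/abc",
    "https://en.wikipedia.org/wiki/SEO",
    "https://www.reddit.com/r/marketing/xyz",
    "https://www.forbes.com/some-article",
]
print(weekly_top_domains(week_urls))
```

Repeating this for each of the 13 weeks, per platform, yields exactly the kind of time series the study is built on.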

What the Study Showed

The Semrush data exposed some major shifts in how AI models choose their sources, and the numbers were sharp enough to signal a structural change in trust.

Reddit’s dominance collapsed

At the start of the study, Reddit appeared in nearly 60% of ChatGPT’s citations. By mid-September, that dropped to around 10%. A nearly sixfold collapse in presence within a matter of weeks shows how quickly AI can rebalance its dependence on user-generated content.

Wikipedia also saw a steep decline

Wikipedia citations fell from ~55% to below 20% on ChatGPT. The decline wasn’t as dramatic as Reddit’s, but it clearly signaled that LLMs were diversifying away from encyclopedic, single-source references.

Traditional publishers surged

As UGC citations fell, publisher-driven content started appearing more frequently in ChatGPT outputs. Domains such as:

  • Forbes
  • Medium
  • major news sites

saw noticeable increases in citation frequency, suggesting that structured editorial content became more influential in the model’s answer construction.

Each platform revealed a different trust ecosystem

  • ChatGPT showed the largest volatility, especially after mid-September.
  • Google AI Mode kept a more balanced mix, consistently citing LinkedIn, YouTube, Google domains, publisher sites, and structured-info sources.
  • Perplexity remained the most stable, with smaller shifts across the 13 weeks and a consistent mix of community, publisher, and reference domains.

Topic-specific trust clusters started becoming visible

The fluctuations weren’t random. AI engines appeared to adjust citations based on:

  • the nature of the query,
  • the depth of information required,
  • and the reliability of topic-specific sources.

This is where the study moved from “interesting” to “strategically important”: it highlighted a shift from broad domain authority to topical authority.

What These Shifts Actually Mean

The movements in this study aren’t random fluctuations; they’re signals of how AI is redefining authority.

1. Authority is becoming dynamic, not inherited

A domain can no longer rely on historical credibility or domain age. When Reddit’s citation share can fall nearly sixfold in a few weeks, it shows that LLMs recalibrate trust continuously. Brands must assume their perceived authority is fluid and must be earned repeatedly.

2. Topical authority is overtaking domain authority

The drop in all-purpose giants like Reddit and Wikipedia, paired with the rise of publisher and expert content, indicates that LLMs increasingly value subject-specific depth. This means:

You’re authoritative only where your content demonstrates expertise, not everywhere your domain exists.

3. The answer-first world requires diversified evidence

LLMs don’t want a single perspective anymore. The shift toward:

  • news sites,
  • expert blogs,
  • research-driven platforms,
  • structured educational sources

shows that AI is constructing multi-angle answers, not echoing one dominant source.

4. Platform trust graphs are diverging

ChatGPT, Perplexity, and Google AI Mode are no longer converging. They’re diverging, each with a distinct citation universe. This marks the end of a single visibility strategy. Brands will need platform-specific AI visibility playbooks.

What Brands Should Do Now

If authority is becoming dynamic, contextual, and platform-specific, the way brands build visibility has to evolve too. The opportunity is clear: AI is still shaping its trust graph, and brands that adapt early will lock in long-term visibility.

Map the domains AI already trusts in your niche

Not the global top domains, but your topic-level trust cluster. Every industry has its own set of sources LLMs cite most often.

Your first step: Identify the 20–50 domains frequently referenced for your topics across ChatGPT, Google AI Mode, and Perplexity. This becomes your citation footprint.
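One way to build that citation footprint is to merge the domains you observe across platforms and prioritise those cited on more than one engine. A rough sketch under stated assumptions (the function name, thresholds, and sample data are hypothetical):

```python
from collections import Counter

def citation_footprint(citations_by_platform, min_count=2, limit=50):
    """Merge cited-domain observations from several AI platforms into
    one ranked "citation footprint" for a topic.

    `citations_by_platform` maps a platform name to a list of domains
    observed in its answers (hypothetical collected data).
    """
    merged = Counter()
    seen_on = {}  # domain -> set of platforms that cited it
    for platform, domains in citations_by_platform.items():
        counts = Counter(domains)
        merged.update(counts)
        for d in counts:
            seen_on.setdefault(d, set()).add(platform)
    # Prefer domains cited on many platforms, then by overall frequency.
    ranked = sorted(
        merged, key=lambda d: (len(seen_on[d]), merged[d]), reverse=True
    )
    return [d for d in ranked if merged[d] >= min_count][:limit]

# Toy observations per platform; real inputs would come from prompt sampling.
sample = {
    "chatgpt": ["forbes.com", "reddit.com", "forbes.com"],
    "perplexity": ["forbes.com", "wikipedia.org"],
    "google_ai_mode": ["wikipedia.org", "forbes.com"],
}
print(citation_footprint(sample))
```

Weighting cross-platform presence above raw frequency is a design choice: a domain cited everywhere is a stronger trust signal for your niche than one cited heavily on a single engine.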

Build presence on those high-signal domains

AI citation is a reflection of human authority signals. You earn visibility by being mentioned where AI already looks. Tactical ways to do this:

  • contribute expert articles,
  • get quoted in industry pieces,
  • publish on respected niche platforms,
  • strengthen PR placements,
  • collaborate with authoritative creators or analysts.

The goal: Embed your brand inside the ecosystems AI trusts.

Optimise your content for interpretability, not just indexing

LLMs don’t just crawl; they interpret. The domains that gained share combined clear semantic structure with genuine topical depth.

Strengthen your content by:

  • using clean headings and hierarchy,
  • embedding external citations,
  • adding structured explanations,
  • avoiding generic filler,
  • linking relevant supporting pages,
  • clarifying context and intent per page.

This makes your content not just indexable, but cite-worthy.

Develop platform-specific AI visibility strategies

ChatGPT citations ≠ Perplexity citations ≠ Google AI Mode citations.
A single approach won’t work.

Build separate visibility playbooks by:

  • analysing each platform’s citation universe,
  • identifying your gaps,
  • understanding which content types each model rewards,
  • adapting tone, structure, and placement accordingly.

AI search is now multi-channel. Your strategy has to be, too.

Monitor citation volatility as a core KPI

Just as SEO teams track rankings, AI visibility now needs its own metrics. Questions to answer monthly:

  • Which domains gained influence?
  • Which ones did we lose ground to?
  • Where is topical authority shifting?
  • What new domains are emerging as credible?
  • How do different AI engines perceive our category?

Treat AI visibility as a long-term advantage, not a tactic

The trust graph being reshaped today will influence how AI answers questions tomorrow. Brands that invest now in structured, credible, well-referenced content will benefit disproportionately when models stabilise.

This is the opportunity window.

AI isn’t just generating answers; it’s deciding whose voice gets carried forward. And as the study shows, that decision is fluid. Authority rises and falls in weeks, trust clusters shift constantly, and every platform builds its own reference universe.

This is exactly where we can help you. ReSO helps brands see what AI sees:

  • Which domains shape your category
  • Where your brand appears (or doesn’t)
  • How often competitors get cited
  • Which sources influence specific prompts
  • And, building on all of the above, what steps will increase your presence inside AI-generated answers

AI search is no longer just about ranking. It’s about earning trust in the places AI already listens to.

And as Rand Fishkin said, “The best way to sell something is to earn the awareness, respect, and trust of those who might buy.” Today, that trust is expressed through citations.

Glossary

AI Citations: References to external domains that AI models use when constructing answers. Citations indicate which sources the model trusts and whose information shapes the final response.

Citation Volatility: The rate at which an AI model’s trusted sources rise or fall over time. High volatility means the model is frequently recalibrating which domains it considers credible.

Trust Graph: The internal network of domains, sources, and signals an AI engine relies on to generate answers. Each platform (ChatGPT, Perplexity, Google AI Mode) has a distinct trust graph.

Source Weighting: How heavily an AI engine relies on certain domains when synthesizing an answer. Weighting varies by platform and can shift based on recency, structure, or topical relevance of content.

Evidence Stack: The combination of publisher content, expert commentary, structured references, and supporting materials that help AI validate and include a source in its answer.

Swati Paliwal

Swati, Founder of ReSO, has spent nearly two decades building a career that bridges startups, agencies, and industry leaders like Flipkart, TVF, MX Player, and Disney+ Hotstar. A marketer at heart and a builder by instinct, she thrives on curiosity, experimentation, and turning bold ideas into measurable impact. Beyond work, she regularly teaches at MDI, IIMs, and other B-schools, sharing practical GTM insights with future leaders.