Google’s AI Overviews may be relying on YouTube more than official medical sources when answering health questions, according to new research from SEO platform SE Ranking.
The study analyzed 50,807 German-language health prompts and keywords, captured in a one-time snapshot in December, with searches run from Berlin.
The report lands amid renewed scrutiny of health-related AI Overviews. Earlier this month, The Guardian published an investigation into misleading medical summaries appearing in Google Search. The outlet later reported Google had removed AI Overviews for some medical queries.
What The Study Measured
SE Ranking’s analysis focused on which sources Google’s AI Overviews cite for health-related queries. In that dataset, the company says AI Overviews appeared on more than 82% of health searches, making health one of the categories where users are most likely to see a generated summary instead of a list of links.
The report also cites consumer survey findings suggesting people increasingly treat AI answers as a substitute for traditional search, including in health. Among the figures cited: 55% of chatbot users trust AI for health advice, and 16% say they've ignored a doctor's advice because AI said otherwise.
YouTube Was The Most Cited Source
Across SE Ranking’s dataset, YouTube accounted for 4.43% of all AI Overview citations, or 20,621 citations out of 465,823.
The next most cited domains were ndr.de (14,158 citations, 3.04%) and MSD Manuals (9,711 citations, 2.08%), according to the report.
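The percentages line up with the raw counts. As a quick sanity check, here's a minimal Python sketch that reproduces the reported shares from the citation totals (the counts come straight from the report; the msdmanuals.com domain name is assumed here for illustration):

```python
# Citation counts reported by SE Ranking (December snapshot, German-language queries).
TOTAL_CITATIONS = 465_823

domain_citations = {
    "youtube.com": 20_621,
    "ndr.de": 14_158,
    "msdmanuals.com": 9_711,  # MSD Manuals; exact domain assumed for illustration
}

for domain, count in domain_citations.items():
    share = count / TOTAL_CITATIONS * 100
    print(f"{domain}: {count:,} citations ({share:.2f}% of all AI Overview citations)")

# Output matches the report's figures:
# youtube.com: 20,621 citations (4.43% of all AI Overview citations)
# ndr.de: 14,158 citations (3.04% of all AI Overview citations)
# msdmanuals.com: 9,711 citations (2.08% of all AI Overview citations)
```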
The authors argue that the ranking matters because YouTube is a general-purpose platform with a mixed pool of creators. Anyone can publish health content there, including licensed clinicians and hospitals, but also creators without medical training.
To check what the most visible YouTube citations looked like, SE Ranking reviewed the 25 most-cited YouTube videos in its dataset. It found 24 of the 25 came from medical-related channels, and 21 of the 25 clearly noted the content was created by a licensed or trusted source. It also warned that this set represents less than 1% of all YouTube links cited by AI Overviews.
Government & Academic Sources Were Rare
SE Ranking categorized citations into “more reliable” and “less reliable” groups based on the type of organization behind each source.
It reports that 34.45% of citations came from the more reliable group, while 65.55% came from sources “not designed to ensure medical accuracy or evidence-based standards.”
Within the same breakdown, academic research and medical journals accounted for 0.48% of citations, German government health institutions accounted for 0.39%, and international government institutions accounted for 0.35%.
AI Overview Citations Often Point To Different Pages Than Organic Search
The report compared AI Overview citations to organic rankings for the same prompts.
While SE Ranking found that 9 out of 10 domains overlapped between AI citations and frequent organic results, it says the specific URLs frequently diverged. Only 36% of AI-cited links appeared in Google’s top 10 organic results, 54% appeared in the top 20, and 74% appeared somewhere in the top 100.
The biggest domain-level exception in its comparison was YouTube. YouTube ranked first in AI citations but only 11th in organic results in its analysis, appearing 5,464 times as an organic link compared to 20,621 AI citations.
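For perspective, the scale of that divergence can be computed directly from the two reported counts (a quick illustrative calculation, not part of SE Ranking's methodology):

```python
# Reported counts for YouTube in SE Ranking's comparison.
ai_citations = 20_621        # appearances as an AI Overview citation
organic_appearances = 5_464  # appearances as an organic result

print(f"YouTube was cited {ai_citations / organic_appearances:.1f}x as often "
      "in AI Overviews as it appeared in organic results")
# -> YouTube was cited 3.8x as often in AI Overviews as it appeared in organic results
```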
How This Connects To The Guardian Reporting
The SE Ranking report explicitly frames its work as broader than spot-checking individual responses.
“The Guardian investigation focused on specific examples of misleading advice. Our research shows a bigger problem,” the authors wrote, arguing that AI health answers in their dataset relied heavily on YouTube and other sites that may not be evidence-based.
The Guardian subsequently reported that Google had removed AI Overviews for certain medical queries.
Google’s public response, as reported by The Guardian, emphasized ongoing quality work while also disputing aspects of the investigation’s conclusions.
Why This Matters
This report adds a concrete data point to a problem that’s been easier to talk about in the abstract.
I covered The Guardian’s investigation earlier this month, and it raised questions about accuracy in individual examples. SE Ranking’s research tries to show what the source mix looks like at scale.
Visibility in AI Overviews may depend on more than being the most prominent “best answer” in organic search. SE Ranking found many cited URLs didn’t match top-ranking pages for the same prompts.
The source mix also raises questions about what Google’s systems treat as “good enough” evidence for health summaries at scale. In this dataset, government and academic sources barely showed up compared to media platforms and a broad set of sites that aren’t built around medical-accuracy or evidence-based standards.
That’s relevant beyond SEO. The Guardian reporting showed how high-stakes the failure modes can be, and Google’s pullback on some medical queries suggests the company is willing to disable certain summaries when the scrutiny gets intense.
Looking Ahead
SE Ranking’s findings are limited to German-language queries in Germany and reflect a one-time snapshot, which the authors acknowledge may vary over time, by region, and by query phrasing.
Even with that caveat, the combination of this source analysis and the recent Guardian investigation puts more focus on two open questions. The first is how Google weights authority versus platform-level prominence in health citations. The second is how quickly it can reduce exposure when specific medical query patterns draw criticism.
Featured Image: Yurii_Yarema/Shutterstock