Welcome to this week’s Pulse: ranking shifts from December’s core update, platform responses to AI quality criticism, and a dispute over the accuracy of AI-generated health information.
Early analysis of Google’s December core update suggests specialized sites gained visibility in several shared examples. Microsoft’s CEO and a Google engineer reframed criticism of AI quality. The Guardian reported concerns about health-related AI Overviews, and Google pushed back on aspects of the testing.
Here’s what matters for you and your work.
December Core Update Favors Specialists Over Generalists
Early analysis of Google’s December core update suggests specialized sites gained visibility in examples shared across publishing, ecommerce, and SaaS.
Key facts: Aleyda Solís’s analysis found that sites with narrower, category-specific strength appear to be gaining ground on “best of” and mid-funnel product terms.
In examples shared after the December 11-29 rollout, some publisher sites appeared to lose visibility on broader, top-of-funnel queries, while ecommerce and SaaS brands with direct category expertise appeared to outperform broader review sites and affiliate aggregators.
Why SEOs Should Pay Attention
This update reinforces a trend of generalist sites facing ranking pressure, especially on queries that carry commercial intent or require specific domain knowledge. Sites covering multiple categories face stronger competition from dedicated category sites.
Google says improvements can take time to show up: some changes take effect within days, but it can take several months for its systems to confirm longer-term improvement. Google also says it makes smaller core updates that it doesn’t typically announce.
In the examples shared so far, specialization appears to outperform breadth when queries have specific intent.
What SEO Professionals Are Saying
Luke R., founder at Adexa.io, commented on LinkedIn:
“Specialists rise when search stops guessing and starts serving intent. These shifts reward brands that live one problem, one buyer.”
Ayesha Asif, social media manager and content strategist, wrote:
“Generalist pages used to win on authority, but now depth matters more than domain size.”
Thanos Lappas, founder at Datafunc, added:
“This feels like the beginning of a long-anticipated transition in how search evaluates relevance and expertise.”
In that thread, several commenters argued the update favors deep, category-specific content over broad coverage, and that domain authority mattered less than focused expertise in the examples being discussed.
Read our full coverage: December Core Update: More Brands Win “Best Of” Queries
Guardian Investigation Claims AI Overview Health Inaccuracies
The Guardian reported that health organizations and experts reviewed examples of AI Overviews for medical queries and raised concerns about inaccuracies. A Google spokesperson said many examples were “incomplete screenshots.” The spokesperson also said the vast majority of AI Overviews are factual and helpful, and that Google continuously makes quality improvements.
Key facts: The Guardian said it tested health queries and shared AI Overview responses with health groups and experts for review. A Google spokesperson said many examples were “incomplete screenshots,” but added that the results linked “to well-known, reputable sources” and recommended seeking out expert advice.
Why SEOs Should Pay Attention
AI Overviews can appear at the top of results, and when the topic is health, errors carry more weight. The Guardian’s reporting also highlights a practical problem: one charity leader told The Guardian that the AI summary changed when the same search was repeated, pulling from different sources, which makes verification harder.
Publishers have spent years investing in documented medical expertise to meet Google’s expectations around health content. This investigation puts the same spotlight on Google’s own summaries when they appear at the top of results.
What Health Organizations Are Saying
Sophie Randall, director of the Patient Information Forum, told The Guardian:
“Google’s AI Overviews can put inaccurate health information at the top of online searches, presenting a risk to people’s health.”
Anna Jewell, director of support, research, and influencing at Pancreatic Cancer UK, stated:
“If someone followed what the search result told them, they might not take in enough calories … and be unable to tolerate either chemotherapy or potentially life-saving surgery.”
The reactions reveal two concerns. First, even when AI Overviews link to trusted sources, the summary itself can undermine that trust by presenting confident but incorrect guidance. Second, Google’s response was seen by some as addressing individual examples without explaining how these errors happen or how often they occur.
Read our full coverage: Guardian Investigation: AI Overviews Health Accuracy
Microsoft CEO And Google Engineer Reframe AI Quality Criticism
Within the same week, Microsoft CEO Satya Nadella published a blog post asking the industry to “get beyond the arguments of slop vs. sophistication,” while Google Principal Engineer Jaana Dogan posted that people are “only anti new tech when they are burned out from trying new tech.”
Key facts: Nadella’s blog post characterized AI as “cognitive amplifier tools” and called for “a new equilibrium” that accounts for humans having these tools. Dogan’s X post framed anti-AI sentiment as burnout from trying new technology. In replies, some people pointed to forced integrations, costs, privacy concerns, and tools that feel less reliable in day-to-day workflows. The timing follows Merriam-Webster naming “slop” its 2025 Word of the Year.
Why SEOs Should Pay Attention
Some readers may interpret these statements as an attempt to move the conversation away from output quality and toward user expectations. When people are urged to move past “slop vs. sophistication,” or when criticism is described as burnout, the conversation can drift away from accuracy, reliability, and the economic impact on publishers.
The practical concern is how these companies respond to user feedback versus how they frame criticism. Watch for more messaging that frames AI criticism as a user problem rather than a product and economics problem.
What Industry Observers Are Saying
Jez Corden, managing editor at Windows Central, wrote that Nadella’s framing of AI as a “scaffolding for human potential” felt “either naively utopic, or at worse, wilfully dishonest.”
Tom Warren, senior editor at The Verge, wrote on Bluesky that Nadella wants everyone to move beyond the arguments about AI slop, calling 2026 a “pivotal year for AI.”
The commentary reveals a gap between executive messaging about AI as a transformative technology and a user experience of AI products that can feel inconsistent or forced. Some reactions suggested that Nadella’s call to move past the “slop” debate drew more attention to the term.
Read our full coverage: Microsoft CEO, Google Engineer Deflect AI Quality Complaints
Theme Of The Week: Competing Standards
Each story this week reveals a tension between the quality standards applied to publishers and those applied to platforms’ own AI systems.
The December core update appears to put more weight on category expertise than broad coverage in the examples highlighted. The Guardian investigation questions whether AI Overviews meet the accuracy bar Google sets for health content. Nadella’s messaging attempts to reframe quality concerns as user adjustment problems rather than product issues.
Together, these stories show a gap between the standards websites are held to and the way platforms defend their own AI summaries when accuracy is questioned.
Featured Image: Accogliente Design/Shutterstock