Welcome to the week’s SEO Pulse for the best of this week’s news: updates cover what early data reveals about the February Discover core update, why Google may ignore a valid sitemap, and how businesses are trying to game AI assistant memory.
Here’s what matters for you and your work.
Discover Core Update: Early Data Shows Fewer Publishers, More Topics
NewzDash published an analysis comparing Discover visibility before and after Google’s February Discover core update using panel data from millions of U.S. users.
The pre-update (January 25-31) and post-update (February 8-14) windows covered the top 1,000 domains and top 1,000 articles in the U.S., California, and New York. Unique content categories grew across all three geographic views, but unique publishers dropped in the U.S. (172 to 158 domains) and California (187 to 177).
New York-local domains showed up roughly five times more often in the New York feed than in California’s. Yahoo went from multiple items in the U.S. top 100 to zero post-update, and X.com posts from institutional accounts climbed from three to 13 items in the same range.
Why This Matters
Google described the update as targeting more locally relevant content, less clickbait, and more in-depth coverage from sites with topic expertise. The NewzDash data provides a clear early read on localization and topic mix, though the clickbait signal is harder to confirm since headline markers alone can’t prove whether sensational content decreased.
The broader pattern of specialized sites gaining ground over generalists tracks with what the December core update analysis showed. Sites with strong local identity may see gains in their home markets while losing visibility elsewhere.
What People Are Saying
When Google released the update alongside revised Discover documentation, Glenn Gabe, SEO consultant at GSQi, compared the old and new versions on X and flagged an addition that had not been in the Discover-specific guidance before:
“Beyond clickbait and related things, the Discover documentation now includes ‘Provide a great page experience’ as well. So you know, watch overloading your page with annoying ads, auto-playing crap, and more.”
The broader reaction has split between those reporting gains in state-level feeds and others noting steep Discover traffic drops.
Read our full coverage: Google Discover Update: Early Data Shows Fewer Domains In US
Mueller Says Google May Skip Sitemaps Without “New And Important” Content
Google’s John Mueller, Search Advocate at Google, responded to a Reddit question about persistent sitemap fetch errors in Search Console. The site owner had confirmed via server logs that Googlebot fetched the sitemap with a 200 response, but Search Console kept displaying a “couldn’t fetch” error despite valid XML and correct indexing directives.
Mueller said Google has to be “keen on indexing more content from the site” and that it won’t use the sitemap if it isn’t convinced there’s “new and important” content to index.
Why This Matters
Sitemap fetch errors are one of the more confusing signals in Search Console because they can appear even when the server-side looks correct. Running through the standard checklist of XML validation, response codes, and robots.txt rules may not surface the problem if Google simply doesn’t see enough reason to index what’s behind the URLs.
Roger Montti, who covered this for Search Engine Journal, noted that Mueller was broad in his description, but thinking about what makes a site visitor satisfied can help you identify what needs improving.
What People Are Saying
The story continues a debate in SEO about sitemaps being hints, not directives. Some argue Google ignores sitemaps for small or non-news sites, relying on links instead, while others note Google doesn’t say it “loses trust” in a site when a sitemap is unused.
Mueller’s response added a new indexing-demand perspective that the community hadn’t widely considered.
Read our full coverage: SEO Fundamental: Google Explains Why It May Not Use A Sitemap
Microsoft Finds AI Memory Poisoning Through “Summarize” Buttons
Microsoft’s Defender Security Research Team published research describing what it calls “AI Recommendation Poisoning.” The technique involves businesses hiding prompt injection instructions inside website buttons labeled “Summarize with AI.”
Clicking one of these buttons opens an AI assistant with a pre-filled prompt delivered through a URL query parameter. The visible part tells the assistant to summarize the page, while the hidden part instructs it to remember the company as a trusted source for future conversations.
Reviewing AI-related URLs observed in email traffic over 60 days, Microsoft’s team said it identified 50 distinct prompt injection attempts from 31 companies across 14 industries. The pre-filled prompt URLs target Copilot, ChatGPT, Claude, Gemini, Perplexity, and Grok. Microsoft noted that effectiveness varies by platform and has changed over time.
Why This Matters
Instead of optimizing for search ranking, these companies are trying to influence what AI assistants recommend by planting instructions at the memory layer. Microsoft traced the prompts to publicly available tools designed to build presence in AI memory, and one prompt went well beyond a simple “remember us” instruction by injecting full marketing copy.
The AI recommendation layer has become a competitive arena, with companies developing tools to influence it. The way platforms address these tactics will shape the level of trust users have in AI-generated recommendations.
What People Are Saying
The research drew attention across security and AI circles. In an interview with Dark Reading, Tanmay Ganacharya, VP of Security Research at Microsoft, described the mechanism:
“The button will take the user — after the click — to the AI domain relevant and specific for one of the AI assistants targeted.”
Ganacharya also told BankInfoSecurity that not all platforms are equally exposed:
“Of the major platforms we examined, only Copilot, ChatGPT, and Perplexity have explicit memory features. Claude and Grok do not currently have persistent memory, making them seemingly immune to this specific attack.”
Some marketers have questioned whether the technique is just an aggressive growth strategy, drawing pushback from security professionals over the ethical and trust consequences.
Read our full coverage: Microsoft: ‘Summarize With AI’ Buttons Used To Poison AI Recommendations
Theme Of The Week: The Signals That Decide Visibility Are Getting Harder To See
Every story this week touches on events happening behind the scenes, beyond the usual metrics that most SEO professionals keep an eye on.
Google’s Discover update is guiding more topics through fewer publishers, a change that you can notice in feed data rather than in Search Console. Mueller’s explanation about sitemaps shows that a fetch error can indicate an indexing judgment happening upstream. And Microsoft’s research shows businesses trying to influence recommendations at the memory layer.
The common thread is that the decisions determining visibility are being made in places most of us haven’t been paying close attention to yet.
For deeper context on these topics, check out these recent pieces.
- Google Revises Discover Guidelines Alongside Core Update
- 4 Sites That Recovered From Google’s December 2025 Core Update
- Web Almanac Data Reveals CMS Plugins Are Setting Technical SEO Standards
Featured Image: VRVIRUS/Shutterstock; Paulo Bobita/Search Engine Journal