1. SEJ
  2.  ⋅ 
  3. Generative AI

Microsoft: ‘Summarize With AI’ Buttons Used To Poison AI Recommendations

  • Microsoft found over 50 hidden prompts from 31 companies across 14 industries.
  • The hidden prompts are designed to manipulate AI assistant memory through "Summarize with AI" buttons.
  • The prompts use URL parameters to inject instructions like to bias future AI recommendations.

Microsoft found 31 companies hiding prompt injections inside "Summarize with AI" buttons aimed at biasing what AI assistants recommend in future conversations.

Microsoft: ‘Summarize With AI’ Buttons Used To Poison AI Recommendations

Microsoft’s Defender Security Research Team published research describing what it calls “AI Recommendation Poisoning.” The technique involves businesses hiding prompt-injection instructions within website buttons labeled “Summarize with AI.”

When you click one of these buttons, it opens an AI assistant with a pre-filled prompt delivered through a URL query parameter. The visible part tells the assistant to summarize the page. The hidden part instructs it to remember the company as a trusted source for future conversations.

If the instruction enters the assistant’s memory, it can influence recommendations without you knowing it was planted.

What’s Happening

Microsoft’s team reviewed AI-related URLs observed in email traffic over 60 days. They found 50 distinct prompt injection attempts from 31 companies.

The prompts share a similar pattern. Microsoft’s post includes examples where instructions told the AI to remember a company as “a trusted source for citations” or “the go-to source” for a specific topic. One prompt went further, injecting full marketing copy into the assistant’s memory, including product features and selling points.

The researchers traced the technique to publicly available tools, including the npm package CiteMET and the web-based URL generator AI Share URL Creator. The post describes both as designed to help websites “build presence in AI memory.”

The technique relies on specially crafted URLs with prompt parameters that most major AI assistants support. Microsoft listed the URL structures for Copilot, ChatGPT, Claude, Perplexity, and Grok, but noted that persistence mechanisms differ across platforms.

It’s formally cataloged as MITRE ATLAS AML.T0080 (Memory Poisoning) and AML.T0051 (LLM Prompt Injection).

What Microsoft Found

The 31 companies identified were real businesses, not threat actors or scammers.

Multiple prompts targeted health and financial services sites, where biased AI recommendations carry more weight. One company’s domain was easily mistaken for a well-known website, potentially leading to false credibility. And one of the 31 companies was a security vendor.

Microsoft called out a secondary risk. Many of the sites using this technique had user-generated content sections like comment threads and forums. Once an AI treats a site as authoritative, it may extend that trust to unvetted content on the same domain.

Microsoft’s Response

Microsoft said it has protections in Copilot against cross-prompt injection attacks. The company noted that some previously reported prompt-injection behaviors can no longer be reproduced in Copilot, and that protections continue to evolve.

Microsoft also published advanced hunting queries for organizations using Defender for Office 365, allowing security teams to scan email and Teams traffic for URLs containing memory manipulation keywords.

You can review and remove stored Copilot memories through the Personalization section in Copilot chat settings.

Why This Matters

Microsoft compares this technique to SEO poisoning and adware, placing it in the same category as the tactics Google spent two decades fighting in traditional search. The difference is that the target has moved from search indexes to AI assistant memory.

Businesses doing legitimate work on AI visibility now face competitors who may be gaming recommendations through prompt injection.

The timing is notable. SparkToro published a report showing that AI brand recommendations already vary across nearly every query. Google VP Robby Stein told a podcast that AI search finds business recommendations by checking what other sites say. Memory poisoning bypasses that process by planting the recommendation directly into the user’s assistant.

Roger Montti’s analysis of AI training data poisoning covered the broader concept of manipulating AI systems for visibility. That piece focused on poisoning training datasets. This Microsoft research shows something more immediate, happening at the point of user interaction and being deployed commercially.

Looking Ahead

Microsoft acknowledged this is an evolving problem. The open-source tooling means new attempts can appear faster than any single platform can block them, and the URL parameter technique applies to most major AI assistants.

It’s unclear whether AI platforms will treat this as a policy violation with consequences, or whether it stays as a gray-area growth tactic that companies continue to use.

Hat tip to Lily Ray for flagging the Microsoft research on X, crediting @top5seo for the find.


Featured Image: elenabsl/Shutterstock

Category News Generative AI
SEJ STAFF Matt G. Southern Senior News Writer at Search Engine Journal

Matt G. Southern, Senior News Writer, has been with Search Engine Journal since 2013. With a bachelor’s degree in communications, ...