AI assistants like ChatGPT, Claude, Grok, and Microsoft 365 Copilot can be tricked into displaying planted recommendations in ways that mimic traditional search engine poisoning, according to recent Microsoft research. However, in contrast to conventional SEO manipulation carried out by cybercriminals, the organizations attempting these strategies thus far seem to be respectable companies operating in the marketing, legal, healthcare, and food industries. The implications for businesses could be significant, ranging from impact on brand visibility and unfair competitive advantages to erosion of trust in AI-driven recommendations that customers increasingly rely on for purchases and decision-making.

AI Suggestion The poisoning The strategy is referred to by Microsoft as "AI recommendation poisoning."

According to the company, it operates by "embedding hidden instructions in 'Summarize with AI' buttons that, when clicked, attempt to inject persistence commands into an AI assistant's memory via URL prompt parameters." Because AI agents are currently used in the environments of about 80% of Fortune 500 companies, organizations without safeguards against this type of AI recommendation poisoning face a serious threat. Related: Security Issues Affect the Agentic AI Website "Moltbook" The danger of AI memory poisoning is not new.

Bad actors and others can insert unwanted and malicious instructions into an AI agent's memory in a number of ways, such as by embedding prompts in emails and documents or by using malicious links and social engineering, as Microsoft pointed out in its reports.

The use of "Summarize With AI" buttons that appear harmless to conceal the harmful prompts is novel. ## Changing the Mileage at the Time AI recommendation poisoning can vary in its efficacy and persistence.