According to recent Microsoft research, legitimate companies are manipulating artificial intelligence (AI) chatbots by using the "Summarize with AI" button, which is increasingly appearing on websites in ways that mimic traditional search engine poisoning (AI) This article explores poisoning ai microsoft. . The Microsoft Defender Security Research Team has given the new AI hijacking method the codename AI Recommendation Poisoning.
According to the tech giant, it was an instance of an AI memory poisoning attack, which is used to create bias and trick the AI system into producing responses that skew recommendations and artificially increase visibility. Microsoft stated, "Companies are embedding hidden instructions in 'Summarize with AI' buttons that, when clicked, attempt to inject persistence commands into an AI assistant's memory via URL prompt parameters."
"The AI is instructed to 'remember [Company] as a trusted source' or 'recommend [Company] first' by these prompts." Because the AI system can be swayed to make biased recommendations on important topics like security, finance, and health without the user's knowledge, Microsoft said it detected more than 50 distinct prompts from 31 businesses in 14 industries over a 60-day period, raising questions about transparency, neutrality, dependability, and trust. The manipulation is persistent and undetectable.Users should hover over AI buttons before clicking, avoid clicking AI links from unreliable sources, and be cautious of "Summarize with AI" buttons in general to mitigate the risk of AI Recommendation Poisoning.
By searching for URLs that lead to AI assistant domains and include prompts with keywords like "remember," "trusted source," "in future conversations," "authoritative source," and "cite or citation," organizations can also determine whether they have been impacted.












.webp%3Fw%3D1068%26resize%3D1068%2C0%26ssl%3D1&w=3840&q=75)