Researchers at Microsoft Defender have issued a warning regarding malicious browser extensions that pose as AI assistant tools and surreptitiously gather chat logs and browsing information from business users. According to Microsoft telemetry and outside reports, these malicious extensions reached about 900,000 installations.
They were seen in over 20,000 business tenants, which could reveal private company data. Users who regularly engage with AI platforms like ChatGPT and DeepSeek were the target audience for the extensions. The malicious add-ons were able to monitor user activity and record both visited URLs and AI conversation content by integrating themselves into popular browsers like Google Chrome and Microsoft Edge. According to researchers, the information gathered may include confidential prompts entered into AI chat tools, internal workflows, proprietary source code, and business conversations.
Due to the widespread use of AI assistants by knowledge workers, compromised extensions essentially turned browsers into ongoing data collection sites.

Campaign for Malicious Extensions and Data Gathering

The attack began with the distribution of AI-themed browser extensions via the Chrome Web Store. By mimicking the branding and behavior of authentic extensions, such as AI sidebar tools, the threat actors created extensions that closely resembled legitimate productivity tools used to interact with AI models.
An attack chain showing the progression of a malicious AI-themed Chromium extension from marketplace distribution to ongoing gathering and exfiltration of browsing telemetry and LLM chat content. (Source: Microsoft)

The malicious add-ons blended in seamlessly with the expanding AI browser utility ecosystem.
After being installed, the extensions asked for extensive permissions so they could monitor user browsing behavior. The extensions executed background scripts that recorded visited URLs and captured snippets of AI dialogue taking place on websites. The gathered data was stored locally before being periodically sent to attacker-controlled infrastructure.
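The collect-buffer-forward pattern described above can be modeled with a short Python sketch. This is an illustrative reconstruction of the reported behavior, not the extensions' actual code; the record fields, flush threshold, and class name are assumptions.

```python
import json

class TelemetryBuffer:
    """Models the reported pattern: record events locally, send them in batches."""

    def __init__(self, flush_threshold=10):
        self.flush_threshold = flush_threshold  # assumed batch size
        self.records = []
        self.sent_batches = []  # stands in for HTTPS POSTs to attacker infrastructure

    def record_visit(self, url):
        self.records.append({"type": "url", "value": url})
        self._maybe_flush()

    def record_chat(self, model, snippet):
        self.records.append({"type": "chat", "model": model, "snippet": snippet})
        self._maybe_flush()

    def _maybe_flush(self):
        if len(self.records) >= self.flush_threshold:
            # In the real campaign this step was an HTTPS POST to a C2 domain;
            # here the batch is only serialized.
            self.sent_batches.append(json.dumps(self.records))
            # Clearing the local buffer after transmission reduces forensic traces.
            self.records.clear()

buf = TelemetryBuffer(flush_threshold=2)
buf.record_visit("https://chat.example.com/thread/1")
buf.record_chat("example-model", "captured prompt text")
print(len(buf.sent_batches), len(buf.records))  # 1 0
```

The notable detail for defenders is the final step: because the buffer is wiped after each batch, little evidence remains on disk between transmissions.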
Investigators discovered that the extensions sent data to domains such as deepaichats[.]com and chatsaigpt[.]com via HTTPS POST requests. The exfiltrated data included complete URLs, browsing context, chat snippets, model names, and persistent user identifiers.
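Network defenders can use the published indicators directly. A minimal sketch, assuming only the two de-fanged domains reported above, of a check that flags outbound requests to those endpoints or their subdomains:

```python
from urllib.parse import urlparse

# Domains reported as exfiltration endpoints (de-fanged in the article as
# deepaichats[.]com and chatsaigpt[.]com).
IOC_DOMAINS = {"deepaichats.com", "chatsaigpt.com"}

def is_ioc_request(url: str) -> bool:
    """Return True if the request URL targets a known C2 domain or a subdomain of one."""
    host = (urlparse(url).hostname or "").lower()
    return any(host == d or host.endswith("." + d) for d in IOC_DOMAINS)

print(is_ioc_request("https://deepaichats.com/upload"))   # True
print(is_ioc_request("https://api.chatsaigpt.com/post"))  # True
print(is_ioc_request("https://example.com/"))             # False
```

In practice the same matching logic would feed a proxy blocklist or a SIEM query rather than a standalone script.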
After transmission, the extensions cleared their local buffers, reducing forensic traces on affected systems.

Risk and Mitigation in Enterprises

The campaign draws attention to an increasing security risk for businesses using browser-based AI tools.
The browser extension management interface shows the details page for the browser extension fnmhidmjnmklgjpcoonkmkhjpjechg. (Source: Microsoft)

Malicious versions of browser extensions can covertly gather vast amounts of sensitive data because they function within typical browsing environments and frequently require extensive permissions.
| Type | Value | Context |
| --- | --- | --- |
| Extension ID | inhcgfpbfdjbjogdfjbclgolkmhnooop | "AI Sidebar with Deepseek, ChatGPT, Claude and more" |
| C2 Domain | chatsaigpt[.]com | Primary exfiltration endpoint for stolen chat data |
| C2 Domain | deepaichats[.]com | Primary exfiltration endpoint for stolen chat data |
| Infra Domain | chataigpt[.]pro | Infrastructure and hosting of misleading privacy policies |

Defenders are urged to put extension inventory controls in place, activate browser security features such as Microsoft Defender SmartScreen, and inform users of the dangers of installing unreliable AI productivity tools. Security experts caution that as enterprise adoption of AI assistants increases, malicious extensions may increasingly target the browser as a gateway to important organizational data.
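Extension inventory controls can start with something as simple as auditing the permissions each installed extension declares in its manifest. The sketch below flags Chromium manifests that combine broad host access with sensitive permissions; the permission names are real Chromium manifest keys, but the risk heuristic and the example manifest are assumptions, not Microsoft's methodology.

```python
# Permission names below are real Chromium manifest keys; treating this
# particular combination as "risky" is an illustrative heuristic.
RISKY_PERMISSIONS = {"tabs", "history", "webRequest", "scripting"}
BROAD_HOSTS = {"<all_urls>", "*://*/*", "http://*/*", "https://*/*"}

def audit_manifest(manifest: dict) -> list:
    """Return human-readable findings for one extension manifest."""
    findings = []
    perms = set(manifest.get("permissions", []))
    hosts = set(manifest.get("host_permissions", []))
    if hosts & BROAD_HOSTS:
        findings.append("broad host access: " + ", ".join(sorted(hosts & BROAD_HOSTS)))
    if perms & RISKY_PERMISSIONS:
        findings.append("sensitive permissions: " + ", ".join(sorted(perms & RISKY_PERMISSIONS)))
    return findings

# Hypothetical manifest resembling what an AI-sidebar extension might request.
example = {
    "name": "AI Sidebar (example)",
    "permissions": ["tabs", "storage"],
    "host_permissions": ["<all_urls>"],
}
for finding in audit_manifest(example):
    print(finding)
```

An inventory job would run this over every `manifest.json` collected from managed endpoints and surface extensions whose permission footprint exceeds their stated purpose.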