Claude Vulnerabilities Steal Private Information and Send Users to Bad Websites Three linked flaws in Claude.ai, Anthropic's popular AI assistant, let attackers quietly steal private conversation data and send unsuspecting users to harmful websites, all without needing any integrations, tools, or MCP server settings. The vulnerability chain, which is known as Claudy Day, was responsibly reported to Anthropic through its Responsible Disclosure Program. The main prompt injection flaw has since been fixed.
The attack takes advantage of three separate flaws in the claude.com platform and links them together to make a full end-to-end compromise pipeline. Three linked weaknesses Invisible Prompt Injection via URL Parameters: Claude.ai lets users or third parties open a chat session with pre-loaded text by using URL parameters (claude.ai/new?q=...).
Researchers found that some HTML tags could be put inside this parameter and made invisible in the chat input field. However, they would still be fully processed by Claude when the form was submitted. This information comes after Oasis Security's earlier research into OpenClaw, which fits with a pattern that is becoming more and more clear: A single manipulated input can take control of AI agents with broad access, and old identity and access management frameworks weren't made to handle agentic behavior on a large scale.
Follow us on Twitter, LinkedIn, and X for daily cybersecurity updates. Get in touch with us to have your stories featured.












