Security researchers have revealed a serious multi-stage attack chain that affects Anthropic's Claude.ai platform This article explores redirect vulnerability anthropic. . This shows how attackers can quietly steal private user data and send victims to dangerous sites.

The "Claudy Day" vulnerability sequence shows that AI-driven environments are becoming more dangerous because prompt manipulation can be used as a weapon without the need for outside tools or integrations. The attack only works in a default Claude session, which makes it very dangerous. Researchers found three separate but related flaws that, when linked together, allow full exploitation: an invisible prompt injection issue, a way to steal data, and an open redirect vulnerability on Anthropic's main domain. After responsible disclosure, Anthropic has already fixed the prompt injection flaw.

But work is still going on to fix the other problems, so there is still a chance of exposure depending on how the system is set up. Breakdown of the Attack Chain The first step is to take advantage of an open redirect flaw on the claude.com domain. This flaw is abused by threat actors through Google Ads, which checks links based on trusted hostnames.

Because of this, bad links can look like they are real in paid search results. When people click on these links, they are sent to a carefully made injection URL without any warning. This URL uses Claude's ability to fill in prompts ahead of time. Attackers put harmful code in hidden HTML tags that are part of the URL parameters.

The victim can't see these instructions, but the AI system processes them when the session starts.

The injected prompt tells Claude to look through the user's chat history and find private information once it is run. This could include personal conversations, medical information, financial data, or private business information. Method for stealing data The second stage is all about exfiltration.

The bad prompt has an API key that the attacker controls, which lets Claude upload the stolen data directly to the attacker's Anthropic Files API account. This method works well to get around traditional outbound network monitoring because the data transfer happens within normal platform workflows. This method doesn't depend on endpoint compromise or suspicious binaries like other types of malware do. Instead, it misuses trusted AI features, making it much harder for security teams to find it.

When Claude is connected to other systems, the effect is much worse.

The injected prompt can get to environments that users have connected to Model Context Protocol (MCP) servers, third-party APIs, or internal enterprise resources. In these situations, the AI agent might read private company files, ask questions of internal systems, or talk to connected services without the user's knowledge. This makes the AI an insider threat that is allowed to do what it does.

Researchers say that attackers can make their campaigns even better by using targeted ads to send exploit links to specific industries, organizations, or groups of users. The "Claudy Day" attack chain shows how the threat landscape has changed in a big way. AI platforms are no longer just tools; they are now active agents that can work with sensitive data and systems.

Because of this, prompt injection vulnerabilities should be treated as seriously as other code execution flaws. Researchers at Oasis Security say that AI agents should be controlled in the same way as human users or service accounts, with strict rules about who can see and use them. Companies should check all of their AI integrations, turn off any MCP connections that aren't needed, and make sure that APIs and data sources only give access to the people who need it.

User awareness is also very important. Employees should learn how to spot the dangers of shared links and pre-filled AI prompts, which are becoming more and more common ways for hackers to get into systems. As more people use AI, proactive monitoring, intent validation, and strong access management will be necessary to stop silent data breaches and keep people's trust in smart systems. Make Google your main source for ZeroOwl.