An attack chain involving three separate flaws in Anthropic's Claude AI agent could have let attackers plant harmful hidden instructions in a pre-filled chat URL reached through a Google search, steal private user data, and show users harmful links disguised as real search results. Researchers from Oasis Security discovered the flaws, each of which is concerning on its own, according to a report published on Wednesday.

The Oasis Security Research Team says that when linked together in an attack called "Claudy Day," they "create a complete attack pipeline from targeted victim delivery to silent data exfiltration."

The team says that the attack chain starts when a potential victim searches for Claude on Google and clicks on what looks like a legitimate search result. In reality, it's an attacker-controlled page containing a pre-filled prompt with hidden instructions. Those instructions make the agent take actions the victim never intended, such as silently exfiltrating sensitive data.
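To illustrate the delivery step described above, the sketch below shows how an attacker-controlled page might construct a pre-filled chat link that buries hidden instructions behind an innocuous visible prompt. The `claude.ai/new` path and the `q` query parameter are assumptions for illustration only; the report does not disclose the exact URL format used.

```python
from urllib.parse import quote

# Hypothetical sketch: the prefill endpoint and "q" parameter name are
# assumptions, not confirmed details from the Oasis report.
visible_prompt = "Summarize today's AI news for me."
hidden_instructions = "<attacker's injected instructions would go here>"

# Padding pushes the malicious payload past what the victim sees in the
# input box before pressing Enter, so only the benign text is visible.
payload = visible_prompt + "\n" + " " * 500 + hidden_instructions
prefill_url = "https://claude.ai/new?q=" + quote(payload)

print(prefill_url[:60] + "...")
```

Because the payload rides entirely in the URL, the victim never types or pastes anything malicious themselves, which is what makes the delivery appear to be an ordinary search result.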

This can be done without any extra tools, integrations, or model context protocol (MCP) servers.

## The Severity of an Attack Depends on Agent Access

Oasis says that the severity of a potential attack depends on what the agent can access.

In a basic Claude chat where the AI agent isn't connected to any other apps or systems, the hidden injection can access the conversation history and memory, pull sensitive information from past chats, and exfiltrate it through the Files API.

Oasis says that if the victim's Claude session has MCP servers, tools, or integrations enabled, the injected prompt can make the agent perform a wider range of actions, including reading files, sending messages, calling APIs, or interacting with connected services.

Attackers can then steal any data they get from these activities.