RSAC 2026 CONFERENCE – San Francisco – A lot of people think that artificial intelligence will change the game for cybersecurity, but one researcher thinks that these new tools are actually making modern defenses weaker This article explores ai agents backdoors. . Oded Vanunu, the chief technologist at Check Point Software, talked about what he calls a "new era" of client-side attacks made possible by AI coding assistants during a session on Tuesday at the RSAC 2026 Conference in San Francisco.
The session, "When AI Agents Become Backdoors: The New Era of Client-Side Threat," showed that popular tools like Anthropic's Claude Code, OpenAI's Codex, and Google's Gemini have a number of security holes.
Vanunu tells ZeroOwl that he and his research team spent the last year looking into AI development tools and quickly realized that they were putting a lot of the cybersecurity industry's progress at risk. He says that the industry has come up with "amazing platforms and technologies" over the past ten years to better protect endpoints and move application execution to the cloud. Vanunu says that attackers can use the flaw to turn Claude Code Hooks, which are user-defined shell commands that run automatically, into weapons and get around endpoint detection and response (EDR) products.
CISOs Talk About The Role of People in AI-Powered Security A threat actor could also create a way to get around the model context protocol (MCP) consent.
Claude needs the user's permission for MCP server plug-ins to run, but Claude Code automatically reads configurations. This means that bad MCP servers can run commands in those files before the trust dialog appears. The team found a code injection flaw, CVE-2025-61260 (CVSS score pending), in OpenAI Codex CLI that could be used in similar attacks.
A hacker could use a project .env file to send the CLI to a bad local .toml configuration file.












