Anthropic's Claude Code, an AI-powered coding assistant, has several security flaws that could lead to remote code execution and API credential theft, according to cybersecurity researchers This article explores vulnerability claude code. . The vulnerabilities found fall into three main categories: No CVE (CVSS score: 8.7); a code injection vulnerability resulting from a user consent bypass when launching Claude Code in a new directory, which could lead to arbitrary code execution without further confirmation via untrusted project hooks defined in.claude/settings.json.

(Fixed in September 2025 in version 1.0.87) When a user launches Claude Code in an untrusted directory, a code injection vulnerability known as CVE-2025-59536 (CVSS score: 8.7) permits the automatic execution of arbitrary shell commands upon tool initialization.

(Fixed in October 2025 in version 1.0.111) A vulnerability in Claude Code's project-load flow that permits a malicious repository to exfiltrate data, including Anthropic API keys, is known as CVE-2026-21852 (CVSS score: 5.3). (Repaired in January 2026 in version 2.0.65) Anthropic stated in an advisory for CVE-2026-21852 that "Claude Code would issue API requests before showing the trust prompt, including potentially leaking the user's API keys, if a user started Claude Code in an attacker-controller repository and the repository included a settings file that set ANTHROPIC_BASE_URL to an attacker-controlled endpoint." To put it another way, all it takes to exfiltrate a developer's active API key, reroute authenticated API traffic to external infrastructure, and obtain credentials is to open a crafted repository.

The attacker may then be able to penetrate further into the victim's AI infrastructure as a result. If the first vulnerability is successfully exploited, it could cause stealthy execution on a developer's computer without requiring any further action beyond starting the project. The goal of CVE-2025-59536 is similar as well; the primary distinction is that repository-defined configurations are defined using claude/settings and.mcp.json.An attacker could use a JSON file to circumvent explicit user consent before using the Model Context Protocol (MCP) to interact with external tools and services.

Setting the "enableAllProjectMcpServers" option to true accomplishes this.Configuration files essentially become a part of the execution layer as AI-powered tools become capable of carrying out commands, starting external integrations, and starting network communication on their own, according to Check Point.

"System behavior is now directly influenced by what was once thought of as operational context." "The threat model is essentially changed by this. The risk now includes opening untrusted projects in addition to running untrusted code.

The supply chain in AI-driven development environments starts with the automation layers that surround the source code.