Claude Code Weaknesses Anthropic's Claude Code has a serious security flaw that shows how threat actors can use repository configuration files to run malicious code and steal private API keys This article explores code security. . The vulnerabilities, identified as CVE-2025-59536 and CVE-2026-21852, demonstrate how the threat landscape for software supply chains is changing significantly as AI tools are incorporated into enterprise development processes.
By weaponizing Claude Code's project-level configuration files, Check Point Research's vulnerabilities enabled attackers to get around built-in trust controls. These files were discovered to operate as an active execution layer, despite the fact that they were generally thought of as innocuous metadata meant to facilitate collaboration. When a developer cloned and opened a malicious repository, built-in automation features like Hooks and Model Context Protocol (MCP) integrations could be manipulated to trigger unauthorized actions.
CVSS v3.1 Score Attack Vector CVE-2025-59536 CVE ID Description Bypassing user consent permits the execution of unauthorized actions prior to approval. 8.8 (Excellent) AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:L CVE-2026-21852 API key theft via traffic redirection before trust validation. 9.1 (Critical) AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:L Check Point revealed that simply launching the tool within an untrusted project directory was enough to initiate silent command execution on the developer’s endpoint, bypassing explicit user consent.
This effectively inverted the security model, shifting control from the user to the repository’s configuration before trust was established. One of the most concerning aspects of the research was the potential for API credential theft. By manipulating repository-controlled settings, attackers could redirect authenticated API traffic, including the full authorization header, to an attacker-controlled server.
Before the user verified trust in the project directory, this exfiltration took place. The Workspaces feature of the platform makes Anthropic API key theft a serious enterprise risk. Multiple API keys can share access to project files stored in the cloud through workspaces.
An attacker could gain unauthorized access to shared resources with just one compromised key, which would allow them to upload, edit, or remove malicious content and create unauthorized API fees. Before making these vulnerabilities public, Anthropic and Check Point Research worked together to address them. Fixes have been made by Anthropic to improve user trust prompts, stop external tool execution without express consent, and stop API communications until trust is established. These results highlight an important development in the AI supply chain threat model.
Repository configuration files can no longer be regarded as passive settings as more of the development process is automated by agentic AI tools. They now have an impact on permissions, networking, and execution, so the risk goes beyond simply opening an untrusted project to include running untrusted code. To handle the fuzziness of trust boundaries brought about by AI-driven automation, organizations need to update their security controls.
For daily cybersecurity updates, check out LinkedIn and X. To have your stories featured, get in touch with us.












