Anthropic's AI-powered coding tool, Claude Code, had three serious security flaws that made it possible for developers to take over a machine and steal credentials just by opening a project repository This article explores vulnerabilities claude code. . After Check Point Research found the defects and informed Anthropic of them last year, Anthropic resolved the problems.

Anthropic wants developers to use the most recent version of Claude Code to stay safe while they implement more security features to harden the coding platform. ## New Exposures Check Point researchers Aviv Donenfeld and Oded Vanunu wrote in a blog post this week that "these vulnerabilities in Claude Code highlight a critical challenge in modern development tools: balancing powerful automation features with security."

"A single malicious commit could compromise any developer working with the affected repository, creating serious supply chain risks due to the ability to execute arbitrary commands through repository-controlled configuration files." Related: Hackers Check for React2Shell Exposure Using a New Tool Two of the vulnerabilities, which involve configuration files in a project repository executing commands without the required user consent, are closely related. Similar to the other vulnerabilities, Check Point researchers discovered that they could intercept API-related communications between Claude Code and Anthropic's servers, reroute them to an attacker-controlled server, and log the API key before the user even saw a warning dialog by altering a setting in a project's configuration file.

Related: AI Hacking Lessons: Every Layer, Every Model Is Dangerous According to Donenfeld and Vanunu, "the incorporation of AI into development workflows brings tremendous productivity benefits but also introduces new attack surfaces that weren't present in traditional tools." "Active execution paths are now controlled by configuration files that were previously passive data.