It has been found that Claude Code, Anthropic's AI coding assistant, has a serious security hole This article explores problem ai security. . This flaw lets attackers get around user-set safety controls without anyone knowing, which could lead to data theft and command execution attacks on developers.
Threat actors can publish open-source repositories that look like they are safe but actually have a malicious CLAUDE.md configuration file inside. Attackers can put a hidden malicious payload as the 51st command in these instructions, which are a series of 50 harmless build steps. Anthropic has since fixed this problem with version 2.1.90 of Claude Code, which they call a "parse-fail fallback deny-rule degradation." The fix makes sure that deny rules are always followed, even in long command sequences.
Organizations should: - Give shell access only to the lowest level of privilege that is necessary.
- Check external repositories and configuration files before running them. - Don't use completely automated approval processes in CI/CD pipelines that use AI agents. This event shows how the threat landscape is changing for AI-driven development tools and how improvements can sometimes make things less safe. SetZeroOwl from Google is the best source for you. Before running a build, use SetzeroOwl to check for suspicious connections in outbound network traffic and suspicious files and directories in external repositories. The flaw highlights a bigger problem with AI system security design: finding the right balance between enforcement, performance, and cost is very important. Cybersecurity experts say that until the updates are fully in place, you should treat Claude Code's deny rules as unreliable. Please email the author at jennifer.smith@dailymail.co.uk if you have any questions about this article.


.webp&w=3840&q=75)









