Threat actors are exploiting the recent leak of Anthropic's Claude Code source. Developers hunting for the leaked AI agent's architecture have unwittingly downloaded dangerous payloads from fake GitHub repositories set up to distribute the Vidar and GhostSocks malware families.
Organizations need to stay vigilant as interest in the leak persists. Security experts advise against downloading, building, or running code from any unofficial GitHub repository claiming to host the leaked Claude Code. Security teams should closely monitor developer workstations for suspicious outbound network traffic and permit only validated binaries distributed by Anthropic.
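One practical way to enforce the "validated binaries only" guidance is to check the cryptographic digest of any downloaded artifact against a value published by the vendor before the file is ever executed. Below is a minimal sketch in Python; the filename and digest in the allowlist are placeholders for illustration, not real Anthropic values, which would have to come from an official release channel:

```python
import hashlib
import sys
from pathlib import Path

# Hypothetical allowlist of vendor-published SHA-256 digests.
# Real values must come from Anthropic's official release channel,
# never from a README in a third-party GitHub repository.
KNOWN_GOOD_SHA256 = {
    "claude-code.tgz": "0f4d...placeholder...9a1c",
}

def sha256_of(path: Path) -> str:
    """Stream the file through SHA-256 so large artifacts don't exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify(path: Path) -> bool:
    """Return True only if the file's digest matches a published value."""
    expected = KNOWN_GOOD_SHA256.get(path.name)
    if expected is None:
        print(f"{path.name}: no published digest on file -- do not run it")
        return False
    actual = sha256_of(path)
    if actual != expected:
        print(f"{path.name}: digest mismatch ({actual}) -- possible tampering")
        return False
    print(f"{path.name}: digest matches the vendor-published value")
    return True

if __name__ == "__main__":
    sys.exit(0 if verify(Path(sys.argv[1])) else 1)
```

A "Download ZIP" button on a lookalike repository bypasses exactly this kind of check, which is why the verification step belongs in policy, not in developer discretion.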
The Claude Code Leak
On March 31, 2026, Anthropic accidentally made public the full source code of Claude Code, its terminal-based AI coding assistant. The leak occurred when a 59.8 MB JavaScript source map file was inadvertently published in the public npm package @anthropic-ai/claude-code. The file contained more than 513,000 lines of unobfuscated TypeScript.
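Source maps are the mechanism that turned a packaging mistake into a full source disclosure: a .map file is plain JSON whose optional sourcesContent field can embed the complete original text of every file it maps. A minimal sketch, assuming a local copy of such a map file (the filename below is a placeholder, not the actual artifact name), shows how trivially the original TypeScript can be recovered:

```python
import json
from pathlib import Path

# Hypothetical filename; the leaked map shipped inside the npm package itself.
SOURCE_MAP = Path("cli.js.map")

# A source map is plain JSON: "sources" lists the original file paths and
# "sourcesContent" (when present) embeds each file's full original text.
source_map = json.loads(SOURCE_MAP.read_text(encoding="utf-8"))

sources = source_map.get("sources", [])
contents = source_map.get("sourcesContent") or []

total_lines = 0
for name, text in zip(sources, contents):
    if text is None:  # entries may be null when content was omitted
        continue
    lines = text.count("\n") + 1
    total_lines += lines
    print(f"{name}: {lines} lines of original source recoverable")

print(f"total: {total_lines} lines of unobfuscated source in one file")
```

Publishing a map with sourcesContent populated to a public registry is, in effect, publishing the repository itself.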
The file exposed the agent's internal orchestration, execution layer, and hidden feature flags. Researchers discovered persistent memory systems, autonomous background daemons, and inter-process communication mechanisms. More than 20 unimplemented feature flags gave attackers unprecedented visibility into Anthropic's internal API design and security telemetry. The same GitHub repository hosting the Claude Code leak even offers a "Download ZIP" button.
The incident coincided with another npm supply chain attack, making routine package updates considerably riskier for developers.