By inserting malicious Copilot instructions into a GitHub issue, bad actors could have taken advantage of a vulnerability in GitHub Codespaces to take over repositories. The artificial intelligence (AI)-driven vulnerability has been codenamed RoguePilot by Orca Security. After responsible disclosure, Microsoft has since patched it.
According to a report by security researcher Roi Nisimi, "Attackers can craft hidden instructions inside a GitHub issue that are automatically processed by GitHub Copilot, giving them silent control of the in-codespaces AI agent." The vulnerability has been characterized as an instance of passive or indirect prompt injection, in which a malicious instruction is inserted into data or content that the large language model (LLM) processes, causing it to generate unexpected outputs or perform arbitrary actions.
According to recent research, models backdoored at the computational graph level—a technique known as ShadowLogic—can further jeopardize agentic AI systems by enabling tool calls to be silently changed without the user's knowledge. HiddenLayer has codenamed this new phenomenon Agentic ShadowLogic. Such a backdoor could be used as a weapon by an attacker to intercept real-time requests to retrieve content from a URL and route them through infrastructure under their control before being sent to the intended location.
According to the AI security firm, "the attacker can map which internal endpoints exist, when they're accessed, and what data flows through them by logging requests over time." "The user receives the expected data without any warnings or errors.












