The artificial intelligence (AI) assistant Ask Gordon, which is integrated into Docker Desktop and the Docker Command-Line Interface (CLI), has a security flaw that has been patched, according to cybersecurity researchers. This vulnerability could be used to run code and steal confidential information. Noma Labs, a cybersecurity company, has named the critical vulnerability DockerDash.
In November 2025, Docker released version 4.50.0 to address it. "Taking advantage of current agents and MCP Gateway architecture, every stage occurs with zero validation." If the vulnerability is successfully exploited, it could lead to high-impact data exfiltration for desktop applications or critical-impact remote code execution for cloud and CLI systems.
According to Noma Security, the issue arises from the AI assistant's treatment of unverified metadata as executable commands, which permits it to spread through various layers without any validation and enable an attacker to get around security measures. As a result, tool execution is made possible by a straightforward AI query. The problem is a lack of contextual trust, as MCP serves as a link between a large language model (LLM) and the local environment.
The issue has been described as a Meta-Context Injection case. According to Levi, "MCP Gateway cannot differentiate between a pre-authorized, runnable internal instruction and informational metadata (like a standard Docker LABEL)."
"An attacker can take control of the AI's reasoning process by inserting malicious instructions into these metadata fields." A critical trust boundary violation in Ask Gordon's container metadata parsing could be exploited by a threat actor in a hypothetical attack scenario. The attacker creates a malicious Docker image with embedded instructions in Dockerfile LABEL fields in order to achieve this.
Even though the metadata fields appear harmless, when Ask Gordon AI processes them, they become injection vectors.
The attacker releases a Docker image with weaponized LABEL instructions in the Dockerfile. This is the code execution attack chain. When a victim asks Using Ask Gordon AI's incapacity to distinguish between malicious instructions embedded in the image and valid metadata descriptions, Gordon reads the image's metadata, including all LABEL fields.
Request that Gordon send the parsed instructions to the MCP gateway, which is a layer of middleware that sits in between MCP servers and AI agents.
Without any further verification, MCP Gateway invokes the designated MCP tools after interpreting it as a typical request from a reliable source. Code execution is accomplished by the MCP tool using the victim's Docker privileges to run the command. The same prompt injection vulnerability is weaponized in the data exfiltration vulnerability, which targets Ask Gordon's Docker Desktop implementation to use MCP tools to obtain private information about the victim's environment by exploiting the assistant's read-only permissions.
Details about installed tools, container information, Docker configuration, mounted directories, and network topology can all be included in the collected data.
It's important to note that Ask Gordon version 4.50.0 also fixes a prompt injection vulnerability found by Pillar Security that might have given hackers the ability to take control of the assistant and steal confidential information by altering the Docker Hub repository metadata with malicious commands. According to Levi, "the DockerDash vulnerability underscores your need to treat AI Supply Chain Risk as a current core threat." It demonstrates how malicious payloads that readily alter AI's execution path can be concealed using your trusted input sources.
Zero-trust validation must be applied to all contextual data supplied to the AI model in order to mitigate this new class of attacks.












.webp%3Fw%3D1068%26resize%3D1068%2C0%26ssl%3D1&w=3840&q=75)