Noma Security — DockerDash: Prompt Injection in Docker Ask Gordon AI Enables RCE via Image Metadata
AI relevance: DockerDash is a textbook MCP-mediated prompt injection — a malicious Docker image label hijacks an AI assistant reasoning chain, which blindly forwards attacker instructions to the MCP Gateway for tool execution, demonstrating the real-world danger of unvalidated context in agent tool-calling architectures.
- Noma Security disclosed "DockerDash", a critical vulnerability in Ask Gordon, Docker Desktop and CLI built-in AI assistant, patched in Docker Desktop v4.50.0 (November 2025) but details published February 2026.
- Attack vector: An attacker publishes a Docker image with weaponized instructions embedded in Dockerfile
LABELfields. When a victim queries Ask Gordon about the image, Gordon reads the metadata, interprets the malicious instruction, and forwards it to the MCP Gateway. - Zero validation at every layer: The MCP Gateway cannot distinguish between informational metadata and a pre-authorized runnable instruction. It interprets the forwarded request as coming from a trusted source and invokes MCP tools without additional checks.
- Noma classifies this as "Meta-Context Injection" — a failure of contextual trust where unverified metadata becomes executable commands as it propagates through AI reasoning layers.
- Two attack paths:
- RCE (Cloud/CLI): Multi-step command sequences embedded in image labels are executed by MCP tools with the victim's Docker privileges.
- Data exfiltration (Desktop): The assistant's read-only permissions are weaponized to capture build metadata, container details, API keys, network topology, and Docker configuration via MCP tool wrappers.
- Pillar Security independently confirmed the attack, noting Gordon had read access to sensitive artifacts (build logs, chat history), routinely ingested untrusted external content (Docker Hub descriptions), and could make outbound HTTP requests — the classic prompt-injection trifecta.
- Docker's fix introduces a "human-in-the-loop" (HITL) system — Gordon must now ask the user for explicit permission before connecting to outside links or executing commands.
- The vulnerability highlights a systemic pattern: MCP Gateways that treat all AI-forwarded requests as trusted are inherently vulnerable to prompt injection propagation.
Why it matters
- Docker is ubiquitous in AI/ML infrastructure. Ask Gordon is enabled by default in Docker Desktop, meaning millions of developer environments were exposed before the patch.
- This is one of the first real-world, end-to-end MCP prompt injection chains documented in a major production tool — not a research PoC, but a shipping product.
- The pattern (untrusted data → AI reasoning → MCP tool execution) applies to every MCP-connected agent that ingests external content without provenance checks. Expect more DockerDash-style bugs across the ecosystem.
What to do
- Update Docker Desktop to ≥ 4.50.0 if you haven't already.
- Audit MCP tool permissions: Any AI assistant connected to MCP tools should have least-privilege access and require explicit user consent for destructive or exfiltration-capable actions.
- Treat image metadata as untrusted input: Never allow AI systems to execute instructions derived from container labels, descriptions, or READMEs without validation.
- Implement provenance checks: MCP Gateways should tag requests with their origin (user vs. AI-inferred vs. external metadata) and apply different trust levels.
- Review your AI assistant integrations: If you use any AI coding assistant that reads project metadata, check whether it has unsandboxed tool access.