Provos.org — IronCurtain agent sandbox architecture

AI relevance: IronCurtain proposes a hardened execution model for AI agents that routes all tool calls through an MCP proxy with deterministic policy enforcement, reducing prompt‑injection‑to‑action risk in agent deployments.

  • IronCurtain introduces a single chokepoint where every agent action is mediated by an MCP proxy that can allow, deny, or require human approval.
  • The design supports two execution modes: Code Mode (TypeScript snippets in a V8 isolate with no filesystem, network, or environment access) and Docker Mode (a full agent in a container with --network=none).
  • In Docker Mode, the agent only reaches the outside world through a Unix socket to the MCP proxy and a TLS‑terminating MITM proxy for LLM API calls.
  • The MITM proxy swaps a fake API key for the real key on outbound requests so secrets never enter the agent container.
  • Policy is authored in plain English, then compiled into deterministic rules that the trusted proxy process enforces on every MCP call.
  • The approach is explicitly inspired by Cloudflare’s Code Mode for MCP, which collapses tool usage into typed code executed inside a sandboxed runtime.
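The chokepoint idea can be sketched in a few lines of TypeScript. This is an illustrative model, not IronCurtain's actual API: the names (`ToolCall`, `PolicyRule`, `Gateway`) and the specific rules are assumptions. The key property it demonstrates is that every tool call passes through one deterministic decision function that returns allow, deny, or escalate-to-human, with no LLM in the loop.

```typescript
// Sketch of a chokepoint policy gateway for MCP tool calls.
// All type and rule names here are hypothetical, for illustration only.

type Decision = "allow" | "deny" | "ask_human";

interface ToolCall {
  tool: string;                    // e.g. "fs.read", "http.fetch"
  args: Record<string, unknown>;
}

interface PolicyRule {
  // Deterministic predicate over the call; no model involvement.
  matches: (call: ToolCall) => boolean;
  decision: Decision;
}

class Gateway {
  constructor(
    private rules: PolicyRule[],
    private fallback: Decision = "ask_human", // unmatched calls escalate
  ) {}

  decide(call: ToolCall): Decision {
    for (const rule of this.rules) {
      if (rule.matches(call)) return rule.decision;
    }
    return this.fallback;
  }
}

// Example policy: allow read-only filesystem access, deny shell execution,
// and require human approval for anything else.
const gateway = new Gateway([
  { matches: c => c.tool === "fs.read", decision: "allow" },
  { matches: c => c.tool === "shell.exec", decision: "deny" },
]);

console.log(gateway.decide({ tool: "fs.read", args: { path: "/notes.txt" } })); // allow
console.log(gateway.decide({ tool: "shell.exec", args: { cmd: "rm -rf /" } })); // deny
console.log(gateway.decide({ tool: "http.fetch", args: { url: "https://x" } })); // ask_human
```

Defaulting unmatched calls to `ask_human` (or outright denial) is what makes the proxy a real chokepoint: a prompt-injected agent cannot invent a tool that slips past the policy.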

Why it matters

  • Agent frameworks often default to all‑or‑nothing permissions; a chokepoint proxy enables granular, auditable control over tool execution.
  • Separating credentials from the agent runtime limits blast radius when prompt injection or tool abuse occurs.
  • Running agents in isolated V8 or container sandboxes provides a practical blueprint for safer personal and enterprise agent deployments.

What to do

  • Adopt a proxy model: centralize all tool calls through a policy‑enforcing gateway instead of letting agents call tools directly.
  • Isolate execution: use V8 isolates for code‑only tasks and containers with no network for full‑agent workflows.
  • Keep secrets out of the agent: broker API keys via a trusted proxy so the agent never handles raw credentials.
  • Make policy deterministic: move approvals and rules outside the model; don’t rely on LLM self‑governance.
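The credential-brokering step can be sketched as a header rewrite on the trusted side of the boundary. This is a simplified illustration of the fake-key-swap idea, not IronCurtain's implementation: `PLACEHOLDER_KEY`, `vault`, and `rewriteAuthHeader` are hypothetical names, and a real MITM proxy would do this inside its TLS-terminating request pipeline.

```typescript
// Sketch of the key-swap: the agent only ever sees a placeholder key;
// the trusted proxy substitutes the real credential on outbound requests.
// All names here are hypothetical, for illustration only.

const PLACEHOLDER_KEY = "sk-fake-0000";

// Lives only in the trusted proxy process, never in the agent container.
const vault = new Map<string, string>([[PLACEHOLDER_KEY, "sk-real-secret"]]);

function rewriteAuthHeader(
  headers: Record<string, string>,
): Record<string, string> {
  const auth = headers["authorization"] ?? "";
  const token = auth.replace(/^Bearer\s+/i, "");
  const real = vault.get(token);
  if (!real) throw new Error("unknown credential; refusing to forward");
  return { ...headers, authorization: `Bearer ${real}` };
}

// The agent-side request carries only the fake key...
const agentHeaders = { authorization: `Bearer ${PLACEHOLDER_KEY}` };
// ...and the proxy rewrites it before the request leaves the trusted boundary.
const outbound = rewriteAuthHeader(agentHeaders);
console.log(outbound.authorization); // Bearer sk-real-secret
```

Because the mapping lives only in the proxy, even full compromise of the agent runtime leaks nothing but a placeholder, and the proxy can refuse to forward any request whose credential it does not recognize.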
