CERT — CVE-2026-5752 Terrarium Sandbox Escape via Pyodide Prototype Chain Traversal
AI relevance: Terrarium is a sandbox-based code execution platform used by AI applications to let users safely test and validate code; an escape to root-level RCE undermines the trust boundary that AI code assistants and educational platforms rely on.
What happened
- CERT/CC published VU#414811 disclosing CVE-2026-5752, rated 9.3 on CVSS, a sandbox escape vulnerability in the Terrarium code execution platform.
- The root cause: the `jsglobals` object in `service.ts` creates a mock `document` object as a standard JavaScript object literal, which inherits properties from `Object.prototype`.
- This inheritance chain allows sandboxed code to traverse up to the `Function` constructor, create a function that returns `globalThis`, and from there access Node.js internals, including `require()`.
- The result: arbitrary code execution with root privileges on the host Node.js process, fully escaping the Pyodide WebAssembly sandbox.
- An attacker can access and modify sensitive files (including `/etc/passwd`), read environment variables, reach other services on the container's network, and potentially escape the container entirely.
- No vendor patch is available; CERT was unable to coordinate a fix with the vendor. Mitigations include disabling user-submitted code features, network segmentation, and deploying a WAF.
- The vulnerability was discovered by Jeremy Brown using AI-assisted vulnerability research.
- Published: April 21, 2026.
Why it matters
- Terrarium is used by AI code assistants and educational platforms to provide safe code execution — a sandbox escape directly undermines the trust model of these services.
- The exploit technique (prototype chain traversal in Pyodide/JSGlobals) is a class of bug that could affect other WASM-based sandboxed execution environments used in AI tooling.
- The absence of a vendor patch means deployed instances remain vulnerable until operators implement mitigations or switch platforms.
- The discovery via AI-assisted research highlights that attackers are increasingly using AI to find vulnerabilities in AI infrastructure.
What to do
- If you operate Terrarium instances, disable features that allow user-submitted code until a patch is available.
- Implement network segmentation to isolate sandboxed containers from sensitive internal services.
- Monitor container activity for signs of privilege escalation or lateral movement.
- If you use Pyodide or similar WASM-based sandboxes in your AI stack, audit your `jsglobals` configuration for prototype chain exposure.