banks CVE-2026-44209 — Jinja2 SSTI in Prompt Template Library Leads to RCE

AI relevance: banks is a Python prompt template library used in LLM applications; its use of unsandboxed Jinja2 means any app that passes user-controlled input to Prompt() templates is one SSTI away from remote code execution on the model host.

  • CVE-2026-44209 affects banks ≤ 2.4.1, a prompt template library for LLM applications distributed on PyPI.
  • The library initializes a global jinja2.Environment() without sandboxing in src/banks/env.py. Applications passing user-supplied strings as the template argument to Prompt() are vulnerable to Server-Side Template Injection (SSTI).
  • A single Jinja2 payload like {{ self.__init__.__globals__.__builtins__.__import__('os').popen('id').read() }} executes arbitrary OS commands on the host.
  • The PoC confirms both command execution and arbitrary file write on the host system.
  • This is a banks configuration flaw, not a Jinja2 bug; the fix switches to jinja2.sandbox.SandboxedEnvironment (see the sketch after this list).
  • This is the third time the same root cause has surfaced in AI/LLM template libraries: CVE-2024-41950 (Haystack, CVSS 7.5) and CVE-2025-25362 (spacy-llm) both stem from unsandboxed Jinja2 rendering.
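
To make the root cause concrete, the sketch below uses plain Jinja2 rather than banks' own Prompt API (the wiring in src/banks/env.py is summarized, not reproduced). Rendering an attacker-controlled string as template source through a default Environment evaluates the payload and runs the OS command; a SandboxedEnvironment refuses the underscore attribute access and raises SecurityError instead.

    # Illustration only: the unsandboxed render below really executes `id` on your machine.
    from jinja2 import Environment
    from jinja2.exceptions import SecurityError
    from jinja2.sandbox import SandboxedEnvironment

    payload = (
        "{{ self.__init__.__globals__.__builtins__"
        ".__import__('os').popen('id').read() }}"
    )

    # Default Environment: the expression is evaluated, so the OS command runs.
    unsafe_env = Environment()
    print(unsafe_env.from_string(payload).render())  # prints the output of `id`

    # SandboxedEnvironment: access to __init__ is flagged as unsafe and rendering fails.
    safe_env = SandboxedEnvironment()
    try:
        safe_env.from_string(payload).render()
    except SecurityError as exc:
        print(f"blocked by sandbox: {exc}")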

Why it matters

Prompt template libraries sit directly between user input and LLM execution. When they render templates through an unsandboxed engine, attackers can pivot from prompt manipulation to full host RCE. This pattern has now been caught in three separate AI framework libraries, suggesting a systemic design oversight in the Python LLM tooling ecosystem.

What to do

  • Upgrade to banks 2.4.2+ immediately.
  • Audit any LLM application that passes user-controlled template strings to prompt renderers — check for unsandboxed Jinja2, Mako, or similar engines.
  • If you are building prompt templating yourself, default to sandboxed environments and never render untrusted input as template source (see the sketch below).
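
The safe pattern inverts the vulnerable one: keep the template source static and developer-authored, bind untrusted input only as template data, and render through a sandboxed environment for defense in depth. A minimal sketch follows; the render_prompt helper and template text are illustrative, not part of banks.

    from jinja2.sandbox import SandboxedEnvironment

    env = SandboxedEnvironment()

    # Template source is static and written by the developer, never by the user.
    PROMPT_TEMPLATE = "Summarize the following user request:\n{{ user_input }}"

    def render_prompt(user_input: str) -> str:
        """Bind untrusted input as a variable; Jinja2 never parses it as a template."""
        return env.from_string(PROMPT_TEMPLATE).render(user_input=user_input)

    # Even an SSTI-style payload comes back as inert text in the rendered prompt.
    print(render_prompt("{{ self.__init__.__globals__ }}"))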

Sources