Zafran — ChainLeak: Chainlit AI Framework Bugs Enable Cloud Takeover

  • Zafran Labs discovered two critical vulnerabilities, collectively dubbed "ChainLeak", in Chainlit, a widely used open-source framework for building conversational AI applications.
  • CVE-2026-22218 (Arbitrary File Read): The flaw lies in how Chainlit handles message elements (file and image attachments). An attacker can read arbitrary files from the server, including .env files containing cloud API keys, database credentials, and user data.
  • CVE-2026-22219 (SSRF): A server-side request forgery flaw lets attackers make the Chainlit server issue requests to internal services—including cloud metadata endpoints (AWS IMDSv1, GCP metadata)—to steal IAM credentials and pivot deeper into the infrastructure.
  • Both vulnerabilities require no user interaction and can be triggered against internet-facing deployments.
  • Zafran confirmed exploitation against real-world, internet-facing AI applications operated by major enterprises.
  • Chainlit serves as the UI/frontend layer in many AI application stacks, sitting in front of LangChain, LlamaIndex, or custom orchestration. A compromise here leaks everything the backend can reach.
  • The discovery launches Zafran's "Project DarkSide"—an ongoing initiative to audit the building blocks of AI applications for well-known vulnerability classes embedded in new AI infrastructure.
  • The flaws illustrate a recurring pattern: classic web vulnerabilities (file read, SSRF) reappearing in AI frameworks that were built for rapid prototyping and often deployed with insecure defaults.
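Because the file-read bug belongs to the classic path-traversal class described above, the standard defense applies: canonicalize any client-supplied path and refuse anything that escapes the intended directory. The sketch below is illustrative only, not Chainlit's actual code; the `UPLOAD_ROOT` location and function name are assumptions.

```python
from pathlib import Path

# Hypothetical directory where an app stores message attachments.
UPLOAD_ROOT = Path("/srv/app/uploads").resolve()

def resolve_attachment(requested: str) -> Path:
    """Resolve a client-supplied attachment path, refusing anything that
    escapes UPLOAD_ROOT -- the textbook arbitrary-file-read defense."""
    candidate = (UPLOAD_ROOT / requested).resolve()
    # resolve() collapses "../" sequences and symlinks, so a traversal
    # attempt (e.g. "../../.env") lands outside UPLOAD_ROOT and fails here.
    if not candidate.is_relative_to(UPLOAD_ROOT):
        raise PermissionError(f"path escapes upload root: {requested}")
    return candidate
```

Note that joining with `pathlib` also neutralizes absolute-path inputs such as `/etc/passwd`, since `Path("/srv/app/uploads") / "/etc/passwd"` discards the prefix and the `is_relative_to` check then rejects the result.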

Why it matters

  • Chainlit is a popular choice for enterprise AI chatbot UIs. Any deployment exposed to the internet with default configs is at risk of full cloud credential theft.
  • AI application stacks are multi-layered (UI → Agents → Orchestration → LLM), and a vulnerability in one layer often cascades into full environment compromise due to shared credential contexts.
  • These are not novel attack classes—they are textbook web security bugs hiding in AI wrappers, which means defenders already know how to fix them if they know to look.

What to do

  • Patch Chainlit immediately: Upgrade to the latest version that addresses both CVEs.
  • Audit AI framework deployments: Check whether your Chainlit instances (or similar AI UIs) are exposed to the internet.
  • Enforce IMDSv2: On AWS, disable IMDSv1 to block the most common SSRF-to-credential-theft path.
  • Rotate exposed credentials: If you've run an internet-facing Chainlit instance, assume keys in .env and cloud metadata may have been accessed.
  • Segment networks: Limit the internal services reachable from AI frontend containers.
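Alongside IMDSv2 and network segmentation, an application-level egress guard can block the SSRF-to-metadata path directly: before the server fetches any attacker-influenced URL, resolve its host and refuse private, loopback, and link-local destinations (169.254.169.254 is the metadata endpoint on both AWS and GCP). A minimal sketch using only the standard library; the function name is an assumption, not an existing API:

```python
import ipaddress
import socket
from urllib.parse import urlsplit

def assert_public_url(url: str) -> None:
    """Reject URLs that resolve to private, loopback, or link-local
    addresses -- including 169.254.169.254, the cloud metadata endpoint
    abused in SSRF-to-credential-theft chains."""
    host = urlsplit(url).hostname
    if host is None:
        raise ValueError(f"no host in URL: {url}")
    # getaddrinfo handles both IP literals and DNS names, so a hostname
    # that *resolves* to an internal address is caught as well.
    for info in socket.getaddrinfo(host, None):
        addr = ipaddress.ip_address(info[4][0])
        if not addr.is_global:
            raise PermissionError(f"{url} resolves to non-public {addr}")
```

This check belongs immediately before the outbound request is issued; note it does not defend against DNS rebinding on its own, so it complements rather than replaces IMDSv2 and egress firewalling.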

Sources