Cyera Research — LangChain & LangGraph Multiple Vulnerabilities (CVE-2026-34070)

AI relevance: These vulnerabilities in widely used AI development frameworks put AI applications at risk of data exfiltration and system compromise through common tooling patterns.

Security researchers at Cyera have disclosed multiple critical vulnerabilities in the popular LangChain and LangGraph frameworks, which millions of developers worldwide use to build AI applications. The flaws include path traversal, insecure deserialization, and SQL injection, and could allow attackers to read sensitive data and compromise AI systems.

Vulnerability Overview

  • CVE-2026-34070 — Path traversal allowing arbitrary file access
  • CVE-2025-68664 — Critical deserialization exposing API keys and environment secrets
  • CVE-2025-67644 — SQL injection in LangGraph's SQLite checkpoint implementation
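Path traversal bugs of the kind described in CVE-2026-34070 typically arise when a user-supplied name is joined onto a base directory without canonicalization. The sketch below is a hypothetical stdlib-only helper (not LangChain's actual code) showing the defensive check that prevents the escape:

```python
from pathlib import Path

def load_template(base_dir: str, template_name: str) -> str:
    """Load a prompt template, rejecting names that escape base_dir.

    Hypothetical helper for illustration: without the resolve() /
    is_relative_to() check, a name like "../../etc/passwd" would read
    files outside the intended template directory.
    """
    base = Path(base_dir).resolve()
    candidate = (base / template_name).resolve()
    if not candidate.is_relative_to(base):  # Python 3.9+
        raise ValueError(f"path escapes template directory: {template_name}")
    return candidate.read_text()
```

The same pattern applies to any loader that accepts an externally controlled file name or config path.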

Impact Analysis

These vulnerabilities affect the core infrastructure that connects AI models with business applications, exposing various types of enterprise data:

  • File system access — Path traversal enables reading arbitrary files
  • Secret exposure — Deserialization flaws leak API keys and environment variables
  • Conversation history — SQL injection compromises chat logs and session data
  • Supply chain risk — Hundreds of dependent libraries are affected
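The SQL injection risk above follows the classic pattern: string-formatting attacker-influenced values into a query instead of using parameters. A minimal sketch, assuming a hypothetical checkpoints table rather than LangGraph's actual schema:

```python
import sqlite3

def get_checkpoint_unsafe(conn: sqlite3.Connection, thread_id: str):
    # VULNERABLE pattern: interpolating an attacker-controlled thread_id
    # lets input like "x' OR '1'='1" rewrite the query and dump all rows.
    return conn.execute(
        f"SELECT state FROM checkpoints WHERE thread_id = '{thread_id}'"
    ).fetchall()

def get_checkpoint_safe(conn: sqlite3.Connection, thread_id: str):
    # Parameterized query: the driver treats thread_id purely as data.
    return conn.execute(
        "SELECT state FROM checkpoints WHERE thread_id = ?", (thread_id,)
    ).fetchall()
```

Because checkpoint stores hold conversation state, a single injectable query can expose every session in the database, not just the attacker's own.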

Framework Significance

LangChain and LangGraph are central to the AI development ecosystem with over 60 million weekly downloads. Their widespread adoption means these vulnerabilities have far-reaching implications:

  • Massive dependency network — Hundreds of libraries depend on these frameworks
  • Enterprise adoption — Used by major companies for AI application development
  • Critical infrastructure — Forms the connection layer between AI models and business systems

Why This Matters for AI Security

These vulnerabilities demonstrate that the greatest threat to enterprise AI data often lies in the basic infrastructure connecting AI to business applications. This layer remains vulnerable to some of the oldest attack techniques in cybersecurity, despite handling modern AI workloads.

The research highlights that securing AI systems requires more than just patching frameworks — developers must also audit code that passes external or user-controlled configurations to vulnerable functions.

Remediation Recommendations

  • Update immediately — Upgrade to the latest LangChain and LangGraph versions
  • Audit configurations — Review code passing external data to load_prompt_from_config()
  • Keep secrets_from_env=False — Leave environment-variable secret loading disabled when deserializing untrusted data
  • Treat LLM outputs as untrusted — Handle language model outputs as potentially malicious input
  • Implement defense in depth — Layer technical controls, monitoring, and access restrictions
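The "treat LLM outputs as untrusted" recommendation can be made concrete with a validation layer between the model and any tool dispatch. This is a hypothetical guard (not a LangChain API): the tool allowlist, function name, and JSON shape are illustrative assumptions.

```python
import json

# Explicit allowlist: anything the model proposes outside this set is rejected.
ALLOWED_TOOLS = {"search", "summarize"}

def validate_tool_call(raw_llm_output: str) -> dict:
    """Parse a model-proposed tool call and enforce the allowlist.

    Model output is handled like any other untrusted input: unexpected
    tools or malformed arguments are rejected before dispatch.
    """
    call = json.loads(raw_llm_output)
    tool = call.get("tool")
    if tool not in ALLOWED_TOOLS:
        raise ValueError(f"tool not allowed: {tool!r}")
    args = call.get("args")
    if not isinstance(args, dict) or not all(isinstance(k, str) for k in args):
        raise ValueError("malformed arguments")
    return {"tool": tool, "args": args}
```

The same mindset applies to auditing configurations: any value that originated outside the trust boundary, whether from a user, a file, or a model, gets validated before it reaches a loader or deserializer.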
