CVE-2026-34070 — LangChain Path Traversal in Legacy Prompt Template Loading

AI relevance: LangChain's legacy prompt-loading functions read files from paths embedded in user-controlled config dicts without validating against directory traversal or absolute path injection. Any AI application that passes untrusted prompt configurations to these APIs therefore allows an attacker to read arbitrary files on the host filesystem.

What happened

  • CVE-2026-34070 (GHSA-qh6h-p6c9-ff54) was published for langchain-core (pip), rated High severity, affecting the legacy functions load_prompt() and load_prompt_from_config() and the .save() method on prompt classes.
  • The functions read files from paths embedded in deserialized config dicts, such as template_path, suffix_path, prefix_path, examples, and example_prompt_path, with no validation against .. traversal or absolute path injection (a minimal exploit sketch follows this list).
  • Attackers can read .txt files via template paths (e.g., cloud-mounted secrets, internal system prompts) and .json/.yaml files via example paths (e.g., ~/.docker/config.json, ~/.azure/accessTokens.json, Kubernetes manifests, CI/CD configs).
  • The fix in langchain-core ≥ 1.2.22 adds path validation rejecting absolute paths and traversal sequences by default, with an allow_dangerous_paths=True escape hatch for trusted inputs.
  • LangChain formally deprecated the legacy APIs; they will be removed in 2.0.0. Developers should migrate to dumpd/dumps/load/loads from langchain_core.load, which use an allowlist-based security model and do not perform filesystem reads (a migration sketch also follows this list).
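
What a malicious config could look like, as a minimal sketch: the config shape and the load_prompt_from_config() entry point follow the advisory's description above (assuming the langchain_core.prompts.loading import path), the traversal target is hypothetical, and on patched langchain-core (≥ 1.2.22) the call is expected to be rejected unless allow_dangerous_paths=True is set. On vulnerable versions the path is followed verbatim:

    # Hypothetical attacker-controlled "prompt config", e.g. submitted to a
    # low-code builder or prompt marketplace that forwards it to the legacy loader.
    from langchain_core.prompts.loading import load_prompt_from_config

    malicious_config = {
        "_type": "prompt",
        "input_variables": [],
        # Traversal out of the intended prompt directory to an arbitrary .txt file
        # (the advisory notes template paths are limited to .txt).
        "template_path": "../../../srv/app/config/internal_system_prompt.txt",
    }

    prompt = load_prompt_from_config(malicious_config)
    # On unpatched versions the file contents land in the template and can leak
    # through any output channel the application exposes (LLM response, logs, API).
    print(prompt.template)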
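
A minimal migration sketch, assuming a plain PromptTemplate: dumps() produces a JSON string and loads() reconstructs the object through the allowlist-based loader, so no filesystem paths are involved and there is nothing to traverse. The template text is illustrative.

    # Serialize/deserialize prompts with langchain_core.load instead of
    # file-path-based configs; nothing here touches the filesystem.
    from langchain_core.load import dumps, loads
    from langchain_core.prompts import PromptTemplate

    prompt = PromptTemplate.from_template("Summarize the following text:\n{text}")

    serialized = dumps(prompt)    # JSON string, safe to store or transmit
    restored = loads(serialized)  # rebuilt via the allowlist-based loader

    assert restored.format(text="hello world") == prompt.format(text="hello world")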

Why it matters

Path traversal is a classic web vulnerability that persists in AI frameworks when they expose file-loading primitives to untrusted input. LangChain is one of the most widely deployed AI orchestration libraries, and any application accepting user-supplied prompt configurations (particularly low-code AI builders, prompt marketplaces, and API wrappers) inherits this risk. The file-extension constraint (.txt, .json, .yaml) limits the blast radius, but these extensions cover exactly the formats used for secrets and cloud credentials in production deployments.

What to do

  • Upgrade langchain-core to ≥ 1.2.22 immediately.
  • Audit any code paths that pass user-controlled dicts to load_prompt() or load_prompt_from_config().
  • Migrate to the newer langchain_core.load serialization APIs, which use a safe allowlist model.
  • If you run a low-code AI platform accepting structured prompt configs, validate inputs against a fixed schema before passing them to any LangChain load function (see the validation sketch below).
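
A sketch of that pre-validation step, under stated assumptions: the field names mirror the path keys listed in the advisory, PROMPT_ROOT is a hypothetical deployment-specific directory, and this complements rather than replaces the upgrade to ≥ 1.2.22.

    # Validate an untrusted prompt config before it reaches any LangChain loader.
    from pathlib import Path

    PROMPT_ROOT = Path("/srv/app/prompts").resolve()  # hypothetical trusted root
    PATH_KEYS = {"template_path", "suffix_path", "prefix_path", "example_prompt_path"}

    def validate_prompt_config(config: dict) -> dict:
        # Note: the legacy "examples" key may also hold a path string and would
        # need the same treatment if you accept it at all.
        for key in PATH_KEYS & config.keys():
            raw = Path(str(config[key]))
            if raw.is_absolute():
                raise ValueError(f"{key}: absolute paths are not allowed")
            resolved = (PROMPT_ROOT / raw).resolve()
            if not resolved.is_relative_to(PROMPT_ROOT):  # Python 3.9+
                raise ValueError(f"{key}: path escapes {PROMPT_ROOT}")
            config[key] = str(resolved)
        return config

Run validate_prompt_config() on every inbound config, and reject unknown keys at the schema layer so only the fields your application actually needs ever reach a loader.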

Sources