GitHub Advisory — LangChainJS serialization injection (CVE-2025-68665)

AI relevance: LangChainJS powers many agent backends; this deserialization flaw can turn agent metadata into secret exfiltration or unsafe object instantiation.

  • CVE-2025-68665 (GHSA-r399-636x-v7f6) affects @langchain/core and langchain package lines prior to patched releases.
  • Bug in Serializable.toJSON() failed to escape user-controlled objects containing lc keys used to mark serialized LangChain objects.
  • Injected data in metadata, additional_kwargs, or response_metadata can be treated as trusted LangChain objects during load().
  • Attackers can extract environment secrets by injecting {"lc":1,"type":"secret","id":["ENV_VAR"]} when secretsFromEnv is enabled.
  • Injected constructor structures can instantiate any class exposed in import maps with attacker-controlled parameters.
  • Patch adds escaping, sets secretsFromEnv default to false, and introduces maxDepth to curb deeply nested payloads.
  • CVSS v3.1 score: 8.6 (high).

Why it matters

  • Agent pipelines often serialize/deserialize model outputs; prompt injection can plant lc structures that later execute during load.
  • Secret extraction from environment variables is a direct path to API key theft and downstream tool abuse.
  • Import-map instantiation means attackers can trigger side effects in classes your agent trusts.

What to do

  • Upgrade to patched versions: @langchain/core 1.1.8+ (or 0.3.80+) and langchain 1.2.3+ (or 0.3.37+).
  • Avoid deserializing untrusted data; sanitize/strip user-controlled metadata fields before serialization.
  • Keep secretsFromEnv disabled unless data is fully trusted; prefer explicit secretsMap.
  • Limit import maps to trusted classes only and treat them as privileged configuration.

Sources