GitHub Advisory — LangChainJS serialization injection (CVE-2025-68665)
AI relevance: LangChainJS powers many agent backends; this deserialization flaw can turn agent metadata into secret exfiltration or unsafe object instantiation.
- CVE-2025-68665 (GHSA-r399-636x-v7f6) affects
@langchain/coreandlangchainpackage lines prior to patched releases. - Bug in
Serializable.toJSON()failed to escape user-controlled objects containinglckeys used to mark serialized LangChain objects. - Injected data in
metadata,additional_kwargs, orresponse_metadatacan be treated as trusted LangChain objects duringload(). - Attackers can extract environment secrets by injecting
{"lc":1,"type":"secret","id":["ENV_VAR"]}whensecretsFromEnvis enabled. - Injected constructor structures can instantiate any class exposed in import maps with attacker-controlled parameters.
- Patch adds escaping, sets
secretsFromEnvdefault tofalse, and introducesmaxDepthto curb deeply nested payloads. - CVSS v3.1 score: 8.6 (high).
Why it matters
- Agent pipelines often serialize/deserialize model outputs; prompt injection can plant
lcstructures that later execute during load. - Secret extraction from environment variables is a direct path to API key theft and downstream tool abuse.
- Import-map instantiation means attackers can trigger side effects in classes your agent trusts.
What to do
- Upgrade to patched versions:
@langchain/core1.1.8+ (or 0.3.80+) andlangchain1.2.3+ (or 0.3.37+). - Avoid deserializing untrusted data; sanitize/strip user-controlled metadata fields before serialization.
- Keep
secretsFromEnvdisabled unless data is fully trusted; prefer explicitsecretsMap. - Limit import maps to trusted classes only and treat them as privileged configuration.