CVE-2026-44843 — LangChain Unsafe Deserialization via Overly Broad load() Allowlists
AI relevance: LangChain is one of the most widely used frameworks for building AI agent pipelines. Unsafe deserialization in its core library can let attackers inject malicious serialized objects through untrusted run inputs, potentially escalating to arbitrary code execution in agent applications that accept user-controlled structured data.
What happened
- CVE-2026-44843 was published for langchain-core (pip), rated High severity, affecting older runtime code paths that deserialize run inputs, outputs, or application-controlled payloads.
- The root cause is overly broad object allowlists in `load()` calls (specifically `allowed_objects="all"`), which permit any trusted LangChain-serializable object to be revived, broader than these paths require.
- Attacker-supplied LangChain serialized constructor dictionaries can cause trusted runtime paths to instantiate classes with untrusted constructor arguments.
- Known affected API surfaces include `RunnableWithMessageHistory`, `astream_log()`, and `astream_events(version="v1")`.
- The fix also addresses a related secret-marker validation bypass in the serialization layer (`_is_lc_secret`) that allowed attacker-controlled constructor dictionaries to bypass escaping during `dumps()` → `loads()` round-trips.
- Applications are only exposed when they accept untrusted structured input (e.g. JSON), fail to validate it into an inert schema before invoking LangChain, and use an affected API path that later deserializes that data.
Why it matters
LangChain underpins a massive number of production AI agent applications, RAG pipelines, and tool-use systems. Unsafe deserialization is a classic attack primitive that, in the AI agent context, can lead to object injection attacks that manipulate agent behavior, exfiltrate secrets embedded in serialized prompts, or escalate to code execution when deserialized objects trigger side effects. The breadth of affected API surfaces — including message history and streaming endpoints — means many common agent patterns may be exposed.
What to do
- Upgrade langchain-core to the patched version immediately.
- Audit your application's input handling: ensure all user-facing structured input is validated against a fixed schema and coerced to inert types before reaching LangChain API calls.
- Review any use of shared prompt stores, Hub artifacts with model configuration, or serialization stores that load LangChain objects from untrusted sources.
- Check for use of `RunnableWithMessageHistory`, `astream_log()`, or `astream_events(version="v1")` with user-controlled input; these are confirmed affected surfaces.
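The schema-validation advice above can be sketched with the standard library alone; the field names (`session_id`, `message`) are hypothetical, and a production application would more likely use a validation library such as pydantic, but the principle is the same: accept only known fields with inert scalar types before anything reaches a LangChain API call.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ChatRequest:
    """Inert, fixed-schema view of user input (hypothetical fields)."""
    session_id: str
    message: str

def parse_request(raw: object) -> ChatRequest:
    """Coerce untrusted JSON into inert types, rejecting everything else.

    Unknown keys and non-string values raise, so a serialized constructor
    dictionary can never flow through into a LangChain call.
    """
    allowed = {"session_id", "message"}
    if not isinstance(raw, dict) or set(raw) - allowed:
        raise ValueError("unexpected or missing fields in request")
    session_id, message = raw.get("session_id"), raw.get("message")
    if not isinstance(session_id, str) or not isinstance(message, str):
        raise ValueError("session_id and message must be plain strings")
    return ChatRequest(session_id=session_id, message=message)

# The validated values are plain strings, safe to hand to an affected
# surface such as RunnableWithMessageHistory instead of raw JSON.
req = parse_request({"session_id": "abc", "message": "hello"})
print(req.message)  # → hello
```

Rejecting unknown keys outright (rather than silently dropping them) is the important design choice here: it turns any injected `lc`/`type`/`kwargs` envelope into a hard validation error instead of data that travels further into the pipeline.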