Hugging Face LeRobot — Critical Pickle Deserialization RCE (CVE-2026-25874)

AI relevance: LeRobot is Hugging Face's open-source robotics ML framework for embodied AI training and GPU-backed inference — its PolicyServer component runs as a network-facing gRPC service that, when exposed, allows unauthenticated RCE via Python pickle deserialization.

  • CVE-2026-25874 — Critical unauthenticated RCE (CVSS 9.3–9.8) in Hugging Face LeRobot
  • Root cause — The gRPC PolicyServer uses pickle.loads() to deserialize incoming protobuf byte streams on RPC handlers like SendPolicyInstructions and SendObservations
  • Exploitation path — Malicious pickle payloads execute arbitrary code during deserialization, before any isinstance() validation checks are applied
  • Network exposure — The gRPC server uses add_insecure_port() by default, meaning no TLS and no authentication; production deployments commonly bind to 0.0.0.0 to reach external GPU servers
  • #nosec suppression — Affected code sections contained #nosec comments explicitly suppressing security linter warnings, suggesting developers were aware of the deserialization risk but bypassed safeguards
  • Scale — LeRobot has ~24,000 GitHub stars and is used in research and production robotics deployments worldwide
  • Disclosure — Researched and disclosed by security researcher chocapikk on April 28, 2026
  • Remediation — Replace pickle with safetensors or native protobuf fields, switch to add_secure_port() with TLS, and enforce gRPC interceptor-based authentication
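The vulnerable pattern described above can be sketched roughly as follows. The class, method, and field names here are illustrative, not LeRobot's actual code; the point is that `pickle.loads()` executes attacker-controlled code before any validation runs:

```python
import pickle

# Illustrative sketch of the vulnerable pattern: a gRPC handler that
# pickle-deserializes attacker-controlled bytes from a protobuf message.
# Names (PolicyServicer, request.data) are hypothetical placeholders.
class PolicyServicer:
    def SendPolicyInstructions(self, request, context):
        # DANGEROUS: pickle.loads() runs any code embedded in the byte
        # stream *before* the type check below can reject the payload.
        obj = pickle.loads(request.data)  # nosec  <- linter warning suppressed
        if not isinstance(obj, dict):     # too late: code has already run
            raise ValueError("unexpected payload type")
        return obj
```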

Why it matters

This is a textbook example of the ML ecosystem's recurring deserialization problem: Hugging Face itself created safetensors specifically to address pickle-based risks in model serialization, yet LeRobot — one of its own flagship projects — still ships with pickle.loads() handling untrusted network input. The #nosec comments in the affected code show this wasn't an oversight but a conscious decision to suppress known warnings.
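As a reminder of why this class of bug is so severe: pickle lets any object define a `__reduce__` method whose callable is invoked during deserialization, so the payload runs at load time, not at use time. A minimal, harmless demonstration (a real exploit would substitute something like `os.system` for `eval`):

```python
import pickle

# Demonstrates that pickle.loads() executes code during deserialization:
# __reduce__ tells pickle to call an arbitrary callable with given args.
class Payload:
    def __reduce__(self):
        return (eval, ("'pwn' + 'ed'",))

blob = pickle.dumps(Payload())
result = pickle.loads(blob)  # eval() runs here, during load
print(result)                # -> pwned
```

Note that the receiving side never needs the `Payload` class at all — the pickle stream itself carries the callable reference, which is exactly what makes network-facing `pickle.loads()` unauthenticated RCE.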

The attack surface is particularly dangerous for AI deployments because LeRobot's PolicyServer is designed to run on GPU-backed infrastructure, often exposed across distributed environments. Any internet-reachable instance can be fully compromised with a single crafted gRPC call, giving attackers access to the underlying GPU host, model weights, and any connected robotics systems.

This follows a pattern we've seen repeatedly: LMDeploy's CVE-2026-33626 SSRF exploited within 13 hours, and Marimo's CVE-2026-39987 RCE exploited within 10 hours. ML inference servers are being actively hunted and weaponized within hours of disclosure.

What to do

  • Inventory all LeRobot deployments and verify whether the gRPC PolicyServer is bound to 0.0.0.0 or any externally reachable interface
  • Replace pickle.loads() with Hugging Face safetensors, JSON, or native protobuf fields for all incoming data
  • Switch from add_insecure_port() to add_secure_port() with TLS and require token-based authentication via gRPC interceptors
  • Never deploy ML inference servers with direct internet exposure — use network segmentation and zero-trust controls
  • Remove all #nosec suppressions from serialization code paths and treat linter warnings as hard blocks
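A minimal sketch of the deserialization fix, using JSON here for brevity (safetensors or explicit protobuf fields work the same way: they carry data only, never executable code). The function name `parse_instructions` is illustrative:

```python
import json

def parse_instructions(raw: bytes) -> dict:
    """Safe replacement for pickle.loads(): JSON decoding cannot execute
    code, so the isinstance() check below actually runs before anything
    attacker-controlled can take effect."""
    obj = json.loads(raw.decode("utf-8"))
    if not isinstance(obj, dict):
        raise ValueError(f"unexpected payload type: {type(obj).__name__}")
    return obj
```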
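The TLS and authentication steps can be sketched with the standard `grpcio` server API. All names here (`AuthInterceptor`, `build_server`, the port) are illustrative, not LeRobot's actual API, and a real deployment would load the key and certificate from files managed by your PKI:

```python
from concurrent import futures
import grpc

class AuthInterceptor(grpc.ServerInterceptor):
    """Reject every RPC that lacks the expected bearer token."""

    def __init__(self, token: str):
        self._token = token

        def deny(request, context):
            context.abort(grpc.StatusCode.UNAUTHENTICATED, "bad or missing token")

        self._deny = grpc.unary_unary_rpc_method_handler(deny)

    def intercept_service(self, continuation, handler_call_details):
        meta = dict(handler_call_details.invocation_metadata)
        if meta.get("authorization") == f"Bearer {self._token}":
            return continuation(handler_call_details)
        return self._deny

def build_server(token: str, key_pem: bytes, cert_pem: bytes) -> grpc.Server:
    server = grpc.server(
        futures.ThreadPoolExecutor(max_workers=4),
        interceptors=[AuthInterceptor(token)],
    )
    creds = grpc.ssl_server_credentials([(key_pem, cert_pem)])
    # add_secure_port() instead of add_insecure_port(): TLS is mandatory,
    # and binding to localhost avoids accidental 0.0.0.0 exposure.
    server.add_secure_port("127.0.0.1:8443", creds)
    return server
```

The interceptor runs before any servicer method, so unauthenticated callers are rejected without their payload ever reaching a deserialization path.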

Sources