ONNX — Zero-interaction model supply-chain attack (CVE-2026-28500)
AI relevance: ONNX is a foundational interchange format for ML model distribution, and its `onnx.hub.load()` function is a primary vector for pulling models from registries; a trust-verification bypass in this path is therefore a direct supply-chain risk for every downstream AI pipeline.
- CVE-2026-28500 (CVSS 8.6) affects ONNX versions up to and including 1.20.1.
- The `onnx.hub.load()` function includes a `silent=True` parameter that suppresses all security warnings and confirmation prompts about untrusted model sources.
- Attackers can craft a malicious ONNX model that, when loaded with `silent=True`, bypasses repository trust verification entirely; no user interaction is required beyond the initial load call.
- When chained with file-system vulnerabilities, the attack enables silent exfiltration of SSH keys and cloud credentials from the victim's machine.
- The flaw requires no authentication and has low attack complexity.
- No patch is currently available; no public PoC has been released at time of writing.
- The vulnerability is structurally similar to prior supply-chain risks in `pip install` and npm, except that the "package" is a model artifact and the install hook is the `.load()` call.
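To make the trust boundary concrete, here is a minimal defensive sketch. The wrapper name `safe_hub_load` and the injected `loader` callable are illustrative, not part of the onnx API; the loader is passed in so the sketch runs without the onnx package installed.

```python
def safe_hub_load(model_name, loader, **kwargs):
    """Refuse any model load that tries to suppress trust warnings.

    `loader` is expected to behave like onnx.hub.load (a model name
    plus keyword arguments, including a `silent` flag). Injecting it
    keeps this sketch self-contained and testable.
    """
    if kwargs.get("silent"):
        # silent=True is exactly the condition CVE-2026-28500 abuses:
        # trust-verification warnings are suppressed, so refuse outright.
        raise ValueError(
            "silent=True suppresses repository trust verification "
            "(CVE-2026-28500); refusing to load."
        )
    kwargs["silent"] = False  # force warnings/prompts to surface
    return loader(model_name, **kwargs)
```

In practice you would pass `onnx.hub.load` itself as `loader`; the point of the sketch is that the `silent` flag should be policy-controlled, never a per-call convenience.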
Why it matters
- ONNX is used across the ML ecosystem (PyTorch, TensorFlow, scikit-learn export). A trust bypass in its model-loading path affects a massive surface area.
- ML practitioners routinely load models from Hugging Face, GitHub, or shared storage, often in CI/CD pipelines or training notebooks where `silent=True` is a convenience, not a red flag.
- This CVE re-emphasizes that model files are executable artifacts. Loading a model is analogous to running a package installer; trust boundaries must be enforced.
What to do
- Pin versions: Audit your environments for ONNX ≤ 1.20.1. Remove or restrict `onnx.hub.load()` usage.
- Never use `silent=True`: In any production or semi-trusted context, ensure model loading always surfaces warnings. Better yet, use explicit hash verification.
- Network isolation: Run model-loading code in sandboxed environments with egress filtering to detect unauthorized exfiltration attempts.
- Monitor for patch: Watch the ONNX GitHub repo for a security advisory and patched release.
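The version-audit and hash-verification steps above can be sketched with the standard library. The function names and the pinned-digest workflow are assumptions for illustration; the version check assumes plain `X.Y.Z` release strings.

```python
import hashlib


def onnx_version_vulnerable(version: str) -> bool:
    """Return True if an ONNX version string is in the affected range (<= 1.20.1).

    Naive numeric-tuple comparison; assumes plain X.Y.Z release strings
    (no pre-release or local-version suffixes).
    """
    parts = tuple(int(p) for p in version.split(".")[:3])
    return parts <= (1, 20, 1)


def verify_model_sha256(path: str, expected_hex: str) -> bool:
    """Check a downloaded .onnx file against a pinned SHA-256 digest.

    Load the model only if this returns True; a mismatch means the
    artifact is not the one you vetted.
    """
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest() == expected_hex
```

For real environments, `packaging.version.Version` handles pre-release strings more robustly than the tuple comparison above, and the expected digest should come from a source you control (a lockfile or internal registry), not from the same location as the model itself.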