BSI Advisory — vLLM Hardcoded trust_remote_code Bypasses User Security (CVE-2026-27893)
AI relevance: This vulnerability in vLLM, a critical AI inference infrastructure component, undermines security guarantees for deployed LLM systems and highlights supply chain risks in AI tooling dependencies.
The German Federal Office for Information Security (BSI) has issued a critical security warning for vLLM (CVE-2026-27893), revealing that hardcoded trust_remote_code=True values in two model implementation files bypass user security settings, enabling remote code execution even when users explicitly set --trust-remote-code=False.
Key Vulnerability Details
- CVE-2026-27893 — CVSS Base Score: 8.8 (High Risk)
- Remote attack possible — No authentication required
- Affects vLLM <0.18.0 on Linux and UNIX systems
- BSI issued warning on March 26, 2026
- GitHub advisory GHSA-7972-pg2x-xr59 published
Technical Breakdown
The vulnerability exists in two specific model implementation files:
- vllm/model_executor/models/nemotron_vl.py:430 — Hardcoded trust_remote_code=True in the AutoModel.from_config() call
- vllm/model_executor/models/kimi_k25.py:177 — Hardcoded trust_remote_code=True in the cached_get_image_processor() call
These hardcoded values override the user's global --trust-remote-code=False security setting, completely undermining the intended security guarantee.
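The failure mode can be sketched in a few lines. This is an illustrative example, not vLLM's actual source: the loader function names and config object are hypothetical stand-ins that show how a pinned keyword argument silently discards the user's opt-out.

```python
# Illustrative sketch (hypothetical names, not actual vLLM code):
# how a hardcoded keyword argument overrides a user security setting.

def load_model_vulnerable(model_config, auto_model_from_config):
    # Vulnerable pattern: trust_remote_code is pinned to True,
    # ignoring whatever the user passed via --trust-remote-code.
    return auto_model_from_config(model_config, trust_remote_code=True)

def load_model_fixed(model_config, auto_model_from_config):
    # Fixed pattern: propagate the user's setting, defaulting to False.
    trust = getattr(model_config, "trust_remote_code", False)
    return auto_model_from_config(model_config, trust_remote_code=trust)
```

Even with trust_remote_code=False on the config object, the vulnerable path always requests remote code execution; the fixed path forwards the user's choice.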
Impact on AI Security
- Remote code execution — Attackers can craft malicious model repositories
- Security bypass — User security opt-out is completely ineffective
- Supply chain attack — Compromised model repositories can execute arbitrary code
- Enterprise risk — Affects production AI inference deployments
Why This Matters for AI Infrastructure
vLLM is a critical component in the AI inference stack, widely used for high-performance LLM serving. This vulnerability demonstrates how subtle implementation errors in AI infrastructure can completely undermine security guarantees, exposing organizations to supply chain attacks through malicious model artifacts.
The hardcoded trust settings survived multiple previous security patches (CVE-2025-66448 and CVE-2026-22807), highlighting the challenge of securing complex AI tooling dependencies.
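Because this pattern has slipped past multiple patch cycles, defenders may want to audit dependency source trees directly rather than rely on release notes. A minimal sketch of such a scan, assuming a plain regex match on `.py` files is sufficient (it will not catch aliased or dynamically built keyword arguments):

```python
# Minimal audit sketch: scan a source tree for hardcoded
# trust_remote_code=True, the pattern behind this CVE.
import re
from pathlib import Path

PATTERN = re.compile(r"trust_remote_code\s*=\s*True")

def find_hardcoded_trust(root: str) -> list[tuple[str, int]]:
    """Return (file path, line number) pairs where the pattern occurs."""
    hits = []
    for path in sorted(Path(root).rglob("*.py")):
        text = path.read_text(errors="ignore")
        for lineno, line in enumerate(text.splitlines(), 1):
            if PATTERN.search(line):
                hits.append((str(path), lineno))
    return hits
```

Running this against an installed vLLM tree (e.g. site-packages/vllm) would flag the two files named above, along with any legitimate call sites that need manual review.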
Remediation Steps
- Upgrade to vLLM ≥0.18.0 immediately
- Replace hardcoded values with self.config.model_config.trust_remote_code
- Implement strict model provenance — Only load models from trusted sources
- Network segmentation — Isolate AI inference systems from critical infrastructure
- Runtime monitoring — Detect unexpected process execution and network activity
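As part of remediation, the upgrade step can be verified programmatically. A small sketch of a version gate, assuming plain X.Y.Z version strings (pre-release suffixes like rc1 would need a real version parser such as packaging.version):

```python
# Sketch: check an installed vLLM version string against the
# patched release (0.18.0). Assumes plain numeric X.Y.Z versions.

def _parse(v: str) -> tuple:
    # "0.17.2" -> (0, 17, 2); tuples compare component-wise.
    return tuple(int(p) for p in v.split(".")[:3])

def is_patched(installed: str, min_version: str = "0.18.0") -> bool:
    return _parse(installed) >= _parse(min_version)
```

In practice the installed version could be obtained via importlib.metadata.version("vllm") and fed to is_patched() in a CI or fleet-audit check.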