Red Hat RHEL AI — Two InstructLab CVEs: Path Traversal & trust_remote_code RCE
AI relevance: Red Hat's enterprise AI platform (RHEL AI 3) bundles InstructLab for local model fine-tuning — two newly disclosed CVEs expose both file system access and arbitrary code execution paths in organizations running AI training pipelines.
What was found
Two vulnerabilities were disclosed in InstructLab, the open-source fine-tuning framework at the core of RHEL AI:
- CVE-2026-6855 — Path traversal in chat session handler. A local attacker can manipulate the `logs_dir` parameter to escape the intended directory scope, reading or writing files outside the chat session's sandbox. This affects the session logging component used during interactive InstructLab runs.
- CVE-2026-6859 — Arbitrary code execution via hardcoded trust_remote_code. The `linux_train.py` script hardcodes `trust_remote_code=True` when loading models from HuggingFace Hub. A remote attacker who publishes a crafted model can achieve arbitrary Python code execution when a user runs `ilab train`, `ilab download`, or `ilab generate` pointing to the malicious repository.
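The traversal class of bug behind CVE-2026-6855 is typically fixed by resolving the user-controlled path and verifying it stays inside the sandbox before any file access. A minimal sketch of that containment check, with the function name and directory layout invented for illustration (this is not InstructLab's actual code):

```python
import os

def resolve_logs_dir(base_dir: str, logs_dir: str) -> str:
    """Resolve a user-supplied logs_dir, refusing paths that escape base_dir.

    Illustrative sketch of the check that blocks CVE-2026-6855-style
    traversal; not InstructLab's actual implementation.
    """
    candidate = os.path.realpath(os.path.join(base_dir, logs_dir))
    base = os.path.realpath(base_dir)
    # realpath collapses "../" segments and follows symlinks, so a
    # containment check on the resolved paths catches traversal attempts.
    if os.path.commonpath([base, candidate]) != base:
        raise ValueError(f"logs_dir escapes the session sandbox: {logs_dir}")
    return candidate
```

The key detail is comparing resolved paths: a naive prefix check on the raw string misses both `../` sequences and symlink tricks.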
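On the trust_remote_code side, the defensive pattern is to force the flag off at a single choke point rather than trusting every call site to pass it correctly. A sketch under the assumption that all model loading is funneled through one wrapper; `safe_load` and the `loader` callable are hypothetical stand-ins for, e.g., a `from_pretrained`-style function:

```python
def safe_load(loader, repo_id: str, **kwargs):
    """Force trust_remote_code=False regardless of caller-supplied kwargs.

    `loader` stands in for a model-loading callable such as transformers'
    AutoModelForCausalLM.from_pretrained; this wrapper is a sketch of the
    hardening pattern, not InstructLab's fix.
    """
    if kwargs.get("trust_remote_code"):
        # Fail loudly rather than silently downgrading an explicit opt-in.
        raise ValueError("trust_remote_code=True is disallowed by policy")
    kwargs["trust_remote_code"] = False
    return loader(repo_id, **kwargs)
```

With this in place, a crafted HuggingFace repo containing custom modeling code fails to load instead of executing attacker-controlled Python.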
Why it matters
- RHEL AI targets enterprise teams fine-tuning models on-premises — the exact audience that expects hardened, security-reviewed tooling.
- The trust_remote_code flaw means any model pulled from HuggingFace can execute arbitrary code the moment it is loaded, turning a standard workflow step into a supply-chain attack vector.
- Organizations using RHEL AI in regulated environments may unknowingly expose internal systems through the path traversal, especially where InstructLab runs under shared service accounts.
- Both flaws require no network-facing service — they're triggered through normal InstructLab usage, making them harder to detect via perimeter monitoring.
What to do
- Apply Red Hat patches as soon as they are available via your RHEL channels.
- Until patched, restrict InstructLab to models from trusted, allowlisted HuggingFace repositories — verify by repository name and cryptographic hash.
- Audit InstructLab logs directories for anomalous file writes outside expected paths.
- Consider running InstructLab sessions in isolated containers with strict filesystem boundaries, even in "trusted" internal environments.
- Review `ilab` command history for any pulls from unfamiliar HuggingFace sources.
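For the hash-verification step above, one lightweight approach is to pin each approved repository file to an expected SHA-256 digest and check local downloads against that list before use. A sketch with a hypothetical allowlist (the repo path is invented and the digest is a placeholder, namely the empty-file hash):

```python
import hashlib
from pathlib import Path

# Hypothetical allowlist mapping approved model files (relative to the
# local model cache) to expected SHA-256 digests. Values are placeholders.
ALLOWLIST = {
    "granite-7b/model.safetensors": "e3b0c44298fc1c149afbf4c8996fb924"
                                    "27ae41e4649b934ca495991b7852b855",
}

def sha256_of(path: Path) -> str:
    """Stream the file in 1 MiB chunks to avoid loading weights into RAM."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(local_root: Path) -> list[str]:
    """Return allowlist entries that are missing or fail digest checks."""
    mismatches = []
    for rel, expected in ALLOWLIST.items():
        p = local_root / rel
        if not p.exists() or sha256_of(p) != expected:
            mismatches.append(rel)
    return mismatches
```

Running `verify` in CI or as a pre-train hook turns "trust the repo name" into "trust the bytes", which also catches a repository that is later tampered with under the same name.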