TheHackerWire — MLflow RCE via model artifact command injection (CVE-2025-15379)

AI relevance: MLflow is a critical ML lifecycle platform; RCE via model artifacts threatens AI supply chains and model serving infrastructure.

  • CVSS 10.0: CVE-2025-15379 is a maximum-severity command-injection vulnerability affecting MLflow 3.8.0
  • Attack vector: Malicious model artifacts containing poisoned python_env.yaml files
  • Root cause: Unsanitized shell command interpolation in _install_model_dependencies_to_env() function
  • Exploitation requirement: MLflow deployment configured with env_manager=LOCAL
  • Impact: Full remote code execution on MLflow servers during model deployment
  • Network exploitable: Remotely exploitable with no authentication required (AV:N, PR:N)
  • Timeline: Published March 30, 2026; affects MLflow version 3.8.0
  • Attack scenario: Attacker uploads poisoned model → MLflow processes dependencies → shell command injection executes arbitrary code
  • Metacharacter abuse: Dependency specifications in YAML can contain shell metacharacters like |, &, ; for command chaining
  • Container escape risk: Successful exploitation could escape MLflow's container environment
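
The injection pattern described above can be sketched in a few lines of Python. This is a hedged illustration, not MLflow's actual code: the dependency spec is invented, and `echo` stands in for `pip install` so the chained payload only prints a marker instead of doing damage.

```python
import shlex
import subprocess

# Hypothetical dependency spec lifted from a poisoned python_env.yaml.
# The "; echo INJECTED" suffix abuses the ';' shell metacharacter.
dep = "package==1.0; echo INJECTED"

# Vulnerable pattern: an untrusted spec interpolated into a shell string.
# The shell splits on ';' and runs the attacker's command as a second step.
out = subprocess.run(
    f"echo installing {dep}", shell=True, capture_output=True, text=True
)
print(out.stdout)  # second output line proves the chained command executed

# Safe pattern: shlex.quote() renders the metacharacters inert, so the
# whole spec is passed as one literal argument.
out2 = subprocess.run(
    f"echo installing {shlex.quote(dep)}", shell=True, capture_output=True, text=True
)
print(out2.stdout)  # payload stays literal text; nothing extra runs
```

The same neutralization applies to `|` and `&` chaining; better still is avoiding `shell=True` entirely and passing an argument list to the interpreter.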

Why it matters

  • ML lifecycle criticality: MLflow is widely used for managing ML experiments, model registry, and deployment pipelines
  • Supply chain attack surface: Model artifacts represent a new attack vector in ML supply chains
  • Trust boundary violation: Dependency installation is treated as a trusted step, yet here it consumes attacker-controlled input from the artifact itself
  • Automation risk: CI/CD pipelines that automatically deploy models become vulnerable to compromise
  • Credential exposure: RCE could expose cloud credentials, model weights, and training data
  • AI infrastructure impact: Compromised MLflow servers could affect downstream AI services and applications

What to do

  1. Upgrade immediately: Check your MLflow version and move to a patched release if running 3.8.0
  2. Audit model sources: Only deploy models from trusted sources and repositories
  3. Review deployment configs: Check whether env_manager=LOCAL is used in production and switch to an isolated environment manager where feasible
  4. Network segmentation: Isolate MLflow servers from sensitive network segments
  5. Monitor artifact processing: Implement logging for model deployment and dependency installation
  6. Container hardening: Run MLflow in minimal containers with reduced privileges
  7. Supply chain verification: Implement checksum verification for model artifacts
  8. Emergency response: Have incident response plans for ML infrastructure compromises
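
Step 7 can be prototyped with standard-library Python alone. This is a minimal sketch, assuming the trusted digest is recorded out of band (e.g., in a signed manifest or registry record the attacker cannot write to); the function names and the demo artifact are illustrative, not part of any MLflow API.

```python
import hashlib
import hmac
import tempfile

def sha256_of(path: str) -> str:
    """Stream a file through SHA-256 (suitable for multi-GB model artifacts)."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_artifact(path: str, expected_digest: str) -> bool:
    """Constant-time comparison against a digest recorded at training time."""
    return hmac.compare_digest(sha256_of(path), expected_digest)

# Demo: a temp file stands in for a downloaded model artifact.
with tempfile.NamedTemporaryFile(delete=False, suffix=".bin") as f:
    f.write(b"model-weights")
    artifact = f.name

trusted = hashlib.sha256(b"model-weights").hexdigest()
print(verify_artifact(artifact, trusted))   # untampered artifact passes
print(verify_artifact(artifact, "0" * 64))  # tampered/unknown artifact fails
```

Verification only helps if it gates deployment: reject the artifact before any dependency processing runs, since by the time python_env.yaml is parsed the injection window is already open.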

Sources