NVIDIA NVFlare CVE-2026-24178 — Critical Auth Bypass in Federated ML Training

AI relevance: NVIDIA NVFlare orchestrates distributed federated learning across GPU clusters. A CVSS 9.8 authentication bypass in its dashboard lets unauthenticated attackers inject poisoned model updates, exfiltrate training data, or hijack collaborative ML pipelines.

What happened

  • CVE-2026-24178 scores 9.8 (Critical) on the CVSS scale, affecting the NVIDIA NVFlare Dashboard's user management and authentication system.
  • The root cause is an authorization bypass triggered by manipulating a user-controlled key: the dashboard's validation logic fails to process this key properly, allowing access controls to be circumvented entirely.
  • An unauthenticated, network-reachable attacker can exploit the flaw, with no local access or prior credentials required, to achieve privilege escalation, data tampering, information disclosure, code execution, and denial of service.
  • The advisory does not specify affected versions, so every NVFlare Dashboard deployment should be considered at risk until patched.
  • NVFlare is NVIDIA's open-source framework for federated learning, used by healthcare, finance, and research organizations to train models across distributed datasets without centralizing sensitive data.

Why it matters

Federated learning deployments are specifically chosen when organizations cannot share raw training data — healthcare consortiums, banking partnerships, and multi-institutional research. The NVFlare Dashboard is the administrative control plane for these deployments. A critical auth bypass means:

  • Poisoned gradients: An attacker can inject malicious model updates that degrade or backdoor the final model, undermining the core promise of federated learning.
  • Training data inference: By manipulating the training pipeline, an attacker may be able to extract sensitive information from participant datasets through gradient inversion, a class of attack that reconstructs training examples from shared model updates.
  • Regulatory impact: Healthcare and financial organizations running NVFlare face compliance obligations (HIPAA, PCI-DSS) that are directly threatened by unauthorized pipeline access.
  • Supply-chain amplification: Compromised federated models can propagate to downstream inference deployments, spreading the impact beyond the training environment.
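The poisoned-gradient risk above can be illustrated with a toy example. This is not NVFlare code; it sketches how a single malicious participant skews a plain (unweighted) federated average, with made-up client updates:

```python
# Toy illustration (not NVFlare code): one malicious participant can drag
# a plain federated average arbitrarily far off course.

def federated_average(updates):
    """Coordinate-wise mean of client weight updates."""
    n = len(updates)
    return [sum(u[i] for u in updates) / n for i in range(len(updates[0]))]

# Four honest clients submit small, similar updates.
honest = [[0.1, -0.2], [0.12, -0.18], [0.09, -0.22], [0.11, -0.2]]
print(federated_average(honest))    # close to [0.105, -0.2]

# One attacker joins the round with a single oversized poisoned update.
poisoned = honest + [[50.0, 50.0]]
print(federated_average(poisoned))  # mean dragged far from the honest value
```

Robust aggregation (trimmed means, norm clipping) mitigates this, but only if the aggregator itself is trustworthy, which is exactly what a dashboard auth bypass undermines.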

What to do

  • Verify exposure: Check whether your organization runs NVFlare Dashboard. Look for the nvflare Python package or Docker containers exposing the dashboard service.
  • Restrict network access: At minimum, place NVFlare Dashboard behind a reverse proxy with strong authentication (OAuth, mTLS) and limit access to known admin IPs.
  • Monitor training integrity: If you operate federated learning pipelines, audit recent model updates for anomalies — unusual accuracy drops, gradient outliers, or unexpected weight changes may indicate tampering.
  • Watch for patches: Monitor NVIDIA's security advisories for a patched version. The lack of version-specific guidance in the initial advisory suggests this is an urgent, still-developing disclosure.
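The exposure check in the first step can be scripted. The sketch below tests whether the nvflare package is installed and whether anything is listening on candidate dashboard ports; the port list is an assumption, so confirm the actual port from your own deployment config:

```python
# Exposure-check sketch: is nvflare installed, and is a dashboard-like
# service listening locally? Ports 443/8443 are assumptions, not confirmed
# NVFlare defaults -- check your deployment's configuration.
import socket
from importlib.metadata import version, PackageNotFoundError

def nvflare_installed():
    """Return the installed nvflare version string, or None."""
    try:
        return version("nvflare")
    except PackageNotFoundError:
        return None

def port_open(host, port, timeout=1.0):
    """True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    v = nvflare_installed()
    print(f"nvflare package: {v or 'not installed'}")
    for port in (443, 8443):  # candidate dashboard ports (assumption)
        print(f"localhost:{port} open: {port_open('localhost', port)}")
```

Run this on each host in the federation; a listening dashboard port that is reachable from outside your admin network is the condition to remediate.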
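The training-integrity audit can start with a coarse statistical screen. The sketch below flags clients whose update magnitude is an outlier relative to the round's median; the norm heuristic and the threshold are assumptions for illustration, not NVFlare APIs:

```python
# Integrity-screen sketch: flag client updates whose L2 norm deviates from
# the round median by more than k median-absolute-deviations (MAD).
# The heuristic and k=3.0 threshold are illustrative assumptions.
import math
import statistics

def l2_norm(update):
    """Euclidean norm of a flat list of weight deltas."""
    return math.sqrt(sum(w * w for w in update))

def flag_outliers(updates, k=3.0):
    """Return indices of updates that are norm outliers for this round."""
    norms = [l2_norm(u) for u in updates]
    med = statistics.median(norms)
    mad = statistics.median(abs(n - med) for n in norms) or 1e-12
    return [i for i, n in enumerate(norms) if abs(n - med) > k * mad]
```

A screen like this catches crude magnitude-based poisoning; stealthier attacks keep norms in range, so treat it as one signal alongside held-out accuracy checks, not a complete defense.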
