CVE-2026-44246 — nnUNet GitHub Issue Triage Agent Vulnerable to Prompt Injection

AI relevance: The nnUNet medical imaging framework's automated GitHub issue triage uses a Claude-based AI agent that reads issue descriptions and comments as input — without sanitization — allowing any authenticated GitHub user to inject malicious instructions that manipulate the agent into performing unintended authenticated repository actions.

What happened

  • CVE-2026-44246 (CWE-1427: Agent Manipulation) was assigned to MIC-DKFZ's nnUNet framework, a widely-used open-source medical imaging pipeline for MRI and CT segmentation.
  • The project's GitHub Actions workflow (.github/workflows/issue-triage.yml) automatically processes new issues using a Claude-powered agent that reads issue titles, bodies, and comments to label and respond.
  • Any authenticated GitHub user can submit a crafted issue containing prompt injection instructions — the agent treats user-supplied content as executable directives rather than data.
  • Successfully injected prompts cause the agent to go beyond its intended scope: posting unauthorized comments, relabeling issues, and potentially performing other authenticated repository actions.
  • The vulnerability was fixed in nnUNet v2.4.1. No public advisory was published beyond the CVE record.

Why it matters

This is a concrete example of indirect prompt injection in a production AI agent with authenticated GitHub Actions access. The pattern — automated GitHub workflows that feed untrusted user content directly to AI agents without input sanitization — is proliferating across open-source and enterprise repositories alike. nnUNet processes medical imaging data; a compromised triage agent could mislabel clinical bug reports, suppress security disclosures, or inject false information into issue history that downstream researchers rely on.

The broader lesson: any AI-powered GitHub Action that reads PR titles, issue bodies, or comments is inherently exposed to indirect prompt injection unless the pipeline explicitly sanitizes or isolates untrusted content before passing it to the model.

What to do

  • Update nnUNet to v2.4.1+ if you use the GitHub issue triage workflow.
  • Audit your own GitHub Actions workflows: any step that pipes issue/PR content into an AI agent needs input sanitization (strip HTML comments, escape special delimiters, or use structured data only).
  • Apply the principle of least privilege to AI agent tokens — the GitHub Actions GITHUB_TOKEN should be scoped to read-only on issues if the agent only needs to classify, not modify.
  • Consider requiring manual approval for AI-generated comments on security-sensitive repositories.

Sources