PyTorch Lightning — PyPI Package Compromised in Mini Shai-Hulud Supply Chain Attack
AI relevance: PyTorch Lightning is a widely adopted deep learning training framework used by ML researchers and engineers — compromising it gives attackers direct access to GPU compute environments, cloud training credentials, and the model development pipelines of AI teams.
What happened
- The `lightning` PyPI package (PyTorch Lightning, 31,100+ GitHub stars) was compromised: malicious versions 2.6.2 and 2.6.3 were published on April 30, 2026. Version 2.6.1 is clean.
- PyPI administrators have quarantined the project. The attack is attributed to the Mini Shai-Hulud campaign, which previously compromised LiteLLM (March 24), Telnyx (March 27), Xinference, and SAP-related npm packages.
- The malicious package contains a hidden `_runtime` directory with a downloader and an obfuscated JavaScript payload that executes automatically on `import lightning`, requiring no additional user action.
- The attack chain runs a Python script (`start.py`) that downloads and executes the Bun JavaScript runtime, which then runs an 11 MB obfuscated payload (`router_runtime.js`) for comprehensive credential theft.
- Harvested GitHub tokens are validated against the GitHub API, then used to inject a worm-like payload into up to 50 branches per repository; each commit is authored under a hardcoded identity that impersonates Anthropic's Claude Code.
- The malware also implements an npm-based propagation vector: it modifies the developer's local `package.json` with a `postinstall` hook that invokes the malicious payload, enabling lateral spread through the Node.js ecosystem.
- A parallel compromise of the `intercom-client` npm package was detected in the same campaign.
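The indicators above can be checked locally. The following is a minimal triage sketch, not a definitive detector: the function names are mine, and the `_runtime` / `start.py` file paths are indicators taken from this report, so any hit is a lead for manual review rather than proof of compromise.

```python
from importlib import metadata

# Versions reported as malicious in this advisory.
COMPROMISED_VERSIONS = {"2.6.2", "2.6.3"}


def is_compromised_version(version: str) -> bool:
    """True if the version string matches a known-bad release."""
    return version in COMPROMISED_VERSIONS


def scan_distribution(name: str = "lightning") -> list[str]:
    """Return indicator strings found for an installed distribution."""
    findings: list[str] = []
    try:
        dist = metadata.distribution(name)
    except metadata.PackageNotFoundError:
        return findings  # not installed: nothing to flag
    if is_compromised_version(dist.version):
        findings.append(f"known-bad version installed: {name} {dist.version}")
    # `_runtime` / `start.py` are indicators described in this report
    # (assumed paths); legitimate packages may also ship a start.py.
    for f in dist.files or []:
        path = str(f)
        if "_runtime" in path.split("/") or path.endswith("start.py"):
            findings.append(f"suspicious packaged file: {path}")
    return findings


if __name__ == "__main__":
    for finding in scan_distribution():
        print(finding)
```

Run it in any environment that may have installed `lightning` during the malicious window; an empty result means neither the bad versions nor the reported file indicators were found in package metadata.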
Why it matters
- PyTorch Lightning is foundational infrastructure for training and deploying ML models — any developer or CI/CD pipeline importing the package during the malicious window had their environment compromised.
- The attacker gains access to cloud training credentials (AWS, GCP, Azure), model API keys, and GPU cluster tokens — enabling follow-on attacks on AI model training infrastructure.
- The worm-like GitHub repo poisoning impersonating Claude Code commits is a novel social engineering layer: poisoned commits appear to come from a trusted AI coding assistant.
- This is the latest escalation in the Mini Shai-Hulud campaign's systematic targeting of the AI/ML toolchain — each compromise expands the attacker's reach into adjacent parts of the ecosystem.
What to do
- Do not install `lightning` versions 2.6.2 or 2.6.3. Pin to version 2.6.1 or wait for a clean release from the maintainers.
- If you installed a compromised version, rotate all credentials present on the affected machine (GitHub tokens, cloud API keys, SSH keys, crypto wallets, npm tokens).
- Check your GitHub repositories for unauthorized commits authored as Claude Code; the attacker's hardcoded identity may have poisoned your branches.
- Audit local `package.json` files for unexpected `postinstall` scripts that may have been injected by the npm propagation vector.
- Monitor CI/CD logs for imports of the compromised package during the April 30 window.
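The `package.json` audit step can be sketched as a small script. This is an illustrative helper of my own construction, not tooling from the advisory: it only surfaces `postinstall` entries for manual review, since legitimate packages also use `postinstall` hooks.

```python
import json
from pathlib import Path


def find_postinstall_hooks(root: str) -> list[tuple[str, str]]:
    """Return (path, command) pairs for package.json files declaring postinstall."""
    hits: list[tuple[str, str]] = []
    for pkg in Path(root).rglob("package.json"):
        try:
            scripts = json.loads(pkg.read_text(encoding="utf-8")).get("scripts", {})
        except (json.JSONDecodeError, OSError):
            continue  # unreadable or malformed file: skip
        if isinstance(scripts, dict) and "postinstall" in scripts:
            hits.append((str(pkg), scripts["postinstall"]))
    return hits


if __name__ == "__main__":
    # Review each hit by hand; a postinstall hook is a lead, not a verdict.
    for path, cmd in find_postinstall_hooks("."):
        print(f"{path}: postinstall = {cmd!r}")
```

Pointing it at a workstation's project directories gives a quick inventory of every `postinstall` command that could have been modified by the propagation vector.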