OpenAI Breached in TanStack Supply Chain — Code-Signing Certificates Rotated, macOS Users Must Update
AI relevance: The Mini Shai-Hulud supply-chain campaign directly compromised developer workstations at a leading AI company, exposing the code-signing certificates used for the ChatGPT Desktop, Codex, and Atlas applications. AI company CI/CD pipelines are now prime targets for supply-chain attackers.
What happened
- OpenAI published an advisory confirming that two employee devices were breached during the Mini Shai-Hulud TanStack npm supply-chain attack, with unauthorized access to internal source code repositories.
- The company found credential-focused exfiltration activity consistent with the malware's publicly described behavior, including theft of GitHub tokens, npm publish tokens, AWS credentials, and SSH keys.
- OpenAI rotated all code-signing certificates for ChatGPT Desktop, Codex App, Codex CLI, and Atlas on macOS, Windows, iOS, and Android as a precaution.
- macOS users must update their OpenAI desktop apps before June 12, 2026; after that date, older versions may fail to launch because Apple's notarization checks reject binaries signed with the revoked certificates.
- Affected versions include ChatGPT Desktop 1.2026.125, Codex App 26.506.31421, Codex CLI 0.130.0, and Atlas 1.2026.119.1.
- OpenAI states no customer data, production systems, or intellectual property were accessed, and there is no evidence that stolen credentials were used in further attacks.
- The company isolated affected systems, revoked sessions, rotated repository credentials, temporarily restricted deployment workflows, and engaged a third-party incident response firm.
Why it matters
This is a textbook example of how supply-chain attacks on open-source tooling cascade into direct breaches at AI companies. The Mini Shai-Hulud malware modified Claude Code hooks and VS Code auto-run tasks to establish persistence that survives package removal, specifically targeting AI developer workflows. The fact that code-signing certificates for consumer-facing AI products were exposed highlights a critical dependency: AI companies' build infrastructure is only as secure as the open-source packages their developers install.
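To illustrate the persistence mechanism described above: VS Code supports tasks that run automatically whenever a folder is opened, via the documented `runOptions.runOn: "folderOpen"` field in `.vscode/tasks.json`. A sketch of what such a task looks like (the command shown is a hypothetical placeholder, not the actual payload):

```json
{
  "version": "2.0.0",
  "tasks": [
    {
      "label": "npm install",
      "type": "shell",
      "command": "node ~/.cache/.update.js",
      "runOptions": { "runOn": "folderOpen" }
    }
  ]
}
```

Because this file lives in the repository's `.vscode/` directory rather than in `node_modules/`, it persists after the malicious npm package itself is uninstalled, which is why removing the package alone is not sufficient remediation.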
What to do
- macOS users: Update all OpenAI desktop applications (ChatGPT Desktop, Codex App, Atlas) before June 12, 2026, to avoid notarization failures.
- AI/ML engineers: Audit your development machines for Claude Code hooks and VS Code auto-run tasks that you did not configure yourself; the malware uses both for persistence.
- Package maintainers: Review GitHub Actions workflows and CI/CD pipelines for OIDC token abuse vectors — this is how the attackers escalated from one project to hundreds.
- Security teams: Rotate any credentials that may have been exposed through compromised npm/PyPI packages in your dependency tree.
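The audit step above can be sketched as a small script. This is a minimal sketch, assuming the documented locations and field names: `.vscode/tasks.json` with `runOptions.runOn` set to `"folderOpen"`, and hooks registered under the `"hooks"` key of `~/.claude/settings.json` (event name, matcher list, then `{"type": "command", "command": ...}` entries). Any hit is a lead for manual review, not proof of compromise.

```python
"""Flag VS Code auto-run tasks and Claude Code hook commands for review."""
import json
from pathlib import Path


def autorun_tasks(tasks_json: dict) -> list[str]:
    """Return the label (or command) of every task that runs on folder open."""
    hits = []
    for task in tasks_json.get("tasks", []):
        if task.get("runOptions", {}).get("runOn") == "folderOpen":
            hits.append(task.get("label") or task.get("command", "<unnamed>"))
    return hits


def claude_hook_commands(settings_json: dict) -> list[str]:
    """Return every shell command registered as a Claude Code hook."""
    cmds = []
    for event, matchers in settings_json.get("hooks", {}).items():
        for matcher in matchers:
            for hook in matcher.get("hooks", []):
                if "command" in hook:
                    cmds.append(f"{event}: {hook['command']}")
    return cmds


def scan(root: Path) -> None:
    """Print leads found under `root` and in the user's Claude settings."""
    for tasks_file in root.rglob(".vscode/tasks.json"):
        try:
            hits = autorun_tasks(json.loads(tasks_file.read_text()))
        except (OSError, json.JSONDecodeError):
            continue  # unreadable or malformed file: skip, do not crash
        for hit in hits:
            print(f"{tasks_file}: auto-run task -> {hit}")

    claude_settings = Path.home() / ".claude" / "settings.json"
    if claude_settings.is_file():
        try:
            settings = json.loads(claude_settings.read_text())
        except (OSError, json.JSONDecodeError):
            return
        for cmd in claude_hook_commands(settings):
            print(f"{claude_settings}: hook -> {cmd}")
```

Calling `scan(Path.home())` walks your home directory and prints every auto-run task and hook command found; review each against what you actually configured.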