TeamPCP — Hackers Offer Stolen Mistral AI Source Code for $25K on BreachForums
AI relevance: Mistral AI is one of Europe's leading AI model developers; the theft of roughly 450 internal repositories (about 5 GB) raises the risk that model architectures, training infrastructure, or proprietary research could leak to competitors or be used for adversarial purposes.
What happened
The TeamPCP hacker group is offering nearly 450 Mistral AI repositories for $25,000 on a BreachForums thread. The hackers claim to have stolen approximately 5 GB of internal source code and repositories. The sale is exclusive — only one buyer will receive the data. TeamPCP also invited Mistral AI itself to purchase the stolen code, and stated that if no buyer is found within a week, everything will be leaked publicly on the forums for free.
Mistral AI has confirmed a security breach but has not disclosed what specific data was accessed or exfiltrated.
Context
- TeamPCP has been active since at least February 2026, running coordinated supply-chain attacks across npm, PyPI, GitHub, Docker Hub, and Open VSX.
- The group previously compromised the TanStack npm ecosystem, the Mistral AI PyPI package ecosystem, and SAP CAP packages as part of the Mini Shai-Hulud campaign.
- TeamPCP also released the Shai-Hulud worm source code publicly and launched a $1K BreachForums contest encouraging copycat supply-chain attacks.
- The group has targeted build systems and CI/CD pipelines to steal developer and cloud credentials at scale.
Why it matters
- Stolen AI company source code could contain model architectures, training data pipelines, evaluation frameworks, or proprietary safety research — all highly valuable to competitors or adversarial actors.
- This is part of a broader pattern where threat groups are specifically targeting AI/ML companies for intellectual property theft, not just financial gain.
- The exclusive-sale-plus-leak-deadline tactic creates pressure on the victim company while maximizing the attacker's potential return.
- If leaked, the source code could reveal security-relevant details about Mistral's internal infrastructure, potentially enabling further targeted attacks.
What to do
- Mistral AI customers should monitor for anomalous account and API activity and rotate any credentials that may have been stored in the compromised repositories (see the scanning sketch after this list).
- Organizations using Mistral models should assess whether leaked source code could expose integration details or API patterns specific to their deployments.
- This incident underscores the importance of securing AI company developer infrastructure with the same rigor applied to production model-serving systems.
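If a credential may have been committed to any of the affected repositories, a quick sweep of a local clone helps build the rotation list. The sketch below is a minimal illustration under assumed conditions: the regex patterns and directory layout are placeholders, not a complete secret-scanning tool, and a dedicated scanner that also covers git history will catch far more.

```python
# Minimal sketch: sweep a local repository clone for strings that look like
# credentials, so they can be added to a rotation list.
# The patterns here are illustrative assumptions; extend them with the key
# formats your organization actually uses.
import re
from pathlib import Path

PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic_api_key": re.compile(
        r"(?i)(api[_-]?key|secret)['\"]?\s*[:=]\s*['\"][^'\"]{16,}['\"]"
    ),
    "private_key_header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_repo(root: str) -> list[tuple[str, str, int]]:
    """Return (rule, file, line_number) hits for files under `root`."""
    hits = []
    for path in Path(root).rglob("*"):
        # Skip directories and git internals; read everything else as text.
        if not path.is_file() or ".git" in path.parts:
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for lineno, line in enumerate(text.splitlines(), start=1):
            for rule, pattern in PATTERNS.items():
                if pattern.search(line):
                    hits.append((rule, str(path), lineno))
    return hits

if __name__ == "__main__":
    for rule, path, lineno in scan_repo("."):
        print(f"{rule}: {path}:{lineno}")
```

Anything such a sweep flags should be treated as already compromised and rotated, not merely deleted from the working tree, since the attacker holds the repository contents; the sketch only inspects current files, so git history deserves a separate pass with a purpose-built tool.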