Google GTIG — AI Threat Tracker: adversarial use update
AI relevance: The GTIG report documents real-world misuse of LLMs by state-backed actors for reconnaissance, phishing, and malicious tooling development, along with attempts to abuse AI services themselves.
- GTIG reports that, as of late 2025, attackers are increasingly integrating AI to accelerate reconnaissance, social engineering, and malware/tooling development.
- State-backed actors from DPRK, Iran, PRC, and Russia used LLMs for technical research, targeting, and rapid phishing lure generation.
- AI-augmented phishing includes multi-turn “rapport-building” conversations and fast OSINT-driven target profiling.
- GTIG observed the COINBAIT phishing kit, likely built with AI-assisted code generation, which impersonates a cryptocurrency exchange to steal credentials.
- Google and DeepMind also observed frequent model-extraction (“distillation”) attempts and report actively disrupting this abuse.
- GTIG notes no breakthrough attacker capabilities yet, but steady integration of AI into real campaigns.
Why it matters
- This moves AI misuse from demos to operations: phishing and recon are already being scaled with LLMs.
- Model extraction attempts threaten the IP and safety controls of AI services.
- Defenders need AI-specific telemetry and abuse prevention, not just traditional email security.
What to do
- Add AI misuse scenarios (phishing, recon, tooling) to threat models and red-team exercises.
- Harden model endpoints with abuse monitoring, rate limits, and anomaly detection for extraction patterns.
- Train staff on AI-generated phishing and multi-turn social engineering tactics.
- Monitor public footprint (OSINT) that can feed AI-augmented targeting.
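The endpoint-hardening step above can be sketched in code. This is a minimal illustration, not GTIG's or Google's detection logic: it assumes a hypothetical API gateway that sees (api_key, prompt) pairs, and combines a per-key sliding-window rate limit with a crude extraction heuristic (sustained volume plus low prompt diversity, which systematic distillation-style querying tends to produce). All names and thresholds are invented for the example.

```python
# Sketch of per-key abuse monitoring for a model endpoint.
# Thresholds are illustrative only; production systems would tune them
# and use far richer signals (embeddings, account age, IP reputation, etc.).
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60
MAX_REQUESTS_PER_WINDOW = 30       # simple per-key rate limit
EXTRACTION_MIN_REQUESTS = 20       # volume required before scoring a key
EXTRACTION_MAX_DIVERSITY = 0.3     # low lexical diversity -> systematic querying

class AbuseMonitor:
    def __init__(self):
        # api_key -> deque of (timestamp, prompt) within the sliding window
        self.history = defaultdict(deque)

    def _prune(self, key, now):
        q = self.history[key]
        while q and now - q[0][0] > WINDOW_SECONDS:
            q.popleft()

    def allow(self, key, prompt, now=None):
        """Return (allowed, flagged_for_extraction) for one request."""
        now = time.time() if now is None else now
        self._prune(key, now)
        q = self.history[key]
        q.append((now, prompt))
        allowed = len(q) <= MAX_REQUESTS_PER_WINDOW
        flagged = False
        if len(q) >= EXTRACTION_MIN_REQUESTS:
            # Diversity metric: share of unique tokens across recent prompts.
            # Templated, enumerating prompts (a distillation pattern) score low.
            tokens = [t for _, p in q for t in p.lower().split()]
            diversity = len(set(tokens)) / max(len(tokens), 1)
            flagged = diversity < EXTRACTION_MAX_DIVERSITY
        return allowed, flagged

# Usage: a client hammering one prompt template trips the heuristic,
# while varied organic traffic does not.
mon = AbuseMonitor()
for i in range(25):
    allowed, flagged = mon.allow("key1", f"translate to french: sample {i}", now=float(i))
```

The design choice worth noting: rate limits alone miss slow, distributed extraction, which is why GTIG-style guidance pairs them with pattern anomaly detection rather than relying on either signal by itself.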