iProov Threat Intelligence Report — 1,151% Surge in iOS Deepfake Injection Attacks
AI relevance: Deepfake injection attacks against identity verification systems have surged 1,151% on iOS, as attackers use generative AI to mass-produce synthetic identities that bypass biometric onboarding, KYC, and remote identity checks at industrial scale.
iProov released its Threat Intelligence Report 2026, documenting how generative AI has transformed deepfake attacks from targeted social engineering into scalable infrastructure for identity fraud.
Key findings
- 1,151% surge in iOS-targeted injection attacks: Attackers feed synthetic video or biometric data directly into the device's camera stream, bypassing the physical camera entirely, to defeat liveness detection and identity verification systems.
- 720% spike in Southeast Asia (Q3 2025): The region has become a testing ground for new techniques including virtual camera hacks and stolen identity data — methods that are subsequently deployed globally once proven effective.
- 41% of companies experienced deepfake attacks targeting executives: Per Ponemon Institute research cited in the report, deepfake impersonation of leadership is now a routine business risk.
- 37% of cybersecurity leaders encountered deepfakes in video calls: Per Gartner research, deepfake incidents during business video conferences are no longer rare events.
- Low-barrier tools fueling the surge: Platforms such as Kling AI can create realistic video deepfakes from just a few source images, dramatically lowering the technical expertise required.
- Attacks spreading beyond social media: Deepfakes are now targeting video conferencing, identity verification, online transactions, and secure platform access — moving from novelty to operational infrastructure.
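The virtual-camera vector behind the injection surge can be partially screened for by inspecting the reported capture device. A minimal, hypothetical sketch, assuming a `device_name` string is available to the verification client; the signature blocklist is illustrative and not from the report:

```python
# Hypothetical heuristic: flag capture devices whose reported name matches
# known virtual-camera software. Real injection detection requires deeper
# signal- and integrity-level analysis; this only illustrates the idea.

VIRTUAL_CAMERA_SIGNATURES = (
    "obs virtual camera",
    "manycam",
    "snap camera",
    "virtual cam",
)

def looks_like_virtual_camera(device_name: str) -> bool:
    """Return True if the device name matches a known virtual-camera signature."""
    name = device_name.strip().lower()
    return any(sig in name for sig in VIRTUAL_CAMERA_SIGNATURES)
```

Device names are trivially spoofable, so this is one weak signal among many; the report's broader point is that attackers increasingly bypass the camera pipeline altogether, which name checks cannot catch.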
Why it matters
The shift from "deepfake as a trick" to "deepfake as identity attack infrastructure" has critical implications for AI security teams. As organizations deploy AI-powered identity verification, remote onboarding, and biometric authentication, they are building the exact systems that deepfake injection attacks target. The 1,151% iOS injection surge suggests that attackers are not just creating convincing deepfakes — they are developing systematic methods to bypass the liveness detection and camera-integrity checks that verification systems depend on. For teams operating AI verification pipelines or deploying agent-based identity workflows, this is a direct threat to core infrastructure.
What to do
- Audit identity verification flows for camera injection vulnerabilities — virtual camera hijacking is the primary vector on mobile platforms.
- Implement multi-modal liveness detection that combines facial analysis with behavioral signals, not just passive image analysis.
- Monitor for Southeast Asia-sourced attack patterns, which serve as early indicators for techniques likely to spread globally.
- Review the full iProov report (link) for specific injection technique classifications and mitigation guidance.
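The multi-modal liveness recommendation above can be sketched as a simple score-fusion step. This is a hypothetical illustration, not iProov's method; the signal names, weights, and thresholds are assumptions chosen for clarity:

```python
# Hypothetical fusion of two liveness signals: a passive facial-analysis
# score and an active behavioral-challenge score (e.g. response to a
# randomized prompt). Weights and thresholds are illustrative only.

def fuse_liveness_scores(
    facial_score: float,      # passive image-analysis liveness, 0.0 to 1.0
    behavioral_score: float,  # behavioral-challenge liveness, 0.0 to 1.0
    facial_weight: float = 0.6,
    floor: float = 0.3,
    threshold: float = 0.75,
) -> bool:
    """Return True if the fused score passes the liveness threshold.

    Enforcing a per-signal floor prevents one strong modality from
    masking a failed one, a pattern injection attacks often exploit.
    """
    if min(facial_score, behavioral_score) < floor:
        return False  # either signal failing outright is a hard reject
    fused = facial_weight * facial_score + (1 - facial_weight) * behavioral_score
    return fused >= threshold
```

The per-signal floor is the design point worth noting: an injected video may produce a near-perfect passive facial score while failing the behavioral challenge, so pure weighted averaging would be too forgiving.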