Material Security — Unmanaged OAuth Grants from AI Tools Create Persistent Attack Surface
AI relevance: Every AI agent, coding assistant, and analytics tool that employees connect to Google Workspace or Microsoft 365 receives a persistent OAuth grant that never expires on its own, is never automatically cleaned up, and is invisible to most perimeter controls, creating a systemic AI-ops identity risk.
What the research shows
- 80% of security leaders consider unmanaged OAuth grants a critical or significant risk, according to new research from Material Security.
- 45% of organizations do nothing to monitor OAuth grants at scale; 33% rely on manual processes (spreadsheets, ad-hoc reviews).
- OAuth grants don't expire when employees leave, don't reset on password changes, and bypass MFA entirely — the attacker presents a previously authorized token, not credentials.
- The Salesloft Drift incident (threat actor UNC6395, tracked by Google Threat Intelligence Group) demonstrates the real-world impact: stolen OAuth refresh tokens from a trusted sales engagement platform gave attackers access to Salesforce environments across 700+ organizations, including Cloudflare and PagerDuty.
- Attackers systematically exported data and harvested downstream credentials — AWS access keys, Snowflake tokens, and passwords — from within the compromised Salesforce instances.
- The attack used a legitimate, previously trusted integration: from any perimeter control's perspective, nothing looked wrong.
Why it matters for AI operations
The proliferation of AI tools — coding assistants (Copilot, Cursor, Claude Code), analytics platforms (Context.ai and dozens more), and workflow automations — has massively expanded the OAuth grant surface. Each tool receives scoped but persistent tokens that operate independently of the identity provider's security controls. When one of those tools is compromised (as in the Drift and Vercel incidents), the blast radius spans every organization that granted it access. Traditional app-installation-time checks — reviewing permission scopes, flagging low-reputation vendors — catch nothing when a trusted app's own credentials are later stolen.
What to do
- Inventory all OAuth grants across Google Workspace and Microsoft 365 — treat this as a recurring audit, not a one-time review (a minimal inventory sketch follows this list).
- Monitor token behavior, not just scopes — API call patterns and data access volumes reveal compromise even for previously trusted apps (see the audit-log sketch after this list).
- Revoke unused grants immediately — any AI tool no longer actively in use should have its OAuth permissions withdrawn (see the revocation sketch after this list).
- Implement automated OAuth grant management — spreadsheets don't scale; deploy tooling that continuously monitors, alerts, and revokes.
- Apply least-privilege scoping — when connecting AI tools, grant only the specific permissions required, not blanket workspace access.
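One way to make the inventory step repeatable is sketched below. On the Google Workspace side, the Admin SDK Directory API exposes a tokens resource that lists every third-party OAuth grant a user has issued. This is a minimal sketch, assuming a service account with domain-wide delegation, a placeholder key file (sa-key.json), and a placeholder admin address (admin@example.com); Microsoft 365 has a rough equivalent in Microsoft Graph's oauth2PermissionGrants collection, not shown here.

```python
import csv

from google.oauth2 import service_account
from googleapiclient.discovery import build

# Assumptions: a service account with domain-wide delegation, impersonating a
# Workspace admin. The key file name and admin address are placeholders.
SCOPES = [
    "https://www.googleapis.com/auth/admin.directory.user.readonly",
    "https://www.googleapis.com/auth/admin.directory.user.security",
]
creds = service_account.Credentials.from_service_account_file(
    "sa-key.json", scopes=SCOPES
).with_subject("admin@example.com")

directory = build("admin", "directory_v1", credentials=creds)

def iter_users():
    """Yield every user in the domain, following pagination."""
    page_token = None
    while True:
        resp = directory.users().list(
            customer="my_customer", maxResults=500, pageToken=page_token
        ).execute()
        yield from resp.get("users", [])
        page_token = resp.get("nextPageToken")
        if not page_token:
            break

# Dump every user's third-party OAuth grants to a CSV for review and diffing.
with open("oauth_grants.csv", "w", newline="") as fh:
    writer = csv.writer(fh)
    writer.writerow(["user", "app", "client_id", "scopes"])
    for user in iter_users():
        email = user["primaryEmail"]
        tokens = directory.tokens().list(userKey=email).execute()
        for t in tokens.get("items", []):
            writer.writerow([
                email,
                t.get("displayText"),
                t.get("clientId"),
                " ".join(t.get("scopes", [])),
            ])
```

Run on a schedule and diffed against the previous output, this turns a one-time review into the recurring audit the first bullet calls for.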
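For the behavior-monitoring bullet, the OAuth Token audit log (the token application in the Admin SDK Reports API) records per-app events after consent, including ongoing API activity. A sketch under the same delegated-credentials assumption, extended with the admin.reports.audit.readonly scope, that counts recent events per app so a previously quiet integration that suddenly generates heavy activity stands out:

```python
from collections import Counter

from googleapiclient.discovery import build

# Assumes `creds` from the inventory sketch, with the additional scope
# https://www.googleapis.com/auth/admin.reports.audit.readonly granted.
reports = build("admin", "reports_v1", credentials=creds)

activity_by_app = Counter()
page_token = None
while True:
    resp = reports.activities().list(
        userKey="all",
        applicationName="token",   # the OAuth Token audit log
        maxResults=1000,
        pageToken=page_token,
    ).execute()
    for item in resp.get("items", []):
        for event in item.get("events", []):
            params = {p["name"]: p.get("value") for p in event.get("parameters", [])}
            app = params.get("app_name") or "unknown"
            name = event.get("name") or ""
            activity_by_app[(app, name)] += 1
    page_token = resp.get("nextPageToken")
    if not page_token:
        break

# A spike in "activity" events for an app that was previously quiet, or whose
# grant should have been dormant, is the signal worth alerting on.
for (app, name), count in activity_by_app.most_common(20):
    print(f"{count:8d}  {name:12s}  {app}")
```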
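Revocation itself is a single Directory API call per user and client ID, which is what makes automated grant management practical rather than a spreadsheet exercise. A sketch that reuses directory and iter_users from the inventory example to strip a no-longer-used app from every user who still holds a grant to it; the client ID is a placeholder you would take from the inventory CSV:

```python
from googleapiclient.errors import HttpError

# Placeholder: the OAuth client ID of an AI tool that is no longer in use,
# taken from the inventory produced above. `directory` and `iter_users` are
# reused from the inventory sketch.
STALE_CLIENT_ID = "1234567890-example.apps.googleusercontent.com"

revoked = 0
for user in iter_users():
    email = user["primaryEmail"]
    try:
        directory.tokens().delete(
            userKey=email, clientId=STALE_CLIENT_ID
        ).execute()
        revoked += 1
        print(f"revoked {STALE_CLIENT_ID} for {email}")
    except HttpError as err:
        # A 404 just means this user never granted the app; anything else is real.
        if err.resp.status != 404:
            raise

print(f"revoked grants for {revoked} users")
```

The same loop, driven by the inventory diff and the audit-log counts above, is the skeleton of the continuous monitor, alert, and revoke tooling the fourth bullet recommends.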