Keyfactor — Two-thirds of enterprises say AI agents are a bigger security risk than humans

• Category: Security

AI relevance: AI agents that autonomously access systems, invoke APIs, and interact with other agents require cryptographic identity and credential governance — without it, every deployed agent is an unauditable, unrevocable insider with no kill switch.

  • 69% of cybersecurity professionals say vulnerabilities in AI agents and autonomous systems are a greater threat than human misuse of AI, per Keyfactor's Digital Trust Digest: AI Identity Edition (450 respondents, North America and Europe, companies with 1,000+ employees).
  • 86% agree AI agents cannot be fully trusted without unique, dynamic digital identities — yet most organizations lack the PKI and credential infrastructure to issue, rotate, and revoke agent identities at scale.
  • Only 28% believe they can actually prevent a rogue agent from causing damage. The gap between risk awareness and operational readiness is the widest the report has measured for any identity class.
  • 55% of security leaders say their C-suite is not taking agentic AI risks seriously enough, creating a board-level recognition-action gap where budgets and staffing lag behind the threat model.
  • 85% expect digital identities for AI agents to become as common as human and machine identities within five years, but the infrastructure to manage them doesn't yet exist in most environments.
  • Vibe-coding blind spot: 68% of organizations lack full visibility or governance over AI-generated code contributions, meaning AI assistants are writing production code without cryptographic provenance, auditable attribution, or enforceable identity boundaries.
  • Keyfactor argues every AI contribution should carry a cryptographic fingerprint, every code path should have auditable provenance, and every agent should operate with revocable credentials — applying code-signing and PKI principles to agentic workflows.

Why it matters

  • This is the first large-scale quantitative survey that puts hard numbers on the AI agent identity governance gap. Prior reporting from Cyata, Nudge Security, and others was qualitative — this data shows the problem is industry-wide.
  • The 28% "can stop a rogue agent" figure is stark: nearly three-quarters of respondents don't believe they could shut down or trace a rogue agent if something went wrong. That's not a theoretical risk; it's a live incident-response gap.
  • The vibe-coding angle connects AI agent security to software supply-chain integrity: if 68% of orgs can't attribute AI-generated code, poisoned contributions can enter production without detection.

What to do

  • Issue unique identities per agent: treat each AI agent as a distinct identity with its own certificate or credential; never share service accounts across agents. (A certificate-issuance sketch follows this list.)
  • Implement revocation infrastructure: ensure every agent credential can be revoked instantly and independently. If you can't kill an agent's access in under a minute, your incident response plan has a gap. (See the revocation sketch below.)
  • Enforce code provenance: require cryptographic signing for AI-generated code commits, and attribute every contribution to the specific agent and model version that produced it. (See the signing sketch below.)
  • Brief the board: the 55% C-suite awareness gap means security teams need to translate agentic identity risk into business language — rogue agents aren't an abstract threat, they're unauditable insiders with API access.
  • Plan for scale: if 85% expect agent identities to be as common as human identities within five years, start PKI capacity planning now.
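
On the per-agent identity item: a minimal sketch, assuming an internal CA and Python's `cryptography` package, that issues each agent a unique, short-lived X.509 certificate with a SPIFFE-style URI SAN. The `spiffe://example.org` trust domain, the eight-hour lifetime, and the function name are illustrative assumptions, not details from the Keyfactor report.

```python
# Sketch: mint a unique, short-lived certificate per AI agent from an internal CA.
import datetime
import uuid

from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.x509.oid import NameOID


def issue_agent_cert(ca_cert: x509.Certificate, ca_key, agent_name: str):
    """Issue a distinct keypair and certificate for one agent instance."""
    agent_key = ec.generate_private_key(ec.SECP256R1())
    agent_id = f"{agent_name}-{uuid.uuid4()}"  # unique per agent instance

    now = datetime.datetime.now(datetime.timezone.utc)
    cert = (
        x509.CertificateBuilder()
        .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, agent_id)]))
        .issuer_name(ca_cert.subject)
        .public_key(agent_key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(hours=8))  # short-lived by design
        # SPIFFE-style URI SAN so services authorize the agent's identity, not a shared account
        .add_extension(
            x509.SubjectAlternativeName(
                [x509.UniformResourceIdentifier(f"spiffe://example.org/agent/{agent_id}")]
            ),
            critical=False,
        )
        .sign(ca_key, hashes.SHA256())
    )
    return agent_key, cert
```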
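On revocation: a minimal sketch, under the same assumptions, of a CA that republishes a short-validity CRL and a gateway check that consults it before honoring any agent call. The function names and the five-minute `next_update` window are assumptions; production deployments often pair CRLs with OCSP, or lean on the short certificate lifetimes above so a kill takes effect within minutes even if a revocation push fails.

```python
# Sketch: fast per-agent revocation via a frequently republished CRL.
import datetime

from cryptography import x509
from cryptography.hazmat.primitives import hashes


def build_crl(ca_cert: x509.Certificate, ca_key, revoked_serials):
    """Publish a fresh CRL listing every revoked agent certificate."""
    now = datetime.datetime.now(datetime.timezone.utc)
    builder = (
        x509.CertificateRevocationListBuilder()
        .issuer_name(ca_cert.subject)
        .last_update(now)
        .next_update(now + datetime.timedelta(minutes=5))  # forces frequent refresh
    )
    for serial in revoked_serials:
        builder = builder.add_revoked_certificate(
            x509.RevokedCertificateBuilder()
            .serial_number(serial)
            .revocation_date(now)
            .build()
        )
    return builder.sign(ca_key, hashes.SHA256())


def is_revoked(crl: x509.CertificateRevocationList, agent_cert: x509.Certificate) -> bool:
    """Gateway-side check to run before honoring any agent API call."""
    return crl.get_revoked_certificate_by_serial_number(agent_cert.serial_number) is not None
```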
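And on provenance: a minimal sketch of binding a code diff to the agent and model version that produced it with an Ed25519 signature. The manifest fields and the sign/verify flow are illustrative assumptions; real pipelines would more likely use Sigstore or signed Git commits, which the report does not specify.

```python
# Sketch: attach a verifiable fingerprint to an AI-generated code contribution.
import hashlib
import json

from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def sign_contribution(agent_key: Ed25519PrivateKey, diff: bytes,
                      agent_id: str, model_version: str) -> dict:
    """Bind a code diff to the specific agent and model version that wrote it."""
    manifest = {
        "sha256": hashlib.sha256(diff).hexdigest(),
        "agent_id": agent_id,
        "model_version": model_version,
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    return {"manifest": manifest, "signature": agent_key.sign(payload).hex()}


def verify_contribution(public_key: Ed25519PublicKey, diff: bytes, record: dict) -> None:
    """CI-side check: reject the commit if the hash or signature doesn't match."""
    if record["manifest"]["sha256"] != hashlib.sha256(diff).hexdigest():
        raise ValueError("diff hash does not match signed manifest")
    payload = json.dumps(record["manifest"], sort_keys=True).encode()
    public_key.verify(bytes.fromhex(record["signature"]), payload)  # raises InvalidSignature
```

Gating CI on a check like `verify_contribution` would make unattributed AI-generated code fail the build rather than land silently, which is the enforcement point the 68% visibility gap is missing.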

Links