Veza — Access Agents for AI identity governance
AI relevance: Veza’s Access Agents and AI Agent Security features target identity and permission risks created by autonomous agents and MCP-connected tools; these risks are common sources of tool abuse and data-exfiltration paths in agent deployments.
- Veza introduced Access Agents, a set of AI agents that automate identity governance tasks using the company’s Access Graph.
- The agents include a prompt-based interface for risk discovery, a search agent for permission graph exploration, and a review agent to prioritize high-risk access decisions.
- Veza says the agents run on Amazon Bedrock and dynamically choose models based on the task’s reasoning needs.
- Updates to AI Agent Security expand tool discovery beyond MCP servers to individual tool actions and the data resources those actions can reach.
- New features include suggested human ownership for “shadow AI” identities, blast-radius visualization for agent permissions, and AI security posture management (AISPM) mapping to NIST’s AI Risk Management Framework (AI RMF).
- The release emphasizes identity governance for AI agents as a prerequisite for safe, scalable deployment across SaaS and cloud environments.
Why it matters
- Agent deployments often fail at the identity layer — unmanaged service accounts and over-scoped permissions create hidden paths for prompt injection to become real-world access.
- Tool-level discovery and blast-radius mapping help security teams quantify what an agent can actually do before incident response is required.
What to do
- Inventory AI agent identities and tool permissions across SaaS, cloud, and MCP servers; treat agent access like privileged IAM.
- Map action-level blast radius for each agent before enabling autonomous workflows, especially for write or admin actions.
- Assign accountable owners to every agent identity and service account to avoid “shadow AI” sprawl.
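To make the blast-radius recommendation concrete, the sketch below models agent access as a directed graph (identity → tool → action → resource) and computes which resources an agent can reach, separating out paths that pass through write or admin actions. The graph data, identity names, and privilege labels are hypothetical illustrations; Veza’s Access Graph is a commercial product with its own schema and APIs, which this does not reproduce.

```python
from collections import deque

# Hypothetical permission graph: identity -> tool -> action -> resource.
# Each key maps a node to the nodes it can reach.
EDGES = {
    "agent:support-bot": ["tool:crm", "tool:mail"],
    "tool:crm": ["action:crm.read", "action:crm.update"],
    "tool:mail": ["action:mail.send"],
    "action:crm.read": ["resource:customer-records", "resource:audit-log"],
    "action:crm.update": ["resource:customer-records"],
    "action:mail.send": ["resource:outbound-email"],
}

# Privilege level per action node, used to flag write/admin paths.
PRIVILEGE = {
    "action:crm.read": "read",
    "action:crm.update": "write",
    "action:mail.send": "write",
}

def blast_radius(identity):
    """Return (all reachable resources, resources reachable via write/admin)."""
    reachable, risky = set(), set()
    seen = set()
    # Each queue entry tracks whether the path so far crossed a write/admin action.
    queue = deque([(identity, False)])
    while queue:
        node, elevated = queue.popleft()
        if (node, elevated) in seen:
            continue
        seen.add((node, elevated))
        if node.startswith("resource:"):
            reachable.add(node)
            if elevated:
                risky.add(node)
            continue
        if node.startswith("action:"):
            elevated = elevated or PRIVILEGE.get(node) in ("write", "admin")
        for nxt in EDGES.get(node, []):
            queue.append((nxt, elevated))
    return reachable, risky

all_res, write_res = blast_radius("agent:support-bot")
print(sorted(all_res))   # everything the agent can touch
print(sorted(write_res)) # what it can touch through write/admin actions
```

In this toy graph the agent reaches three resources overall but only two through write-capable actions, which is exactly the distinction that matters when deciding whether an autonomous workflow is safe to enable.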