SecureAuth — Agent Trust Registry Opens to the Public for AI Agent Governance
AI relevance: SecureAuth's Agent Trust Registry provides the first vendor-neutral security evaluations of enterprise AI agents, giving CISOs a way to assess agent trust posture before deployment — addressing the governance gap as agents proliferate with broad data access across Salesforce, HR systems, and internal file stores.
- SecureAuth opened its Agent Trust Registry to the public on April 29, 2026 — described as the industry's first open, vendor-neutral directory of AI agent security evaluations.
- For each listed agent, the Registry surfaces verified identity posture, trust scores, governance metadata, and deployment recommendations — giving security teams an independent assessment before approving agents for enterprise use.
- The announcement aligns with Gravitee's State of AI Agent Security 2026 Report, which found only 14.4% of AI agents go live with full security approval and 88% of enterprises have experienced AI agent-related security incidents.
- SecureAuth CEO Geoff Mattson highlighted the core architectural problem: LLM architectures intermingle the data and control layers, meaning malicious instructions in documents, emails, or data feeds can hijack agent behavior — the prompt injection attack class.
- The Registry is positioned alongside community initiatives like Anthropic's Project Glasswing and Mythos, emphasizing open, collaborative defense against emerging AI threats.
- As agents gain broader access to enterprise datasets — Salesforce, HR systems, internal file stores — SecureAuth warns that "there is no security layer sitting between these agents and those systems."
Why it matters
The agent trust problem mirrors the early days of application security — organizations deployed software without standardized security assessments. A public, vendor-neutral registry of AI agent security evaluations gives enterprises a baseline for due diligence. While registry scores are only as useful as their methodology, this represents the first coordinated attempt to create transparency around agent security posture across the industry.
What to do
- Review the Agent Trust Registry evaluations for any AI agents currently deployed or under consideration in your organization.
- Map AI agent access to sensitive data stores and verify that authorization policies exist for each agent-to-system connection.
- Assess whether your organization has a formal AI agent approval process — 85.6% of agents currently lack one, per Gravitee's findings.
- Evaluate prompt injection mitigations for any agent that ingests external content (documents, emails, web pages).