Manifold — $8M seed to secure autonomous AI agents at runtime

AI relevance: Coding agents and autonomous AI workers on endpoints evade traditional EDR because their legitimate activity (spawning shells, calling APIs, reading codebases) is hard to distinguish from attacker tradecraft, so it either drowns analysts in false positives or gets carved out with exceptions — a gap Manifold aims to fill with agent-specific runtime detection.

  • Manifold closed an $8M seed led by Costanoa Ventures, with participation from Cherry Ventures, Rain Capital, and angel investors including ex-Uber CSO Joe Sullivan and ex-DeepMind CISO Vijay Bolina.
  • The founders previously built LLM Guard, the most widely adopted open-source LLM firewall, at Laiyer AI (acquired by Protect AI, later part of Palo Alto Networks).
  • Manifold's core thesis: first-gen AI security (guardrails, classifiers, gateways) monitors text at the inference point and is blind to agentic actions — tool calls, filesystem access, CI/CD interactions.
  • The platform provides runtime visibility into every agent in an environment — which tools are called, which systems are accessed, which MCP servers are connected to — with anomaly detection layered on top.
  • Key market signal: 85% of developers already use coding agents (Copilot, Claude Code, Cursor), and agent adoption is expanding to every knowledge worker via Claude Cowork, OpenClaw, and similar tools.
  • Developers already represent an EDR blind spot — their legitimate activity routinely gets security exceptions, and agents inherit those broad permissions.
  • Manifold is agentless, deploys in days, and uses existing infrastructure — no proxies or gateways required.
  • The company positions "Agentic AI Detection and Response" (AIDR) as a new category distinct from traditional endpoint security.

Why it matters

  • Agent tool ecosystems (MCP servers, skills, plugins) are expanding faster than security controls. A single compromised agent action can cascade across production systems.
  • Natural-language guardrails produce high false-positive rates and miss behavioral anomalies in agent execution — the exact gap Manifold targets.
  • The $8M seed and heavyweight investor roster signal that agentic endpoint security is being treated as enterprise infrastructure, not a research project.
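The behavioral-baselining idea behind agent runtime detection can be illustrated with a toy sketch. This is not Manifold's implementation; the `AgentBaseline` class and the tool names are invented for illustration. The core move is to learn which tools an agent normally calls, then flag calls outside that baseline — something prompt/output-level guardrails never see.

```python
from collections import Counter

class AgentBaseline:
    """Toy behavioral baseline over agent tool calls.

    Illustrative only: a real AIDR product would correlate far richer
    signals (process lineage, file paths, network destinations, MCP
    server identity) rather than bare (agent, tool) pairs.
    """

    def __init__(self):
        self.seen = Counter()  # (agent, tool) -> observed call count

    def learn(self, agent, tool):
        """Record a tool call observed during the baselining window."""
        self.seen[(agent, tool)] += 1

    def is_anomalous(self, agent, tool):
        """Flag any tool call never observed for this agent at baseline."""
        return self.seen[(agent, tool)] == 0


baseline = AgentBaseline()
# Baselining window: the coding agent normally reads files and runs tests.
for tool in ["read_file", "run_tests", "read_file"]:
    baseline.learn("coding-agent", tool)

print(baseline.is_anomalous("coding-agent", "read_file"))       # False
print(baseline.is_anomalous("coding-agent", "exfiltrate_env"))  # True
```

A text-level guardrail inspecting the same agent's prompts would have nothing to flag here; the anomaly only exists at the level of actions taken.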

What to do

  • Audit agent access: Inventory which agents run on endpoints, what permissions they hold, and what MCP servers/tools they connect to.
  • Treat agents as privileged identities: Apply least-privilege and segment agent access to production, source code, and CI/CD pipelines.
  • Evaluate runtime monitoring: If you run coding agents or autonomous workers at scale, assess tools that provide behavioral visibility beyond prompt/output logging.
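As a starting point for the inventory step above, here is a minimal sketch that lists MCP servers declared in a JSON config file. The candidate path and the top-level `mcpServers` key layout are assumptions for illustration — adjust both to the agent tooling actually deployed in your environment.

```python
import json
import tempfile
from pathlib import Path

# Hypothetical config location; real deployments should enumerate the
# config paths of every agent product in use.
CANDIDATE_CONFIGS = ["~/.config/example-agent/config.json"]

def list_mcp_servers(path):
    """Return MCP server names declared in a JSON config, if any.

    Assumes the common convention of a top-level "mcpServers" mapping;
    returns an empty list for missing or unparseable files.
    """
    p = Path(path).expanduser()
    if not p.is_file():
        return []
    try:
        data = json.loads(p.read_text())
    except (json.JSONDecodeError, OSError):
        return []
    return sorted(data.get("mcpServers", {}).keys())

# Demo against a synthetic config file.
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump({"mcpServers": {"github": {}, "filesystem": {}}}, f)
print(list_mcp_servers(f.name))  # ['filesystem', 'github']
```

Feeding the output into an asset inventory gives you the per-endpoint list of MCP connections that the audit and least-privilege steps above depend on.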

Sources