Microsoft — Semantic Kernel RCE via Prompt Injection (CVE-2026-26030, CVE-2026-25592)

AI relevance: When an AI agent using Semantic Kernel's In-Memory Vector Store receives injected prompts, the framework passes attacker-controlled strings directly into Python eval() — turning a prompt injection into arbitrary code execution on the host.

Microsoft's Security Blog published findings on two vulnerabilities in Semantic Kernel, the company's open-source AI agent framework (over 27,000 GitHub stars) used to orchestrate LLM tool calling, plugin management, and workflow chaining.

Key Findings

  • CVE-2026-26030 — The In-Memory Vector Store's default filter function builds a Python lambda via string interpolation and executes it with eval(). An attacker who can inject content into the agent's input controls the lambda expression and achieves RCE. A simplified sketch of this pattern appears after this list.
  • CVE-2026-25592 — A second injection flaw in the framework's tool parameter mapping chain, also enabling code execution through manipulated tool arguments.
  • Exploitation requires a prompt injection vector and an agent whose Search Plugin is backed by the In-Memory Vector Store in its default configuration.
  • Microsoft demonstrated a single prompt launching calc.exe on the host — no browser exploit, no malicious attachment, no memory corruption.
  • The root cause is systemic: frameworks like Semantic Kernel, LangChain, and CrewAI translate natural-language model outputs into tool calls, and any framework that trusts unvalidated model output for code execution is vulnerable to the same class of attack.
  • Both CVEs have been patched. Microsoft plans to release similar findings for non-Microsoft frameworks in upcoming research.
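
To make the class of flaw concrete, the sketch below shows the vulnerable pattern from the first finding: a filter built from model-supplied strings and passed to eval(). This is not Semantic Kernel's actual code; the function and variable names are invented for illustration.

```python
# Hypothetical illustration of the eval()-based filter pattern (not the
# framework's real code): a filter lambda is assembled by string
# interpolation from a value the model supplied, then eval()'d.
def build_filter(field: str, value: str):
    # DANGEROUS: `value` can originate from attacker-controlled content the
    # agent ingested (a web page, email, or uploaded document).
    expression = f"lambda record: record.get('{field}') == '{value}'"
    return eval(expression)

# An injected value like the one below escapes the string literal, so the
# resulting lambda runs an OS command as soon as the filter is applied to a
# record during search:
#   "' or __import__('os').system('calc.exe') or '"
```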

Why It Matters

This is the clearest demonstration yet that prompt injection is not just a content-safety problem — it's an execution primitive. Any AI agent with tool access that ingests untrusted input (web pages, emails, uploaded documents, RAG sources) is at risk. Framework-level fixes are necessary because individual model guardrails cannot reliably prevent parameter-level injection when the model is behaving as designed.

What to Do

  • Update Semantic Kernel to the latest patched version immediately.
  • Audit all agent plugins for unsafe string interpolation or eval()-based processing of model-supplied parameters.
  • Treat all model-supplied tool arguments as untrusted input — validate, sanitize, and use parameterized calls instead of string construction (see the sketch after this list).
  • Review agent logs for suspicious tool invocations that may indicate prior exploitation.
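
As a companion to the last two items, here is a minimal sketch of the parameterized approach: the field name is checked against an allow-list, and the value is only ever compared as data, never interpreted as code. The schema, function names, and records are hypothetical.

```python
# Hypothetical mitigation sketch: validate model-supplied arguments and build
# the filter as a closure instead of eval()-ing interpolated source.
ALLOWED_FILTER_FIELDS = {"category", "author", "year"}  # assumed record schema

def build_safe_filter(field: str, value: str):
    # Reject any field name outside the known schema.
    if field not in ALLOWED_FILTER_FIELDS:
        raise ValueError(f"unsupported filter field: {field!r}")

    # The value is captured as plain data; no code is ever constructed from it.
    def filter_fn(record: dict) -> bool:
        return record.get(field) == value

    return filter_fn

# Even a hostile, prompt-injected value is inert here, because nothing it
# contains ever reaches eval() or a shell.
records = [{"category": "finance"}, {"category": "hr"}]
hostile = "' or __import__('os').system('calc.exe') or '"
assert [r for r in records if build_safe_filter("category", hostile)(r)] == []
```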

Sources