JunoClaw — Critical Mnemonic Exposure in Agentic AI Platform (CVE-2026-43992)
AI relevance: JunoClaw's MCP tool definitions accepted BIP-39 seed mnemonics as plaintext string parameters, causing cryptographic wallet secrets to be embedded unencrypted in LLM tool-call JSON — visible to anyone with access to the transport, logs, or telemetry between the LLM provider and the MCP process.
What happened
- CVE-2026-43992 was published on May 12, 2026 with a CVSS score of 9.8 (Critical).
- JunoClaw is an agentic AI platform built on the Juno Network blockchain, using MCP tools for on-chain operations.
- Several MCP write tools — `send_tokens`, `execute_contract`, `instantiate_contract`, `upload_wasm`, and `ibc_transfer` — accepted a `mnemonic: string` parameter directly.
- The BIP-39 seed phrase was passed unencrypted through the LLM tool-call JSON, exposing it across every layer in the communication path: network transport, system logs, and telemetry surfaces.
- Anyone intercepting or observing the LLM↔MCP communication channel could harvest the mnemonic and reconstruct the associated wallet.
- Fixed in JunoClaw version 0.x.y-security-1. No public proof-of-concept has been released.
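To make the flaw concrete, here is a minimal sketch of the vulnerable pattern described above. The tool names come from the advisory, but the exact schema shape is an assumption for illustration — the point is simply that the mnemonic is declared as an ordinary string input:

```python
import json

# Hypothetical reconstruction of the vulnerable pattern: the MCP tool
# schema itself declares the wallet mnemonic as a plain string parameter.
vulnerable_tool = {
    "name": "send_tokens",
    "description": "Send tokens on the Juno Network",
    "inputSchema": {
        "type": "object",
        "properties": {
            "recipient": {"type": "string"},
            "amount": {"type": "string"},
            "denom": {"type": "string"},
            "mnemonic": {"type": "string"},  # BIP-39 seed phrase, cleartext
        },
        "required": ["recipient", "amount", "mnemonic"],
    },
}

# Anything that serializes this definition — or a call made against it —
# carries the secret field as plain JSON.
print(json.dumps(vulnerable_tool["inputSchema"]["required"]))
```

Because the schema marks `mnemonic` as required, the LLM has no choice but to place the seed phrase into every tool-call payload.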
Why it matters
- This is a textbook example of how agentic AI tool definitions can become credential-exposure sinks. The MCP tool schema declared the mnemonic as an input field, and the LLM faithfully included it in tool-call payloads — exactly as designed, but catastrophically insecure.
- It highlights a broader class of risks: when AI agents handle secrets as tool parameters, those secrets flow through infrastructure never designed for cryptographic material (JSON-RPC transports, LLM provider logs, observability pipelines).
- The flat authorization model of LLM tool calling means there's no additional gate — if the tool schema accepts a secret, the agent will pass it along in cleartext.
What to do
- If you run JunoClaw, upgrade to version 0.x.y-security-1 or newer immediately.
- Audit your own MCP tool schemas: any parameter that carries secrets (API keys, mnemonics, tokens, private keys) should use out-of-band secure reference mechanisms — not plaintext strings in tool-call JSON.
- Review logs and telemetry stores for historical exposure if you operated a vulnerable version.
- For any agentic AI platform handling blockchain or financial operations, enforce that credential material never traverses the LLM tool-call path.
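One way to apply the out-of-band-reference advice is sketched below. This is not the JunoClaw fix itself; the names (`WALLET_KEYS`, `resolve_signer`, `TREASURY_MNEMONIC`) are hypothetical. The schema accepts only an opaque key name, and the MCP server resolves the secret server-side from its own store, so the mnemonic never enters the tool-call JSON:

```python
import json
import os

# Patched schema sketch: the LLM supplies a key *reference*, not the key.
patched_tool = {
    "name": "send_tokens",
    "inputSchema": {
        "type": "object",
        "properties": {
            "recipient": {"type": "string"},
            "amount": {"type": "string"},
            "key_name": {"type": "string"},  # opaque reference, not the secret
        },
        "required": ["recipient", "amount", "key_name"],
    },
}

# Server-side store, populated out of band (env var, OS keyring, KMS, ...).
WALLET_KEYS = {"treasury": os.environ.get("TREASURY_MNEMONIC", "<unset>")}

def resolve_signer(arguments: dict) -> str:
    """Look up the mnemonic by name; the LLM never sees or transmits it."""
    key_name = arguments["key_name"]
    if key_name not in WALLET_KEYS:
        raise KeyError(f"unknown key reference: {key_name}")
    return WALLET_KEYS[key_name]

call_args = {"recipient": "juno1example", "amount": "100", "key_name": "treasury"}

# The serialized tool call contains only the reference.
print(json.dumps(call_args))
```

The same indirection works for API keys and tokens: the agent names a credential, and only code you control dereferences it.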