A vulnerability known as "log poisoning" has been fixed in OpenClaw AI Agent Log Poisoning, a rapidly emerging open-source AI assistant that can connect to messaging, cloud services, and local system tools. This vulnerability could allow remote attackers to insert malicious, user-controlled content into logs that the agent may subsequently consume. The problem affects OpenClaw versions before 2026 and is detailed in the OpenClaw Security Advisory.2.13.

An indirect prompt-injection attack, rather than traditional remote code execution, poses the main risk since it involves writing untrusted input into logs, which an AI agent may then use as a trusted troubleshooting context.

This is significant because privileged automation and "untrusted" data sources frequently collide, and OpenClaw's "agent gateway" model has the potential to reveal a potent control plane.JFrog+1 OpenClaw logged specific WebSocket request headers, such as Origin and User-Agent, without sufficient sanitization in impacted versions, according to the Eye Security advisory. An attacker could send crafted header values that are verbatim embedded in log lines if they are able to access an OpenClaw gateway interface. This would result in a "poisoned" log trail that continues after the initial connection attempt.

In workflows where operators ask the agent to diagnose errors and the agent pulls recent logs into its reasoning context, the practical impact is contingent on how logs are consumed downstream.

In that case, injected content might be mistaken for trusted system messages, operator instructions, or structured records, which could guide troubleshooting procedures, affect choices, or change the agent's summary of events. Thousands of exposed instances are available online by simply searching OpenClaw's default port (18789) on Shodan, exposing a sizable and expanding attack surface for opportunistic probing. Log poisoning is appealing even though exploitation is "context-dependent" because it can be carried out repeatedly and at a low cost, and it targets the interpretation of the AI layer rather than a single memory corruption flaw.

Mitigations The advisory specifically notes that versions before 2026 were affected, and OpenClaw fixed the problem in version 2026.2.13.2.13 are impacted.

Upgrades to 2026.2.13 (or later) should be prioritized by OpenClaw teams. After that, gateway exposure should be examined to make sure the service cannot be accessed from the public internet without robust access controls. In order to prevent the model from automatically ingesting raw, attacker-influenced telemetry, defenders should also treat agent-consumable logs as an untrusted input channel and implement standard hardening patterns, such as sanitizing or redacting user-controlled header fields prior to logging, capping header sizes to minimize payload room, and separating "human debugging logs" from "agent reasoning inputs."

Since these can be early signs of attempted poisoning, whenever feasible, set up monitoring for odd header patterns and spikes in unsuccessful WebSocket connections. X, LinkedIn, and LinkedIn for daily ZeroOwl. To have your stories featured, get in touch with us.