OpenClaw
Agent platform with machine-checked security models
OpenClaw is an agent infrastructure platform that ships with formal, TLA+-based security models for its highest‑risk capabilities. You can run attacker-driven model checks as a regression suite to validate assumptions about authorization, session isolation, tool gating, and misconfiguration safety. Models live in a separate repo and are reproducible locally via simple CLI commands.
About
OpenClaw is an agent infrastructure and gateway platform aimed at developers who are building and operating AI agents across channels, tools, and models. From the documentation, it focuses heavily on security properties around authorization, session isolation, tool execution, and safe deployment configurations.
A core part of the offering is a set of formal security models written in TLA+ and checked with the TLC model checker. These models act as an executable, attacker-driven security regression suite that encodes specific claims about how OpenClaw should behave under various threat scenarios. To get started, you clone the separate openclaw-formal-models GitHub repository, ensure you have Java 11+ installed, and run make targets that invoke a vendored tla2tools.jar to explore bounded state spaces.
According to the documentation, the formal models cover concrete, high‑risk areas such as gateway exposure and misconfiguration, the nodes.run execution pipeline (including approval tokenization), pairing-store behavior for DM gating, ingress gating in group contexts, and routing/session-key isolation. Each claim has a “green” model that should pass under the intended design, and many have a paired “negative” model that demonstrates realistic failure modes via counterexample traces. This structure makes it useful both as a regression suite and as a way to reason about what can go wrong when configuration or assumptions are violated.
However, these are models, not the actual TypeScript implementation, and the docs are explicit that drift between the models and production code is possible. The checks are bounded by the explored state space, so a successful run does not prove global security; it only shows that the modeled behavior holds under the stated assumptions and bounds. Some claims also depend on correct deployment and configuration in the surrounding environment. There is mention of potential future CI integrations and hosted "run this model" workflows, but those are not available yet. Pricing and commercial terms for OpenClaw are not described in this documentation, so treat it as a technical security artifact rather than a complete product overview.
Handles multi-step tasks with guidance
OpenClaw is an infrastructure and gateway platform for operating AI agents with strong emphasis on secure tool execution, routing, and authorization. It enables agents to perform powerful, multi-system actions through a controlled nodes.run pipeline and gateway, but the provided documentation centers on formal security models rather than on autonomous planning or learning. As a result, it scores high on action capability and safety/permissions, moderate on persistent state, and low on autonomy and adaptation, reflecting a secure, tool-rich platform rather than a self-directed agent.
Total score: 3 (Action) + 1 (Autonomy) + 1 (Adaptation) + 2 (State) + 3 (Safety) = 10 → Level 2 (Capable agent, moderate autonomy) when viewed as an enabling platform for agentic systems.