
OpenClaw

Agent platform with machine-checked security models

OpenClaw is an agent infrastructure platform that ships with formal, TLA+-based security models for its highest‑risk capabilities. You can run attacker-driven model checks as a regression suite to validate assumptions about authorization, session isolation, tool gating, and misconfiguration safety. Models live in a separate repo and are reproducible locally via simple CLI commands.

Web
API
Integrations
Multi-Agent
B2B
For Developers

About

What It Is

OpenClaw is an agent infrastructure and gateway platform aimed at developers who are building and operating AI agents across channels, tools, and models. From the documentation, it focuses heavily on security properties around authorization, session isolation, tool execution, and safe deployment configurations.

A core part of the offering is a set of formal security models written in TLA+ and checked with the TLC model checker. These models act as an executable, attacker-driven security regression suite that encodes specific claims about how OpenClaw should behave under various threat scenarios. To get started, you clone the separate openclaw-formal-models GitHub repository, ensure you have Java 11+ installed, and run make targets that invoke a vendored tla2tools.jar to explore bounded state spaces.
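The documented flow (clone the repo, install Java 11+, run make targets that wrap the vendored tla2tools.jar) can be sketched as a small command builder. This is an illustration only: the spec/config filenames and jar path below are assumptions, not the actual layout of the openclaw-formal-models repo, though `tlc2.TLC` and the `-config`/`-workers` flags are standard TLC usage.

```python
# Hypothetical sketch of what a make target might run under the hood.
# File names and the jar location are illustrative assumptions.

def tlc_command(spec: str, config: str,
                jar: str = "tools/tla2tools.jar",
                workers: int = 4) -> list[str]:
    """Build a TLC invocation for one bounded model check."""
    return [
        "java", "-cp", jar, "tlc2.TLC",  # TLC's main class in tla2tools.jar
        "-config", config,               # constants, invariants, properties
        "-workers", str(workers),        # parallel state-space exploration
        spec,                            # the TLA+ module to check
    ]

cmd = tlc_command("GatewayExposure.tla", "GatewayExposure.cfg")
print(" ".join(cmd))
```

In the repo this plumbing is hidden behind make targets, so in practice you would run something like `make <claim-name>` rather than invoking Java directly.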

What to Know

According to the documentation, the formal models cover concrete, high‑risk areas such as gateway exposure and misconfiguration, the nodes.run execution pipeline (including approval tokenization), pairing-store behavior for DM gating, ingress gating in group contexts, and routing/session-key isolation. Each claim has a “green” model that should pass under the intended design, and many have a paired “negative” model that demonstrates realistic failure modes via counterexample traces. This structure makes it useful both as a regression suite and as a way to reason about what can go wrong when configuration or assumptions are violated.
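The green/negative pairing amounts to a two-sided regression check: the green model must pass, and the negative model must still reproduce its counterexample. A minimal sketch, assuming TLC's exit-code convention (0 when all checks pass, nonzero when a property is violated and a trace is emitted); the logic here is illustrative, not OpenClaw's actual harness.

```python
# Minimal sketch (not OpenClaw code) of treating a paired green/negative
# model run as one regression check.

def claim_holds(green_exit: int, negative_exit: int) -> bool:
    # Assumed TLC convention: exit 0 = all properties held within the
    # bound; nonzero = a violation was found (counterexample trace).
    return green_exit == 0 and negative_exit != 0

# Green passes and the negative model finds its counterexample: OK.
assert claim_holds(0, 12)
# Both pass: the negative model no longer demonstrates the failure mode,
# which suggests the model (or the claim) has drifted.
assert not claim_holds(0, 0)
# Green fails: a regression regardless of the negative result.
assert not claim_holds(12, 12)
```

The useful property of this shape is that a change which silently weakens the model tends to flip the negative side, so model drift shows up as a suite failure rather than a quiet pass.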

However, these are models, not the actual TypeScript implementation, and the docs are explicit that drift between the models and production code is possible. The checks are bounded by the explored state space, so a successful run does not prove global security—only that the modeled behavior holds under given assumptions and limits. Some claims also depend on correct deployment and configuration in the surrounding environment. There is mention of potential future CI integrations and hosted "run this model" workflows, but those are not available yet. Pricing and commercial terms for OpenClaw are not described in this documentation, so you should treat it as a technical security artifact rather than a complete product overview.
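The "bounded" caveat can be made concrete with a toy explicit-state search (unrelated to the actual models): an invariant can hold at every state within the explored bound and still be violated one step beyond it.

```python
# Toy illustration of bounded checking: BFS over reachable states up to
# a depth limit, reporting the first invariant violation found.
from collections import deque

def bounded_check(init, step, invariant, max_depth):
    """Return a violating state within max_depth steps, or None."""
    seen, queue = {init}, deque([(init, 0)])
    while queue:
        state, depth = queue.popleft()
        if not invariant(state):
            return state
        if depth < max_depth:
            for nxt in step(state):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, depth + 1))
    return None  # no violation *within the bound* -- not a global proof

step = lambda n: [n + 1]       # a simple counter system
inv = lambda n: n < 6          # violated only once the counter reaches 6

assert bounded_check(0, step, inv, max_depth=5) is None  # bounded pass
assert bounded_check(0, step, inv, max_depth=6) == 6     # deeper bound fails
```

This is exactly the caveat the docs flag: a clean TLC run says the modeled behavior holds for the states explored, not for all reachable behavior of the real system.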

Key Features
Formal security models of OpenClaw's behavior, written in TLA+
Attacker-driven security regression suite with explicit claims and counterexamples
Separate `openclaw-formal-models` GitHub repo containing all models
Bounded model checking via TLC using a vendored `tla2tools.jar`
Makefile targets to run specific security claims (e.g., gateway exposure, routing isolation)
Use Cases
Security-conscious developers running formal regression checks when changing OpenClaw’s gateway or routing configuration
Platform engineers validating that `nodes.run` execution and approval flows match intended security properties before deployment
Security teams using negative models to study realistic misconfiguration and bug classes in agent infrastructure
Agenticness Score
10/20
Level 2: Capable

Handles multi-step tasks with guidance

OpenClaw is an infrastructure and gateway platform for operating AI agents with strong emphasis on secure tool execution, routing, and authorization. It enables agents to perform powerful, multi-system actions through a controlled nodes.run pipeline and gateway, but the provided documentation centers on formal security models rather than on autonomous planning or learning. As a result, it scores high on action capability and safety/permissions, moderate on persistent state, and low on autonomy and adaptation, reflecting a secure, tool-rich platform rather than a self-directed agent.

Total score: 3 (Action) + 1 (Autonomy) + 1 (Adaptation) + 2 (State) + 3 (Safety) = 10 → Level 2 (Capable agent, moderate autonomy) when viewed as an enabling platform for agentic systems.

Score Breakdown

Action Capability
3/4
Autonomy
1/4
Adaptation
1/4
State & Memory
2/4
Safety
3/4


Pricing

Pricing not publicly available

Details
Website: openclaw.ai
Added: February 10, 2026
Last Verified: February 10, 2026
Agenticness: 10/20 (Level 2)
Cite This Listing
Name: OpenClaw
URL: https://agentic-directory.onrender.com/t/openclaw
Last Updated: February 10, 2026
