The moment your agent runtime can reach the open internet, your threat surface expands to everything the model can be prompted to do. For Openclaw deployments that handle real workflows, real credentials, and real user data, that is not an acceptable default. We containerize every agent process and keep the entire execution environment airgapped from day one.
Why Docker for Openclaw
Docker gives us three things that matter for agent security: process isolation, network policy enforcement, and reproducible builds. Every Openclaw worker runs as a single-purpose container. It cannot see the host filesystem, cannot reach other containers unless explicitly networked, and cannot install new packages at runtime.
This is not about convenience. It is about making it structurally difficult for a compromised or misaligned agent to cause damage beyond its container boundary.
Airgapped by default
Our Openclaw containers have no outbound internet access. The Docker network they run on is internal-only. If the agent needs to call an LLM API, that request goes through a dedicated proxy container that whitelists exactly one endpoint and enforces rate limits. Everything else is dropped at the network level.
This matters because agent prompts can be manipulated. If an attacker or a malformed input can convince the agent to make an HTTP request, the airgap ensures that request goes nowhere. The proxy is the only exit, and it only speaks to the inference endpoint.
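The internal-only network and single proxy exit can be sketched in a compose file. This is a minimal illustration, not our actual configuration: the service names, registry path, digest placeholders, and upstream URL are all assumptions.

```yaml
# docker-compose.yml sketch (names and values are illustrative)
networks:
  agent-internal:
    internal: true           # no route out of this network; workers are airgapped
  proxy-egress: {}           # only the proxy attaches here

services:
  openclaw-worker:
    image: registry.internal/openclaw-worker@sha256:<digest>  # placeholder digest
    networks: [agent-internal]    # can only reach peers on the internal network

  llm-proxy:
    image: registry.internal/llm-proxy@sha256:<digest>        # placeholder digest
    networks:
      - agent-internal       # receives requests from workers
      - proxy-egress         # the single allowed exit toward the inference endpoint
    environment:
      ALLOWED_UPSTREAM: "https://api.example-inference.com"   # the one whitelisted endpoint
```

Because `agent-internal` is declared `internal: true`, Docker never attaches it to a gateway: a worker that is tricked into making an arbitrary HTTP request simply has nowhere to send it.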
Container hardening checklist
We apply the same hardening pattern to every container in the agent stack. This is not negotiable and not environment-specific. Dev, staging, and production all run the same security posture.
- Read-only filesystem. The root filesystem is mounted read-only. Temporary writes go to a size-limited tmpfs with noexec. The agent cannot persist anything to disk outside its allocated scratch space.
- All capabilities dropped. We start with `cap_drop: ALL` and add back only what is strictly required. Most Openclaw workers need zero Linux capabilities.
- Non-root user. Containers run as UID 1001. The Dockerfile creates a dedicated user and the compose file enforces it. Root inside the container is never available.
- No privilege escalation. The `no-new-privileges` security option prevents any process inside the container from gaining additional privileges through setuid binaries or other mechanisms.
- Resource limits. Memory and CPU are hard-capped. A runaway agent process hits the limit and gets OOM-killed rather than consuming host resources.
- Pinned images. We never pull `:latest`. Every image is pinned to a specific digest and stored in our internal registry. Supply chain attacks via tag mutation are eliminated.
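The whole checklist maps onto a handful of compose options. A sketch of a hardened service definition, with illustrative image names and limit values:

```yaml
services:
  openclaw-worker:
    image: registry.internal/openclaw-worker@sha256:<digest>  # digest-pinned, internal registry
    user: "1001"                     # non-root UID created in the Dockerfile
    read_only: true                  # root filesystem mounted read-only
    tmpfs:
      - /tmp:size=64m,noexec,nosuid  # size-limited scratch space, nothing executable
    cap_drop:
      - ALL                          # start from zero Linux capabilities
    security_opt:
      - no-new-privileges:true       # block setuid-based privilege escalation
    mem_limit: 512m                  # hard memory cap; runaway process gets OOM-killed
    cpus: "1.0"                      # hard CPU cap
```

If a worker genuinely needs a capability, it gets a single `cap_add` entry alongside `cap_drop: ALL`, with a comment explaining why.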
Secrets management
API keys and credentials are never baked into images or passed as environment variables in compose files. We use Docker secrets mounted as read-only files at a known path. The agent reads the key once at startup and the file is accessible only to the container's non-root user.
For the LLM API key specifically, the key lives only in the proxy container. The Openclaw worker never sees the raw API key. It sends requests to the proxy over the internal network, and the proxy injects the authentication header before forwarding to the inference endpoint.
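A sketch of the secrets wiring, assuming the standard compose file-based secrets mechanism; the secret name and file path are placeholders:

```yaml
services:
  llm-proxy:
    image: registry.internal/llm-proxy@sha256:<digest>
    secrets:
      - llm_api_key      # mounted read-only at /run/secrets/llm_api_key

  openclaw-worker:
    image: registry.internal/openclaw-worker@sha256:<digest>
    # deliberately no secrets: the worker never sees the raw API key

secrets:
  llm_api_key:
    file: ./secrets/llm_api_key.txt   # illustrative path; never committed or baked into an image
```

The key never appears in `docker inspect` output or the process environment, and only the proxy container has the mount.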
Image supply chain
We build Openclaw images from a known base in CI. The build pipeline pins the base image by digest, runs a vulnerability scan, signs the resulting image, and pushes it to our internal registry. Production nodes only pull from the internal registry. There is no path from Docker Hub to production.
- Base image pinned by sha256 digest, not tag.
- Trivy scan runs in CI. Any critical or high severity CVE blocks the build.
- Signed with cosign. The deployment pipeline verifies the signature before pulling.
- Internal registry is the only source for production. External registries are unreachable from the deployment network.
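Digest pinning starts in the Dockerfile itself. A hedged sketch, with an illustrative base image, placeholder digest, and paths:

```dockerfile
# Base pinned by sha256 digest: a moved or mutated tag cannot change what we build.
FROM debian:bookworm-slim@sha256:<digest>

# Dedicated non-root user; the compose file enforces the same UID at runtime.
RUN groupadd -g 1001 agent && useradd -u 1001 -g agent -m agent
USER 1001

COPY --chown=1001:1001 ./openclaw /opt/openclaw
ENTRYPOINT ["/opt/openclaw/worker"]
```

CI resolves and records the digest when the base image is updated, so a rebuild from the same commit always produces a build from the same base.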
Monitoring without opening the perimeter
Airgapped does not mean unobservable. We run a monitoring sidecar on the agent-internal network that collects metrics and logs from each container. The sidecar writes to a volume that is read by a separate metrics collector on a different network segment. There is no inbound connection to the agent containers and no outbound connection from them.
This gives us full visibility into agent behavior, token usage, error rates, and execution timing without compromising the network boundary.
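One way to express the sidecar hand-off in compose, with illustrative names. The two networks never share a member; the shared volume is the only bridge, and the collector's side is read-only:

```yaml
services:
  metrics-sidecar:
    image: registry.internal/metrics-sidecar@sha256:<digest>
    networks: [agent-internal]        # same internal network as the agent containers
    volumes:
      - telemetry:/var/telemetry      # write side of the hand-off volume

  metrics-collector:
    image: registry.internal/metrics-collector@sha256:<digest>
    networks: [monitoring]            # separate segment; no route to agent-internal
    volumes:
      - telemetry:/var/telemetry:ro   # read-only view of the same volume

networks:
  agent-internal:
    internal: true
  monitoring: {}

volumes:
  telemetry: {}
```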
What this costs
The overhead is real but manageable. Initial setup takes roughly two additional days compared to a naive deployment. The proxy adds 2-4ms of latency per LLM call. Image builds take longer because of scanning and signing. All of it is cheap weighed against the alternative: an agent runtime that can be prompt-injected into exfiltrating data or reaching internal services.
We treat agent containers the way we would treat any process that executes untrusted input. Because that is exactly what they are.