AI coding agents are powerful tools because they can perform tasks directly on the developer’s workstation. This extensive access is also what makes them risky. To mitigate some of those risks, I’ve started running Claude Code inside an isolated Docker container, which reduces the attack surface and blast radius should things go wrong.

The complete setup consists of a management script, a Dockerfile template, and a claude.json template (links below).
My isolation strategy uses “profiles”, which are Claude configurations stored on the host machine and mounted for each isolated container instance to enable persistent configuration (including authentication).
In addition to the profile configuration, the current working directory is mounted into the container to allow access to the project repository.

When launching a container with :cla, the script does the following steps:
claude.json templates to the configuration directory if they don’t exist--build argument is given--rm (remove container after stopped)zsh inside the container:cla lives inside my dotfiles repository, which is open-sourced under the MIT-license.
~/.config/cla/
No access to the full file system.
When I start a container with :cla, only the current working directory is mounted into the container, along with some persistent configuration.
Thus, even if Claude Code’s safety mechanisms fail, it’s not possible to reach other parts of the file system from inside the container.
For example, if the agent runs an erroneous rm command, the blast radius is limited to the current directory.
Without isolation, the complete file system is vulnerable.
I’m especially happy that SSH keys aren’t available in the container, as this has two positive consequences. First, it renders the agent incapable of pushing code to remote git repositories. Second, it drastically reduces the attack surface because there are no keys to steal.
Only the current directory is available.
This is the same benefit as above, but from a different perspective.
When I know that only the current working directory is available, I can let Claude Code use find, grep, and similar commands to freely explore the codebase without the risk of encountering projects or files unrelated to current tasks.
To work across several related repositories at once, I can group them into a single parent directory and run :cla with the --shell option from that location. This way, all directories are available in the container, and I can navigate to the correct repository with cd before starting claude. This ensures that Claude Code can load the repository’s configuration correctly while also having access to the related repositories. The shared parent directory setup works well for multi-repo projects or any other scenario where a single repository doesn’t provide enough context.
Different credentials for each profile.
Each profile needs to be authenticated separately.
This is perfect for consultants and others who might need to use different accounts for different projects.
Customizable profiles.
Dockerfiles are per-profile and can be customized to include custom tooling.
Throwaway containers.
Each container is single-use, ensuring a fresh state each time one is started.
Disposability has multiple benefits, stemming from the fact that the container state is defined in code (Dockerfile and :cla), meaning there’s no room for configuration drift or for malware hiding outside the mounted directories.
It’s clumsy at times.
Claude wants to use a tool that doesn’t exist in the environment, an MCP authentication callback requires manual curling due to the network isolation, the agent tries to read a sibling project on purpose, but can’t…
Most issues I’ve encountered have been straightforward to fix, and I don’t have any daily annoyances, but it’s worth noting that the isolation layer adds some friction.
It’s a good idea to add a simple rule explaining that the tool is running inside a container, meaning it doesn’t have access to SSH keys and thus can’t push or fetch code, or that it shouldn’t start development servers since they are inaccessible from the host machine.
User errors.
If an instance is started directly from the home directory, the first two benefits are essentially voided.
It’s a specialized tool.
I made this tool specifically for my own needs, so it most likely doesn’t fit everyone else’s.
For example, I’m using openSUSE Tumbleweed Docker image as the base. It is not a standard choice like Alpine or Debian, but I’m familiar with Tumbleweed, and being a rolling-release distribution, it provides access to the latest development tools.
The specialization also means that the tool is integrated into my dotfiles. To use :cla standalone, three files need to be downloaded and placed in their respective locations.
It’s not a silver bullet to achieve full security.
I hope this goes without saying, but this only reduces some of the risks associated with using AI coding agents.
The isolation is not complete, and the setup is highly vulnerable to many attacks that could expose the project’s code or even install malware in the project directory, potentially gaining access to the host system.
For example, if a Node.js project’s start script definition under package.json is modified to run malware, and the project is then started on the host machine, the container boundary is effectively bypassed.
Running Claude Code with :cla daily has significantly improved my security posture. Isolating Claude Code is especially vital for my consulting work, where I need to maintain strict boundaries between multiple clients’ codebases on a single machine.
My goal for this post was to give some thoughts and practical insights for working more safely with AI. I hope my setup will be a useful reference for anyone looking to build similar isolation mechanisms to harness the power of AI agents without opening the floodgates.
I'll announce new posts in the following channels:
See my blog's front page to read my other posts.
You can reach me on Mastodon: @sampo@hachyderm.io. I'd love to hear from you!