Workspaces
Understanding d:spatch workspaces — self-contained agent environments.
A workspace is the top-level organizational unit in d:spatch. It groups agents, configuration, environment variables, and Docker settings into a single deployable unit.
What is a workspace?
Every workspace maps to exactly one Docker container. All agents in the workspace share the same container, filesystem, and network. This means agents can read and write the same project files, communicate over localhost, and share installed tools.
Workspaces are defined in a `dspatch.workspace.yml` file. A minimal example:

```yaml
name: my-project
workspace_dir: /home/user/projects/my-project
env:
  ANTHROPIC_API_KEY: sk-ant-...
agents:
  assistant:
    template: claude-code
    auto_start: true
```

Directory mounting
The `workspace_dir` from your host machine is mounted at `/workspace` inside the container. Agents read and write project files relative to this path. Any changes agents make to files under `/workspace` are immediately visible on the host.
Shared filesystem
All agents in a workspace share the same /workspace mount. Coordinate file access through your agent hierarchy to avoid conflicts.
Workspace lifecycle
A workspace moves through these states:
- Created — the workspace configuration exists but no container is running.
- Started — the Docker container launches, agent templates are installed, and dependencies are resolved.
- Running — agents are active and processing tasks.
- Stopped — the container is shut down. If `home_persistence` is enabled, the agent's home directory is preserved for the next start.
Agent collaboration
Agents within a workspace can:
- Communicate with each other using `talk_to` — send a message and wait for a response.
- Escalate decisions through the inquiry system when they need human input or supervisor approval.
- Share the filesystem — all agents operate on the same `/workspace` directory.
The workspace defines which agents can communicate with each other through the hierarchy (supervisor/sub-agent relationships) and peer declarations.
Workspace Configuration
The workspace config file must be named dspatch.workspace.yml. This is the only accepted filename — no alternatives or fallbacks are supported.
Full example
```yaml
name: my-workspace
env:
  ANTHROPIC_API_KEY: sk-...
  IS_SANDBOX: "1"
agents:
  lead:
    template: claude-code
    env: {}
    auto_start: true
    sub_agents:
      coder:
        template: claude-code
        env: {}
        sub_agents: {}
        peers: [tester]
        auto_start: false
      tester:
        template: claude-code
        env: {}
        sub_agents: {}
        peers: [coder]
        auto_start: false
    peers: []
workspace_dir: /path/to/project
mounts:
  - host_path: ~/.claude/.credentials.json
    container_path: /root/.claude/.credentials.json
    read_only: true
docker:
  network_mode: host
  ports: []
  gpu: false
  home_persistence: true
```

Top-level fields
| Field | Type | Required | Description |
|---|---|---|---|
| `name` | string | yes | Display name for the workspace. |
| `env` | map | no | Global environment variables available to all agents. |
| `agents` | map | yes | Agent hierarchy definition. Keys are instance names, values are agent configs. |
| `workspace_dir` | string | yes | Host directory mounted at `/workspace` inside the container. |
| `mounts` | list | no | Additional bind mounts from host to container. |
| `docker` | map | no | Container resource and runtime settings. See Docker Settings below. |
Agent fields
Each entry in the agents map configures a single agent instance.
| Field | Type | Required | Default | Description |
|---|---|---|---|---|
| `template` | string | yes | — | Name of the agent template to use. |
| `env` | map | no | `{}` | Agent-specific environment variable overrides. |
| `sub_agents` | map | no | `{}` | Nested supervised agents. Uses the same schema recursively. |
| `peers` | list | no | `[]` | Keys of agents this instance can communicate with laterally. |
| `auto_start` | boolean | no | `false` | Whether the agent starts automatically when the workspace launches. |
| `instances` | integer | no | `1` | Number of parallel instances to run from this template. |
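As a sketch, an agent entry that fans out multiple parallel instances of one template (the `worker` name is illustrative):

```yaml
agents:
  worker:
    template: claude-code
    instances: 3       # run three parallel agents from the same template
    auto_start: true
```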
Environment variable precedence
Environment variables set at the agent level override workspace-level values. System variables prefixed with DSPATCH_ are reserved and cannot be overridden.
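To illustrate the precedence rule, a hypothetical config (the `LOG_LEVEL` variable is illustrative):

```yaml
env:
  LOG_LEVEL: info          # workspace-level default for all agents
agents:
  lead:
    template: claude-code
    env:
      LOG_LEVEL: debug     # agent-level value overrides the workspace default
    auto_start: true
```

Here `lead` sees `LOG_LEVEL=debug`, while any other agent in the workspace sees the workspace-level `LOG_LEVEL=info`.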
Mount fields
Each entry in the mounts list binds a host path into the container.
| Field | Type | Required | Description |
|---|---|---|---|
| `host_path` | string | yes | Absolute path on the host machine. Tilde (`~`) expansion is supported. |
| `container_path` | string | yes | Absolute path inside the container. |
| `read_only` | boolean | no | If `true`, the mount is read-only inside the container. Defaults to `false`. |
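For instance, a writable data mount using tilde expansion (the paths are illustrative):

```yaml
mounts:
  - host_path: ~/datasets        # expands to the user's home directory on the host
    container_path: /data
    read_only: false             # optional; false is the default
```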
Peer communication
Peers enable lateral communication between agents at the same level of the hierarchy. In the example above, `coder` and `tester` are peers — either can send messages to the other using `talk_to`. Peer declarations must be mutual: if `coder` lists `tester` as a peer, `tester` must also list `coder`.
Supervisor-to-sub-agent communication is always available by default and does not require peer declarations.
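A minimal sketch of a mutual peer declaration between two sub-agents:

```yaml
sub_agents:
  coder:
    template: claude-code
    peers: [tester]    # must be reciprocated below
  tester:
    template: claude-code
    peers: [coder]     # mutual declaration required
```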
Docker Settings
The docker section in dspatch.workspace.yml controls container resources and runtime behavior for the workspace.
Field reference
| Field | Type | Default | Description |
|---|---|---|---|
| `network_mode` | string | `"host"` | Docker network mode. |
| `ports` | list | `[]` | Port mappings in `host:container` format. |
| `gpu` | boolean | `false` | Enable NVIDIA GPU passthrough. |
| `home_persistence` | boolean | `true` | Persist `/root` across container restarts. |
| `home_size` | string | — | Size limit for the home volume (e.g., `"10g"`). |
| `memory_limit` | string | — | Container memory limit (e.g., `"8g"`). |
| `cpu_limit` | string | — | CPU core limit (e.g., `"4"`). |
Example configurations
```yaml
docker:
  gpu: false
  home_persistence: true
```

Uses host networking with no resource limits. Suitable for most development workflows.
```yaml
docker:
  gpu: true
  memory_limit: "16g"
  home_persistence: true
```

Enables GPU access and sets a 16 GB memory ceiling. Use this for agents that run local inference or GPU-accelerated tasks.
```yaml
docker:
  network_mode: bridge
  ports:
    - "8080:8080"
    - "3000:3000"
  home_persistence: true
```

Isolates the container network. Only the listed ports are accessible from the host.
GPU passthrough
When gpu: true, the container starts with access to all NVIDIA GPUs on the host via the NVIDIA Container Toolkit. The runtime automatically installs pynvml inside the container for GPU monitoring and diagnostics.
GPU requirements
GPU passthrough requires the NVIDIA Container Toolkit to be installed on the host. See the NVIDIA documentation for installation instructions.
Home persistence
When home_persistence is enabled, the /root directory is stored in a Docker volume that survives container restarts. This preserves:
- Installed tools and binaries
- Shell configuration (`.bashrc`, `.zshrc`)
- Package manager caches
- Command history
Disabling home persistence means every restart begins with a clean /root. This can be useful for reproducible environments but increases startup time as tools must be reinstalled.
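Combining `home_persistence` with `home_size` from the field reference above caps the persisted volume; for example:

```yaml
docker:
  home_persistence: true
  home_size: "10g"     # limit the persisted /root volume to 10 GB
```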
Network modes
| Mode | Behavior |
|---|---|
| `host` | The container shares the host's network stack. All host ports are directly accessible. No port mapping is needed. |
| `bridge` | The container runs on an isolated network. Use the `ports` field to expose specific ports to the host. |
Host mode is the simplest option and avoids networking issues. Bridge mode provides better isolation and is recommended when running untrusted workloads or when port conflicts are a concern.
Resource limits
The `memory_limit` and `cpu_limit` fields map directly to Docker's `--memory` and `--cpus` flags. If unset, the container can use all available host resources.
```yaml
docker:
  memory_limit: "8g"
  cpu_limit: "4"
```

This limits the container to 8 GB of RAM and 4 CPU cores.