d:spatch docs

Workspaces

Understanding d:spatch workspaces — self-contained agent environments.

A workspace is the top-level organizational unit in d:spatch. It groups agents, configuration, environment variables, and Docker settings into a single deployable unit.

What is a workspace?

Every workspace maps to exactly one Docker container. All agents in the workspace share the same container, filesystem, and network. This means agents can read and write the same project files, communicate over localhost, and share installed tools.

Workspaces are defined in a dspatch.workspace.yml file. A minimal example:

dspatch.workspace.yml
name: my-project
workspace_dir: /home/user/projects/my-project
env:
  ANTHROPIC_API_KEY: sk-ant-...
agents:
  assistant:
    template: claude-code
    auto_start: true

Directory mounting

The workspace_dir from your host machine is mounted at /workspace inside the container. Agents read and write project files relative to this path. Any changes agents make to files under /workspace are immediately visible on the host.

Shared filesystem

All agents in a workspace share the same /workspace mount. Coordinate file access through your agent hierarchy to avoid conflicts.

Workspace lifecycle

A workspace moves through these states:

  1. Created — the workspace configuration exists but no container is running.
  2. Started — the Docker container launches, agent templates are installed, and dependencies are resolved.
  3. Running — agents are active and processing tasks.
  4. Stopped — the container is shut down. If home_persistence is enabled, the container's /root home directory is preserved for the next start.

Agent collaboration

Agents within a workspace can:

  • Communicate with each other using talk_to — send a message and wait for a response.
  • Escalate decisions through the inquiry system when they need human input or supervisor approval.
  • Share the filesystem — all agents operate on the same /workspace directory.

The workspace defines which agents can communicate with each other through the hierarchy (supervisor/sub-agent relationships) and peer declarations.

Workspace Configuration

The workspace config file must be named dspatch.workspace.yml. This is the only accepted filename — no alternatives or fallbacks are supported.

Full example

dspatch.workspace.yml
name: my-workspace
env:
  ANTHROPIC_API_KEY: sk-...
  IS_SANDBOX: "1"
agents:
  lead:
    template: claude-code
    env: {}
    auto_start: true
    sub_agents:
      coder:
        template: claude-code
        env: {}
        sub_agents: {}
        peers: [tester]
        auto_start: false
      tester:
        template: claude-code
        env: {}
        sub_agents: {}
        peers: [coder]
        auto_start: false
    peers: []
workspace_dir: /path/to/project
mounts:
  - host_path: ~/.claude/.credentials.json
    container_path: /root/.claude/.credentials.json
    read_only: true
docker:
  network_mode: host
  ports: []
  gpu: false
  home_persistence: true

Top-level fields

  • name (string, required): Display name for the workspace.
  • env (map, optional): Global environment variables available to all agents.
  • agents (map, required): Agent hierarchy definition. Keys are instance names, values are agent configs.
  • workspace_dir (string, required): Host directory mounted at /workspace inside the container.
  • mounts (list, optional): Additional bind mounts from host to container.
  • docker (map, optional): Container resource and runtime settings. See Docker Settings below.

Agent fields

Each entry in the agents map configures a single agent instance.

  • template (string, required): Name of the agent template to use.
  • env (map, default {}): Agent-specific environment variable overrides.
  • sub_agents (map, default {}): Nested supervised agents. Uses the same schema recursively.
  • peers (list, default []): Keys of agents this instance can communicate with laterally.
  • auto_start (boolean, default false): Whether the agent starts automatically when the workspace launches.
  • instances (integer, default 1): Number of parallel instances to run from this template.
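As a sketch, the instances field fans out multiple copies of the same template (the agent name and count here are illustrative):

dspatch.workspace.yml
agents:
  worker:
    template: claude-code
    instances: 3      # launch three parallel copies of this agent
    auto_start: true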

Environment variable precedence

Environment variables set at the agent level override workspace-level values. System variables prefixed with DSPATCH_ are reserved and cannot be overridden.
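For example, an agent-level value takes effect over the workspace-level one (a sketch; the LOG_LEVEL variable is illustrative):

dspatch.workspace.yml
env:
  LOG_LEVEL: info        # workspace-level default for all agents
agents:
  coder:
    template: claude-code
    env:
      LOG_LEVEL: debug   # overrides the workspace value for this agent only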

Mount fields

Each entry in the mounts list binds a host path into the container.

  • host_path (string, required): Absolute path on the host machine. Tilde (~) expansion is supported.
  • container_path (string, required): Absolute path inside the container.
  • read_only (boolean, optional, default false): If true, the mount is read-only inside the container.
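A minimal mount entry, assuming the default read_only: false when the field is omitted (paths are illustrative):

dspatch.workspace.yml
mounts:
  - host_path: ~/datasets       # tilde expands to the host home directory
    container_path: /data       # writable by default (read_only: false)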

Peer communication

Peers enable lateral communication between agents at the same level of the hierarchy. In the example above, coder and tester are peers — either can send messages to the other using talk_to. Peer declarations must be mutual: if coder lists tester as a peer, tester must also list coder.

Supervisor-to-sub-agent communication is always available by default and does not require peer declarations.
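A mutual peer declaration looks like this (agent names illustrative); a one-sided declaration would not be valid:

dspatch.workspace.yml
agents:
  lead:
    template: claude-code
    sub_agents:
      coder:
        template: claude-code
        peers: [tester]   # must be mirrored by tester below
      tester:
        template: claude-code
        peers: [coder]    # mirrors the declaration above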

Docker Settings

The docker section in dspatch.workspace.yml controls container resources and runtime behavior for the workspace.

Field reference

  • network_mode (string, default "host"): Docker network mode.
  • ports (list, default []): Port mappings in host:container format.
  • gpu (boolean, default false): Enable NVIDIA GPU passthrough.
  • home_persistence (boolean, default true): Persist /root across container restarts.
  • home_size (string): Size limit for the home volume (e.g., "10g").
  • memory_limit (string): Container memory limit (e.g., "8g").
  • cpu_limit (string): CPU core limit (e.g., "4").

Example configurations

dspatch.workspace.yml
docker:
  gpu: false
  home_persistence: true

Uses host networking with no resource limits. Suitable for most development workflows.

dspatch.workspace.yml
docker:
  gpu: true
  memory_limit: "16g"
  home_persistence: true

Enables GPU access and sets a 16 GB memory ceiling. Use this for agents that run local inference or GPU-accelerated tasks.

dspatch.workspace.yml
docker:
  network_mode: bridge
  ports:
    - "8080:8080"
    - "3000:3000"
  home_persistence: true

Isolates the container network. Only the listed ports are accessible from the host.

GPU passthrough

When gpu: true, the container starts with access to all NVIDIA GPUs on the host via the NVIDIA Container Toolkit. The runtime automatically installs pynvml inside the container for GPU monitoring and diagnostics.

GPU requirements

GPU passthrough requires the NVIDIA Container Toolkit to be installed on the host. See the NVIDIA documentation for installation instructions.

Home persistence

When home_persistence is enabled, the /root directory is stored in a Docker volume that survives container restarts. This preserves:

  • Installed tools and binaries
  • Shell configuration (.bashrc, .zshrc)
  • Package manager caches
  • Command history

Disabling home persistence means every restart begins with a clean /root. This can be useful for reproducible environments but increases startup time as tools must be reinstalled.
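As a sketch, home_size caps the persisted volume so caches and tool installs cannot grow without bound (the value is illustrative):

dspatch.workspace.yml
docker:
  home_persistence: true
  home_size: "10g"   # cap the persisted /root volume at 10 GB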

Network modes

  • host: The container shares the host's network stack. All host ports are directly accessible. No port mapping is needed.
  • bridge: The container runs on an isolated network. Use the ports field to expose specific ports to the host.

Host mode is the simplest option and avoids networking issues. Bridge mode provides better isolation and is recommended when running untrusted workloads or when port conflicts are a concern.

Resource limits

The memory_limit and cpu_limit fields map directly to Docker's --memory and --cpus flags. If unset, the container can use all available host resources.

docker:
  memory_limit: "8g"
  cpu_limit: "4"

This limits the container to 8 GB of RAM and 4 CPU cores.
