# OpenAiAgentContext

Integrate the OpenAI Agents SDK with d:spatch.
`OpenAiAgentContext` integrates with the OpenAI Agents SDK (the `openai-agents` package). It builds an `Agent` with d:spatch platform tools injected as `FunctionTool` objects, runs prompts via `Runner.run_streamed()`, and bridges streaming events (text deltas, tool calls, reasoning, and usage) back to the d:spatch app in real time.
## Authentication

The OpenAI Agents SDK reads `OPENAI_API_KEY` from the environment. Declare it in your agent template so that d:spatch prompts the user to supply a value during workspace creation:

```yaml
required_env:
  - OPENAI_API_KEY
```

## Full example
```python
from dspatch import OpenAiAgentContext, DspatchEngine

dspatch = DspatchEngine()

@dspatch.agent(OpenAiAgentContext)
async def my_agent(prompt: str, ctx: OpenAiAgentContext):
    ctx.setup(
        system_prompt="You are a helpful coding assistant.",
        authority=(
            "You may freely refactor code, fix bugs, and write tests. "
            "You must escalate any changes to the public API surface, "
            "database schema migrations, and dependency upgrades."
        ),
    )
    async with ctx:
        while True:
            try:
                await ctx.run(prompt)
            except Exception as e:
                ctx.log(f"Error: {e}", level="error")
            prompt = yield
            if prompt is None:
                break

dspatch.run()
```

## Lifecycle
### `ctx.setup()`

Stores your system prompt, authority boundaries, and options. Call it once, before entering the context manager. Set `options.model` to choose the model (default: `gpt-4o`).
### `async with ctx:`

Builds an OpenAI Agents SDK `Agent` with your augmented system prompt and d:spatch platform tools as `FunctionTool` objects. The system prompt is extended with workspace paths, inquiry tools, and coordination tools, identical to `ClaudeAgentContext`.
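The augmentation step can be pictured as a pure function over the prompt. This is a hypothetical sketch only: `augment_prompt`, the section headings, and the placeholder sections below are invented for illustration, not the context's real internals.

```python
# Hypothetical sketch of system-prompt augmentation. The real logic lives
# inside OpenAiAgentContext; names and section headings here are invented.
def augment_prompt(system_prompt: str, authority: str, sections: list[str]) -> str:
    parts = [system_prompt]
    if authority:
        parts.append("## Authority\n" + authority)  # escalation boundaries
    parts.extend(sections)  # workspace paths, inquiry tools, coordination tools
    return "\n\n".join(parts)

prompt = augment_prompt(
    "You are a helpful coding assistant.",
    "You may refactor freely; escalate schema migrations.",
    ["## Workspace\n(paths injected here)", "## Inquiry tools\n(tool docs here)"],
)
```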
### `await ctx.run(prompt)`

Calls `Runner.run_streamed()` and iterates the event stream. Text is streamed token by token, tool calls and reasoning events are logged as activities, and token usage is recorded automatically. Multi-turn continuity is maintained via `previous_response_id`.
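The bridging loop can be pictured with plain dictionaries. The event shapes below are illustrative stand-ins, not the actual `openai-agents` event classes:

```python
# Conceptual sketch of the event bridge inside ctx.run(). Event dicts here
# stand in for the SDK's typed stream events.
def bridge_event(event: dict, out: list[str]) -> None:
    kind = event.get("type")
    if kind == "text_delta":
        out.append(event["delta"])              # streamed token by token
    elif kind == "tool_call":
        out.append(f"[tool: {event['name']}]")  # logged as an activity
    elif kind == "usage":
        out.append(f"[tokens: {event['total_tokens']}]")  # recorded automatically

stream = [
    {"type": "text_delta", "delta": "Hello"},
    {"type": "tool_call", "name": "read_file"},
    {"type": "usage", "total_tokens": 42},
]
out: list[str] = []
for ev in stream:
    bridge_event(ev, out)
```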
### `prompt = yield`

The same generator pattern as `ClaudeAgentContext`: execution suspends here until the next user message arrives. When the `yield` returns `None`, the session is over and the loop should exit.
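The suspension behavior can be demonstrated with a self-contained async generator, no d:spatch required. `fake_agent` and `drive` are illustrative names:

```python
import asyncio

# Minimal stand-in for the turn protocol: an async generator that "handles"
# a prompt, then suspends at `yield` until the next message is sent in.
async def fake_agent(prompt: str):
    while True:
        result = f"ran: {prompt}"
        prompt = yield result  # suspend; resumes when a new message arrives
        if prompt is None:     # a None message ends the session
            break

async def drive():
    agent = fake_agent("first")
    results = [await agent.asend(None)]          # start; runs the first prompt
    results.append(await agent.asend("second"))  # resume with a new message
    try:
        await agent.asend(None)                  # None ends the session
    except StopAsyncIteration:
        pass
    return results

results = asyncio.run(drive())
```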
## Custom model

Pass a model name via the `options` parameter. The default is `gpt-4o`.
```python
from dataclasses import dataclass

@dataclass
class Options:
    model: str = "gpt-4o"

ctx.setup(
    system_prompt="You are a helpful assistant.",
    options=Options(model="gpt-4.1-2025-04-14"),
)
```

A `namedtuple` works just as well:

```python
from collections import namedtuple

Options = namedtuple("Options", ["model"])

ctx.setup(
    system_prompt="You are a helpful assistant.",
    options=Options(model="o3-mini"),
)
```

## Custom endpoint
For OpenAI-compatible APIs (local models, proxies, etc.), set `ctx.client` to an `AsyncOpenAI` instance inside the context manager. The context wraps it in an `OpenAIChatCompletionsModel` automatically:
```python
from openai import AsyncOpenAI

async with ctx:
    ctx.client = AsyncOpenAI(
        base_url="http://localhost:8000/v1",
        api_key="not-needed",
    )
    await ctx.run(prompt)
```

### Compatible APIs

Any API that implements the OpenAI chat completions interface, including function calling, works with this approach: vLLM, Ollama, LiteLLM, Azure OpenAI, and others.
## System prompt & authority

System prompt augmentation and the authority system work identically to `ClaudeAgentContext`. Your prompt is extended with workspace instructions, inquiry tools, and coordination tools. The `authority` parameter controls what the agent may decide autonomously versus what it must escalate.