feat(agents): DefaultAgentDispatch orchestrator wiring all four ports #518

Closed
opened 2026-04-28 09:24:51 +00:00 by claude-desktop · 0 comments

User story

As an upstream caller (flow-runner, webhook handler, foreman), I want a concrete AgentDispatchPort implementation that wires ClaudeAgentPort, ContainerLifecyclePort, McpRegistryPort, and a worker registry, so that dispatch is a single call.

Status — restart after closed PR #531 (2026-04-28)

First attempt (boss/518 → PR #531) was written in parallel with #514–#517 / #527 / #529 / #533, against earlier port shapes that did not survive review. PR closed, branch retained for reference. This issue restarts the work against main's canonical port shapes.

Canonical ports (read these first)

All three ports already exist on main. Use these shapes verbatim — do not redefine.

  • apps/server/src/domain/ports/agent-dispatch-port.ts — AgentDispatchRequest { agentType, repo: AgentDispatchRepo, issueOrPr, prompt, forgeToken?, resumeSessionKey?, metadata }; AgentDispatchHandle { taskId, worker, containerId?, sessionKey }; AgentDispatchStatus is flat (state: "queued" | "running" | "succeeded" | "failed" | "cancelled", ISO timestamps, prUrl?); steer() and kill() return void.
  • apps/server/src/domain/ports/container-lifecycle-port.ts — ensureReady → ContainerHandle { mode, container, hostCwd, inContainerCwd, execPath }; buildExecArgs(handle, env); release(handle, reason).
  • apps/server/src/domain/ports/mcp-registry-port.ts — serversFor(agent, RepoBinding) and allowedTools(agent, RepoBinding). No defaultToken accessor — adapters resolve the token from the agent internally.
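The dispatch-port shapes above can be restated as a TypeScript sketch. Types the issue does not spell out (AgentDispatchRepo's fields, the metadata type, the exact method signatures of steer/kill/status) are stand-ins, not the real definitions on main:

```typescript
// Sketch of the canonical agent-dispatch-port shapes, paraphrased from the
// bullets above. AgentDispatchRepo, metadata, and the method parameter lists
// are assumptions; only the field names and the flat status shape are given.

interface AgentDispatchRepo { owner: string; name: string } // assumed fields

interface AgentDispatchRequest {
  agentType: string;
  repo: AgentDispatchRepo;
  issueOrPr: number;
  prompt: string;
  forgeToken?: string;
  resumeSessionKey?: string;
  metadata?: Record<string, unknown>; // assumption: free-form bag
}

interface AgentDispatchHandle {
  taskId: string;
  worker: string;
  containerId?: string;
  sessionKey: string;
}

type TaskState = "queued" | "running" | "succeeded" | "failed" | "cancelled";

interface AgentDispatchStatus {
  state: TaskState;   // flat, no nesting
  startedAt?: string; // ISO timestamps
  settledAt?: string;
  prUrl?: string;
}

interface AgentDispatchPort {
  dispatch(req: AgentDispatchRequest): Promise<AgentDispatchHandle>;
  steer(taskId: string, message: string): Promise<void>; // assumed signature
  kill(taskId: string): Promise<void>;                   // assumed signature
  status(taskId: string): Promise<AgentDispatchStatus>;  // assumed signature
}

const example: AgentDispatchStatus = { state: "queued" };
```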

Main today has a 31-line thin wrapper, apps/server/src/infrastructure/agents/default-agent-dispatch.ts, that delegates to runAgentTask. This issue replaces that wrapper with the full orchestrator.

Acceptance criteria

Implementation

  • apps/server/src/infrastructure/agents/default-agent-dispatch.ts — full orchestrator replaces the thin wrapper
    • Constructor takes the four ports (claude, container, mcp, workerRegistry) + DB session store
    • dispatch() orchestrates: pre-flight container via ContainerLifecyclePort.ensureReady → resolve MCP servers via McpRegistryPort.serversFor → enqueue runner closure on WorkerRegistryPort → run via ClaudeAgentPort → persist task event stream
    • Per-call forgeToken override beats the per-agent default for both MCP servers and the env binding
    • steer() proxies to ClaudeAgentPort write-side handle keyed by taskId
    • kill() releases container, marks task killed, closes worker slot. Idempotent on already-settled tasks
    • status() lifts queued/running from a live table and settled outcomes from a bounded record
  • apps/server/src/domain/ports/worker-registry-port.ts — port + in-memory FIFO adapter. Salvage verbatim from closed boss/518 branch (9aaed6c) — reviewer flagged it as uncontroversial
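The dispatch() ordering in the bullets above (pre-flight container → resolve MCP servers → enqueue runner closure → run → persist, with release in the runner's finally) can be sketched with the four ports reduced to the calls this flow needs. The simplified signatures, the FakeWorker-style enqueue, and the string-keyed settled map are illustrative, not the real port shapes:

```typescript
// Minimal sketch of the dispatch() orchestration order. All four port
// interfaces here are stripped-down stand-ins for the canonical ports.

interface ContainerPort {
  ensureReady(): Promise<{ id: string }>;
  release(handle: { id: string }, reason: string): Promise<void>;
}
interface McpPort { serversFor(agent: string): string[] }
interface WorkerPort { enqueue(run: () => Promise<void>): void }
interface ClaudePort { run(agent: string, servers: string[]): Promise<void> }

class DefaultAgentDispatchSketch {
  private settled = new Map<string, string>();

  constructor(
    private claude: ClaudePort,
    private container: ContainerPort,
    private mcp: McpPort,
    private workers: WorkerPort,
  ) {}

  async dispatch(agent: string, taskId: string): Promise<void> {
    const handle = await this.container.ensureReady(); // 1. pre-flight container
    const servers = this.mcp.serversFor(agent);        // 2. resolve MCP servers
    this.workers.enqueue(async () => {                 // 3. enqueue runner closure
      try {
        await this.claude.run(agent, servers);         // 4. run via the agent port
        this.settled.set(taskId, "succeeded");         // 5. persist outcome
      } catch {
        this.settled.set(taskId, "failed");
      } finally {
        // Container is always released, matching the parity requirement below.
        await this.container.release(handle, "task-settled");
      }
    });
  }

  status(taskId: string): string | undefined {
    return this.settled.get(taskId);
  }
}
```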

Defensive

  • Bounded settled-task cache (reviewer #531 finding): the Map<string, SettledTask> backing status() must have an LRU cap (~1000 entries) — main's M25 service is long-running and cannot leak completed-task records until the DB lift lands
  • No domain-port file imports from infrastructure/*. McpServerSpec and friends live in domain/ports/ or @claude-hooks/shared
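The bounded settled-task cache can lean on the fact that a JS Map iterates in insertion order, so an LRU cap only needs delete-and-reinsert on access plus eviction of the first key when over the cap. The cap value, the SettledTask shape, and the class name here are illustrative:

```typescript
// Sketch of an LRU-capped Map backing status()'s settled record (the
// reviewer's #531 finding). SettledTask fields are stand-ins.

interface SettledTask { state: string; settledAt: string }

class SettledCache {
  private map = new Map<string, SettledTask>();
  constructor(private cap = 1000) {}

  set(taskId: string, task: SettledTask): void {
    this.map.delete(taskId); // refresh position if re-settled
    this.map.set(taskId, task);
    if (this.map.size > this.cap) {
      // Map iterates in insertion order, so the first key is the oldest.
      const oldest = this.map.keys().next().value!;
      this.map.delete(oldest);
    }
  }

  get(taskId: string): SettledTask | undefined {
    const hit = this.map.get(taskId);
    if (hit) {               // move to back on access (LRU touch)
      this.map.delete(taskId);
      this.map.set(taskId, hit);
    }
    return hit;
  }

  get size(): number { return this.map.size; }
}
```

This keeps the long-running M25 service from accumulating completed-task records without bound until the DB-backed status() follow-up lands.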

Equivalence

  • Behavioural parity with current runAgentTask() — same FIFO ordering, same (forge, agentType, repo, issueOrPr) session key, first-event session capture, container release in the runner's finally
  • NF-6 mutation logging untouched (forge calls still flow through adapter-factory.ts Proxy)
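For the session-key parity bullet, one way to compose the (forge, agentType, repo, issueOrPr) tuple into a key looks like the sketch below; the delimiter, field order, and function name are assumptions, not the format runAgentTask() actually uses:

```typescript
// Hypothetical composition of the (forge, agentType, repo, issueOrPr)
// session key; the real key format lives in runAgentTask().
function sessionKey(
  forge: string,
  agentType: string,
  repo: string,
  issueOrPr: number,
): string {
  return [forge, agentType, repo, String(issueOrPr)].join("/");
}
```

Whatever the real format is, parity means the orchestrator must produce byte-identical keys so resumeSessionKey lookups keep working across the swap.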

Tests

  • Integration test (in-memory ports): dispatch dev agent → events emitted, container released, worker freed, task marked succeeded
  • Integration test: per-call forgeToken override → MCP spec carries override, forge adapter constructed with override
  • Integration test: kill() mid-task → container released, task marked killed, second kill() is no-op
  • Integration test: steer() mid-task → message reaches the Claude write-side handle; post-settle steer() returns gracefully (does not throw)
  • Settled-cache LRU test: insert > cap, oldest entries evicted
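The kill()-idempotency expectation in the test list reduces to a small invariant: the first kill releases the container and settles the task, a second kill on a settled task is a no-op. The TaskRecord shape and release counter below are illustrative stand-ins for the real task bookkeeping:

```typescript
// Sketch of the "second kill() is a no-op" invariant from the test list.
class TaskRecord {
  state: "running" | "killed" = "running";
  releases = 0;

  kill(): void {
    if (this.state !== "running") return; // already settled: no-op
    this.releases += 1;                   // release container exactly once
    this.state = "killed";
  }
}
```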

Out of scope

  • Migrating existing runAgentTask() callers — already done in main via #533
  • New API surface on /task/:id/* endpoints
  • DB-backed status() (separate follow-up after this lands)

References

  • Closed PR #531 (boss/518, kept for reference)
  • Boss's rebase analysis: forge.jacquin.app/charles/claude-hooks/pulls/531#issuecomment-10269 .. -10274
  • Reviewer findings on #531: forge.jacquin.app/charles/claude-hooks/pulls/531#issuecomment-10268
  • Main's thin wrapper: apps/server/src/infrastructure/agents/default-agent-dispatch.ts (introduced by #533, 8cd16f0)
  • Canonical port files in apps/server/src/domain/ports/