
Nicolas Belissent


AI Engineer

When Your AI Agent Needs Contractors, Not Apprentices: Building Async Subagents

The analogy is straightforward: an apprentice needs constant supervision, while a contractor with specialized skills works independently. StackOne's Integration Builder Agent faced a similar choice: certain tasks blocked the main agent's progress, either because they required human interaction or because they waited on long-running tool executions.

The Challenge

The Integration Builder Agent creates integration configurations that transform provider endpoints (like Hibob, Zendesk, HubSpot, Workday) into Unified APIs and AI Tools. Each configuration contains 20-100+ actions with detailed descriptions.
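For a sense of scale, a single action in one of these configs might look roughly like the sketch below. The field names are illustrative only, not StackOne's actual schema.

```typescript
// Illustrative shape of one action inside an integration config.
// Field names are hypothetical; StackOne's real schema will differ.
interface IntegrationAction {
  id: string;                       // e.g. "zendesk_list_tickets"
  method: "GET" | "POST" | "PUT" | "PATCH" | "DELETE";
  path: string;                     // provider endpoint, e.g. "/api/v2/tickets"
  description: string;              // the text the subagent later improves
  parameters: Record<string, { type: string; description: string }>;
}

interface IntegrationConfig {
  provider: string;                 // e.g. "zendesk"
  actions: IntegrationAction[];     // typically 20-100+ entries
}
```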

Three fundamental constraints emerged:

Context Window Limits: Generating, testing, and researching API actions consumed substantial context. Testing required passing entire configs (1K-10K tokens) repeatedly, with documentation research adding more strain.

Manual Subagent Orchestration: Claude Code supports subagents natively, but as “apprentices” — they use the parent agent’s tools but require manual user interaction within the same conversation thread.

Long-Running Jobs: Extended research tasks made manual orchestration impractical. The main agent couldn’t effectively wait while research progressed.

Approaches Evaluated

Claude Code Native Subagents

Strengths: Built-in functionality, separate context windows preserve parent agent state.

Weaknesses: The lack of programmatic orchestration prevents fire-and-forget patterns. Subagents require manual triggering, rely on conversational flow, and lack persistent state between invocations.

This model suits human-in-the-loop workflows but doesn’t fit autonomous scenarios requiring long-running delegated research without intervention.

MCP Tools (HTTP-Based)

The initial production attempt exposed subagents as MCP tools over HTTP.

Strengths: Standard protocol, language-agnostic, integrates with existing MCP tooling.

Weaknesses: HTTP’s request-response architecture expects immediate results, which makes it unsuitable for extended jobs. MCP was also designed for external tool servers, not for in-process communication within the same infrastructure.

The Solution: Async Subagents with Native RPC

StackOne implemented specialist subagents as separate Durable Objects, invoked via native Remote Procedure Calls (RPC) behind an MCP tool. Each subagent operates independently in its own autonomous loop, making tool calls and research decisions while the Integration Builder continues other work.

How It Works: Native RPC on Cloudflare

Remote Procedure Call lets one program call a function in another program as if it were local code. Cloudflare’s Durable Object RPC uses internal communication protocols — when both Workers run in the same isolate, calls approach function calls with zero network latency.
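As a rough sketch of the callee side, a subagent can be a class extending DurableObject from cloudflare:workers, whose public methods become callable over RPC. The class and method names mirror the article; the internals (task bookkeeping in ctx.storage, a fire-and-forget loop) are assumptions for illustration, not StackOne's actual implementation.

```typescript
import { DurableObject } from "cloudflare:workers";

// Sketch of a subagent Durable Object. Public methods on the class are
// callable over native RPC from any Worker holding a stub to it.
export class ImproveDescriptionsSubagent extends DurableObject {
  // Starts a background task and returns a task ID immediately.
  async improveDescriptions(input: { config: unknown; provider: string }): Promise<{ taskId: string }> {
    const taskId = crypto.randomUUID();

    // Persist status so it survives eviction, hibernation, and restarts.
    await this.ctx.storage.put(`task:${taskId}`, { status: "queued" });

    // Fire and forget: the autonomous loop runs without blocking the caller.
    // (StackOne queues this via the Agents SDK; a bare promise is the simplest stand-in.)
    void this.runTask(taskId, input);

    return { taskId };
  }

  // Lets the caller poll for progress and results.
  async getTaskStatus(taskId: string) {
    return (await this.ctx.storage.get(`task:${taskId}`)) ?? { status: "not_found" };
  }

  private async runTask(taskId: string, input: { config: unknown; provider: string }) {
    await this.ctx.storage.put(`task:${taskId}`, { status: "running" });
    // ... autonomous loop: read docs, call the parent's MCP tools, iterate ...
    await this.ctx.storage.put(`task:${taskId}`, {
      status: "done",
      result: { provider: input.provider /* improved descriptions would go here */ },
    });
  }
}
```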

Implementation steps (a code sketch follows the list):

  1. Configure a binding in wrangler.toml granting Integration Builder access to the subagent Durable Object
  2. Obtain a stub representing the remote subagent: env.SUBAGENT.get(env.SUBAGENT.idFromName(providerId))
  3. Call methods as if local: stub.improveDescriptions({ config, provider })
  4. Receive a task ID immediately while the subagent queues work in the background
  5. Poll for results when needed and integrate findings
  6. Cloudflare auto-evicts inactive Durable Objects from memory
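On the caller side, steps 1-5 could look like the sketch below, assuming the subagent class from the previous snippet and a Durable Object binding named SUBAGENT. The helper names and module path are mine, not StackOne's.

```typescript
import type { ImproveDescriptionsSubagent } from "./improve-descriptions-subagent";

// Step 1: wrangler.toml grants the Integration Builder access to the subagent:
//
//   [[durable_objects.bindings]]
//   name = "SUBAGENT"
//   class_name = "ImproveDescriptionsSubagent"

export interface Env {
  SUBAGENT: DurableObjectNamespace<ImproveDescriptionsSubagent>;
}

// Steps 2-4: delegate work and return immediately with a task ID.
export async function delegateImproveDescriptions(env: Env, provider: string, config: unknown) {
  // Deterministic ID: the same provider always routes to the same subagent instance.
  const id = env.SUBAGENT.idFromName(provider);
  const stub = env.SUBAGENT.get(id);

  // Native RPC: reads like a local method call, with no HTTP round trip to manage.
  const { taskId } = await stub.improveDescriptions({ config, provider });
  return taskId;
}

// Step 5: poll later and integrate the findings.
export async function pollImproveDescriptions(env: Env, provider: string, taskId: string) {
  const stub = env.SUBAGENT.get(env.SUBAGENT.idFromName(provider));
  return stub.getTaskStatus(taskId);
}
```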

Architecture

The Integration Builder delegates work via RPC, receives a task ID immediately, and continues working. The subagent runs autonomously — making its own tool calls, managing iterations, and determining research depth. Upon completion, the Integration Builder retrieves and integrates results.

In an example scenario, the Integration Builder generates a Zendesk config with 25 operations but generic descriptions. It delegates to the Improve Descriptions subagent, which autonomously researches each endpoint while the Integration Builder builds authentication configs and field mappings.

Key implementation details (sketched after the list):

  • Deterministic IDs enable per-run state, caching, and rate limiting
  • A queue system built on Cloudflare’s Agents SDK executes tasks in the background
  • Bidirectional tool access: delegation flows one way via RPC, while the subagent calls back into the Integration Builder’s MCP server for vector search and web research
  • Persistent state survives server restarts and hibernation
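The bidirectional-tool-access point is the least obvious piece: the subagent is invoked over RPC, but reaches research tools by acting as an MCP client against the Integration Builder's MCP server. A minimal sketch, assuming a streamable-HTTP MCP endpoint and a placeholder tool name:

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

// Sketch: from inside the subagent's research loop, call back into the
// Integration Builder's MCP server. The URL and tool name are placeholders.
async function researchEndpoint(parentMcpUrl: string, endpointPath: string) {
  const client = new Client({ name: "improve-descriptions-subagent", version: "1.0.0" });
  await client.connect(new StreamableHTTPClientTransport(new URL(parentMcpUrl)));

  // Hypothetical research tool exposed by the parent agent.
  const docs = await client.callTool({
    name: "vector_search",
    arguments: { query: `API documentation for ${endpointPath}` },
  });

  await client.close();
  return docs;
}
```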

Implementation: Two-Tool Pattern

Two MCP tools expose functionality:

  1. improve_descriptions(config, provider) — Starts the task, returning { taskId } immediately
  2. get_improve_descriptions_task_status(taskId, provider) — Polls for status and retrieves results

The Integration Builder generates configurations, delegates improvement work, and continues building other components. The subagent autonomously researches operations: reading documentation, analyzing examples, making tool calls to the parent agent’s MCP server. The Integration Builder polls for results and integrates improved descriptions when complete.
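One way to wire this up with the TypeScript MCP SDK is sketched below. The tool names follow the article; everything else is illustrative, including the hypothetical module path for the caller-side helpers from the earlier snippet.

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { z } from "zod";
import { delegateImproveDescriptions, pollImproveDescriptions, type Env } from "./subagent-client";

// Registers the two-tool pattern on the Integration Builder's MCP server.
export function registerSubagentTools(server: McpServer, env: Env) {
  // Tool 1: start the task, return { taskId } immediately.
  server.tool(
    "improve_descriptions",
    { config: z.any(), provider: z.string() },
    async ({ config, provider }) => {
      const taskId = await delegateImproveDescriptions(env, provider, config);
      return { content: [{ type: "text" as const, text: JSON.stringify({ taskId }) }] };
    },
  );

  // Tool 2: poll for status and retrieve results when the subagent is done.
  server.tool(
    "get_improve_descriptions_task_status",
    { taskId: z.string(), provider: z.string() },
    async ({ taskId, provider }) => {
      const status = await pollImproveDescriptions(env, provider, taskId);
      return { content: [{ type: "text" as const, text: JSON.stringify(status) }] };
    },
  );
}
```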

Key Takeaways

For AI agent builders: Native RPC (direct method calls between Workers, not over HTTP) provides zero-latency communication with persistent state that HTTP-based abstractions cannot match.

The architecture demonstrates that Durable Objects enable genuine fire-and-forget delegation with persistent state and background execution — capabilities conversational tools like Claude Code subagents cannot provide.

Impact: Async subagents transformed the workflow from single-threaded blocking to parallel execution. The Integration Builder delegates complex tasks and continues building while research occurs autonomously in the background.

While future developments may bring streaming responses or protocol extensions, native RPC with Durable Object persistence currently offers reliable, scalable subagent delegation.