# Remote Agents (Temporal + HTTP)
This example demonstrates a Temporal orchestrator agent delegating to remote specialist agents running on a separate HTTP service. It shows:
- Cross-runtime orchestration (Temporal orchestrator + JS runtime service)
- `HttpRemoteAgentTransport` for HTTP/SSE communication
- `AgentServer` with Express for hosting remote agents
- `createRemoteSubAgentTool()` for transparent remote delegation
## Source Code

The full example is in `examples/remote-agents-temporal/`.
## Architecture

```mermaid
graph LR
Client["Client<br/>(starts workflow)"]
Worker["Temporal Worker<br/>Orchestrator Agent"]
Service["Express Service<br/>AgentServer"]
Researcher["Researcher Agent"]
Summarizer["Summarizer Agent"]
Client -->|"Temporal workflow"| Worker
Worker -->|"HTTP + SSE"| Service
Service --> Researcher
Service --> Summarizer
```

The orchestrator runs on Temporal for durable execution and crash recovery. The researcher and summarizer run on a lightweight Express service using `AgentServer`. Communication uses `HttpRemoteAgentTransport` (HTTP for requests, SSE for streaming).
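The SSE leg of that link delivers agent events as `data:` lines over a long-lived HTTP response. As a rough illustration of the wire format (a hand-rolled sketch, not `HttpRemoteAgentTransport`'s actual parsing code, and the `text-delta` event shape is a hypothetical example):

```typescript
// Sketch of parsing a Server-Sent Events buffer into JSON events.
// Illustrative only -- not the library's implementation.
function parseSseEvents(buffer: string): unknown[] {
  const events: unknown[] = [];
  // SSE events are separated by a blank line; each event's payload
  // arrives on one or more "data:" lines.
  for (const block of buffer.split('\n\n')) {
    const dataLines = block
      .split('\n')
      .filter((line) => line.startsWith('data:'))
      .map((line) => line.slice('data:'.length).trim());
    if (dataLines.length > 0) {
      events.push(JSON.parse(dataLines.join('\n')));
    }
  }
  return events;
}

const chunk =
  'data: {"type":"text-delta","delta":"Hel"}\n\n' +
  'data: {"type":"text-delta","delta":"lo"}\n\n';
const parsed = parseSseEvents(chunk) as Array<{ type: string; delta: string }>;
// parsed[0].delta === "Hel"; parsed[1].delta === "lo"
```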
## Prerequisites
- Node.js 18+
- Docker (for Temporal and Redis)
- OpenAI API key
## Project Structure
```text
examples/remote-agents-temporal/
├── src/
│   ├── agents/
│   │   ├── orchestrator.ts   # Parent agent with remote sub-agent tools
│   │   ├── researcher.ts     # Specialist agent (web search + notes)
│   │   └── summarizer.ts     # Specialist agent (pure LLM, no tools)
│   ├── types.ts              # Shared Zod schemas
│   ├── server.ts             # Express server hosting agents
│   ├── workflows.ts          # Temporal workflow
│   ├── activities.ts         # Temporal activities
│   ├── worker.ts             # Temporal worker entry point
│   └── client.ts             # Client that starts the workflow
├── docker-compose.yml
└── package.json
```

## Running the Example

### 1. Install Dependencies
```bash
cd examples/remote-agents-temporal
npm install
```

### 2. Set Up Environment
```bash
cp .env.example .env
# Edit .env and add your OPENAI_API_KEY
```

### 3. Start Infrastructure

```bash
npm run docker:up
```

This starts Temporal on `localhost:7233`.

### 4. Start the Remote Agent Service
```bash
# Terminal 1
npm run server
```

The service starts on `http://localhost:4000` with two agents:

- `researcher` — Searches for information and takes notes
- `summarizer` — Summarizes text into key points
### 5. Start the Temporal Worker

```bash
# Terminal 2
npm run worker
```

### 6. Run the Client

```bash
# Terminal 3
npm run client "benefits of TypeScript"
```

## Key Components
### Remote Agent Service

The server uses `AgentServer` to host specialist agents:
```typescript
// src/server.ts
import express from 'express';
import { AgentServer, createHttpAdapter, createExpressAdapter } from '@helix-agents/agent-server';
import { JSAgentExecutor } from '@helix-agents/runtime-js';
import { VercelAIAdapter } from '@helix-agents/llm-vercel';
import { InMemoryStateStore, InMemoryStreamManager } from '@helix-agents/store-memory';
import { ResearcherAgent } from './agents/researcher';
import { SummarizerAgent } from './agents/summarizer';

const stateStore = new InMemoryStateStore();
const streamManager = new InMemoryStreamManager();
const executor = new JSAgentExecutor(stateStore, streamManager, new VercelAIAdapter());

const agentServer = new AgentServer({
  agents: {
    researcher: ResearcherAgent,
    summarizer: SummarizerAgent,
  },
  stateStore,
  streamManager,
  executor,
});

const app = express();
app.use(express.json());
app.use('/', createExpressAdapter(createHttpAdapter(agentServer)));
app.listen(4000);
```

This exposes six endpoints: `/start`, `/resume`, `/sse`, `/status`, `/interrupt`, `/abort`.
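To make the request/stream split concrete, here is a self-contained toy version of the protocol using only Node's `http` module: a fake agent service that accepts `POST /start` and streams one SSE event on `GET /sse`. This mimics only the endpoint shapes above; the payloads are invented and real `AgentServer` responses will differ.

```typescript
import http from 'node:http';

// Toy service mimicking the /start + /sse endpoint shapes (payloads invented).
const server = http.createServer((req, res) => {
  if (req.method === 'POST' && req.url === '/start') {
    res.writeHead(200, { 'Content-Type': 'application/json' });
    res.end(JSON.stringify({ executionId: 'exec-1' }));
  } else if (req.method === 'GET' && req.url?.startsWith('/sse')) {
    res.writeHead(200, { 'Content-Type': 'text/event-stream' });
    res.write('data: {"type":"output","summary":"done"}\n\n');
    res.end();
  } else {
    res.writeHead(404).end();
  }
});

async function demo(): Promise<string> {
  await new Promise<void>((resolve) => server.listen(0, resolve));
  const { port } = server.address() as { port: number };
  const base = `http://localhost:${port}`;

  // 1. Start an execution (plain HTTP request leg).
  const startRes = await fetch(`${base}/start`, { method: 'POST' });
  const { executionId } = (await startRes.json()) as { executionId: string };

  // 2. Consume the event stream (SSE leg).
  const sseRes = await fetch(`${base}/sse?executionId=${executionId}`);
  const body = await sseRes.text();
  server.close();
  return body.trim();
}
```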
### Orchestrator Agent

The orchestrator uses `createRemoteSubAgentTool` to delegate to remote agents:
```typescript
// src/agents/orchestrator.ts
import { z } from 'zod';
import { openai } from '@ai-sdk/openai';
import {
  defineAgent,
  createRemoteSubAgentTool,
  HttpRemoteAgentTransport,
} from '@helix-agents/core';
import {
  ResearcherOutputSchema,
  SummarizerOutputSchema,
  OrchestratorOutputSchema,
} from '../types';

const transport = new HttpRemoteAgentTransport({
  url: process.env.REMOTE_AGENT_URL || 'http://localhost:4000',
});

const researcherTool = createRemoteSubAgentTool('researcher', {
  description: 'Delegate research to a remote specialist agent',
  inputSchema: z.object({
    query: z.string().describe('The research query'),
  }),
  outputSchema: ResearcherOutputSchema,
  transport,
  remoteAgentType: 'researcher',
  timeoutMs: 120_000,
});

const summarizerTool = createRemoteSubAgentTool('summarizer', {
  description: 'Delegate summarization to a remote specialist agent',
  inputSchema: z.object({
    text: z.string().describe('The text to summarize'),
  }),
  outputSchema: SummarizerOutputSchema,
  transport,
  remoteAgentType: 'summarizer',
  timeoutMs: 60_000,
});

export const OrchestratorAgent = defineAgent({
  name: 'orchestrator',
  outputSchema: OrchestratorOutputSchema,
  tools: [researcherTool, summarizerTool],
  systemPrompt: `You are a research orchestrator.
1. Use the researcher to gather information
2. Use the summarizer to distill findings
3. Call __finish__ with your final output`,
  llmConfig: { model: openai('gpt-4o-mini') },
  maxSteps: 10,
});
```

### Specialist Agents
The researcher agent uses tools (web search, note-taking):
```typescript
// src/agents/researcher.ts
export const ResearcherAgent = defineAgent({
  name: 'researcher',
  stateSchema: ResearcherStateSchema,
  outputSchema: ResearcherOutputSchema,
  tools: [webSearchTool, takeNotesTool],
  systemPrompt: (state) => `You are a research specialist...`,
  llmConfig: { model: openai('gpt-4o-mini') },
  maxSteps: 10,
});
```

The summarizer is a pure LLM agent (no tools):
```typescript
// src/agents/summarizer.ts
export const SummarizerAgent = defineAgent({
  name: 'summarizer',
  outputSchema: SummarizerOutputSchema,
  tools: [],
  systemPrompt: `You are a summarization expert...`,
  llmConfig: { model: openai('gpt-4o-mini') },
  maxSteps: 5,
});
```

### Shared Schemas
Output schemas are shared between the orchestrator and the remote service:
```typescript
// src/types.ts
export const ResearcherOutputSchema = z.object({
  findings: z.array(
    z.object({
      title: z.string(),
      snippet: z.string(),
      url: z.string(),
    })
  ),
  rawNotes: z.array(z.string()),
});

export const SummarizerOutputSchema = z.object({
  keyPoints: z.array(z.string()),
  summary: z.string(),
});

export const OrchestratorOutputSchema = z.object({
  topic: z.string(),
  researchFindings: z.array(z.string()),
  summary: z.string(),
  sources: z.array(z.string()),
});
```

## Execution Flow
- The client starts a Temporal workflow for the orchestrator
- The orchestrator LLM calls `subagent__researcher` with a query
- The Temporal workflow routes the call to a dedicated `executeRemoteSubAgentCall` activity that calls `POST /start` on the remote service, then consumes `GET /sse` with crash recovery and stream proxying
- The researcher runs independently (web search, note-taking) and returns structured output
- The orchestrator LLM calls `subagent__summarizer` with the findings
- The summarizer returns key points and a summary
- The orchestrator calls `__finish__` with the final structured output
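The `executeRemoteSubAgentCall` activity essentially turns a tool call into a start-then-wait round trip against the remote service. A simplified sketch of that control flow with the transport stubbed out (the `RemoteTransport` interface and its method names are illustrative assumptions, not the example's actual code):

```typescript
// Hedged sketch of a start-then-wait remote sub-agent call.
// RemoteTransport is a stand-in for HttpRemoteAgentTransport;
// its method names are assumptions for illustration.
interface RemoteTransport {
  start(agentType: string, input: unknown): Promise<{ executionId: string }>;
  waitForResult(executionId: string): Promise<unknown>;
}

async function callRemoteSubAgent(
  transport: RemoteTransport,
  agentType: string,
  input: unknown
): Promise<unknown> {
  // 1. POST /start on the remote service; the returned executionId is the
  //    durable handle a retried Temporal activity can resume from.
  const { executionId } = await transport.start(agentType, input);
  // 2. Consume GET /sse (proxying stream events) until the terminal event
  //    carries the agent's structured output.
  return transport.waitForResult(executionId);
}

// Usage with a stub transport:
const stub: RemoteTransport = {
  start: async () => ({ executionId: 'exec-1' }),
  waitForResult: async (executionId) => ({ executionId, summary: 'done' }),
};
```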
## Production Considerations

- Replace `InMemoryStateStore` with `RedisStateStore` on the agent service
- Replace `InMemoryStreamManager` with `RedisStreamManager` on the agent service
- Add authentication headers to the transport
- Set appropriate `timeoutMs` values based on expected agent execution times
- Use Temporal Cloud for production workflow execution
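As a configuration fragment only: if the Redis stores and the transport accept the options sketched below (the `@helix-agents/store-redis` package name, the constructor options, and the `headers` option are all assumptions; check the API reference before using them), the production wiring might look like:

```typescript
// Assumed package, class, and option names -- verify against the API reference.
import { RedisStateStore, RedisStreamManager } from '@helix-agents/store-redis';
import { HttpRemoteAgentTransport } from '@helix-agents/core';

const stateStore = new RedisStateStore({ url: process.env.REDIS_URL });
const streamManager = new RedisStreamManager({ url: process.env.REDIS_URL });

const transport = new HttpRemoteAgentTransport({
  url: process.env.REMOTE_AGENT_URL!,
  // Hypothetical option: attach auth headers if the transport supports them.
  headers: { Authorization: `Bearer ${process.env.AGENT_SERVICE_TOKEN}` },
});
```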
## Next Steps
- Remote Agents Guide — Full guide with patterns and configuration
- API Reference — AgentServer and transport API
- Temporal Runtime — Temporal runtime reference
- Sub-Agents Guide — Local sub-agent orchestration