JavaScript Runtime
The JavaScript runtime (@helix-agents/runtime-js) executes agents in-process within your Node.js application. It's the simplest runtime to set up and ideal for development, testing, and simple deployments.
When to Use
Good fit:
- Local development and testing
- Prototyping and experimentation
- Single-process deployments
- Short-lived agent executions (< 30 minutes)
- Serverless functions (Lambda, Cloud Functions)
Not ideal for:
- Long-running agents that may outlive the process
- Production workloads requiring crash recovery
- Multi-process distributed systems
Installation
```bash
npm install @helix-agents/runtime-js @helix-agents/store-memory
```

Or use the SDK, which bundles everything:

```bash
npm install @helix-agents/sdk
```

Basic Setup
```typescript
import { JSAgentExecutor, InMemoryStateStore, InMemoryStreamManager } from '@helix-agents/sdk';
import { VercelAIAdapter } from '@helix-agents/llm-vercel';

// Create stores
const stateStore = new InMemoryStateStore();
const streamManager = new InMemoryStreamManager();
const llmAdapter = new VercelAIAdapter();

// Create executor
const executor = new JSAgentExecutor(stateStore, streamManager, llmAdapter);
```

Constructor
```typescript
new JSAgentExecutor(
  stateStore: StateStore,
  streamManager: StreamManager,
  llmAdapter: LLMAdapter
)
```

Parameters:
- `stateStore` - Where agent state is persisted (Memory, Redis)
- `streamManager` - How events are streamed (Memory, Redis)
- `llmAdapter` - LLM provider adapter (Vercel, Custom)
Executing Agents
Basic Execution
```typescript
const handle = await executor.execute(MyAgent, 'Research the benefits of TypeScript');

// Stream events
const stream = await handle.stream();
if (stream) {
  for await (const chunk of stream) {
    if (chunk.type === 'text_delta') {
      process.stdout.write(chunk.delta);
    }
  }
}

// Get result
const result = await handle.result();
console.log(result.output);
```

With Multiple Messages
Send multiple user messages in a single execution. This is useful for injecting context alongside a question, providing system-generated context, or batching messages from an async channel:
```typescript
const handle = await executor.execute(
  MyAgent,
  {
    message: [
      {
        role: 'user',
        content: 'Background: user is on the enterprise plan',
        metadata: { source: 'system' },
      },
      { role: 'user', content: 'What features do I have access to?' },
    ],
  },
  { sessionId: 'my-session' }
);
```

Each message in the array is persisted individually and is visible to the LLM. You can attach files to any message:
```typescript
const handle = await executor.execute(
  MyAgent,
  {
    message: [
      {
        role: 'user',
        content: 'Analyze this screenshot',
        files: [
          {
            data: 'data:image/png;base64,iVBORw0KGgo...',
            mediaType: 'image/png',
            filename: 'screenshot.png',
          },
        ],
      },
    ],
  },
  { sessionId: 'my-session' }
);
```

String and multi-message inputs can be mixed freely across turns in the same session.
With Initial State
```typescript
const handle = await executor.execute(MyAgent, {
  message: 'Continue the research',
  state: {
    previousFindings: ['Finding 1', 'Finding 2'],
    phase: 'analyzing',
  },
});
```

With Options
```typescript
const handle = await executor.execute(MyAgent, 'Research topic', {
  sessionId: 'my-session-id', // Session ID for conversation continuity
  parentStreamId: 'parent-stream', // For sub-agent streaming
  parentSessionId: 'parent-session-id', // Parent session reference (for sub-agents)
});
```

With Conversation History
When you manage your own message history externally:
```typescript
const handle = await executor.execute(MyAgent, {
  message: 'Continue from here',
  messages: [
    { role: 'user', content: 'Previous question' },
    { role: 'assistant', content: 'Previous answer' },
  ],
});
```

Execution Handle
The handle returned from execute() provides these methods:
stream()
Get an async iterable of stream chunks:
```typescript
const stream = await handle.stream();
if (stream) {
  for await (const chunk of stream) {
    console.log(chunk.type, chunk);
  }
}
```

Returns `null` if streaming is not available.
result()
Wait for and get the final result:
```typescript
const result = await handle.result();
if (result.status === 'completed') {
  console.log('Output:', result.output);
} else {
  console.log('Failed:', result.error);
}
```

abort(reason?)
Cancel the agent execution:
```typescript
await handle.abort('User requested cancellation');
```

The agent checks the abort signal between steps and during tool execution.
getState()
Get current agent state:
```typescript
const state = await handle.getState();
console.log('Step count:', state.stepCount);
console.log('Messages:', state.messages.length);
console.log('Custom state:', state.customState);
```

canResume()
Check if the agent can be resumed:
```typescript
const { canResume, reason } = await handle.canResume();
if (canResume) {
  const newHandle = await handle.resume();
}
```

resume()
Resume a paused or interrupted agent:
```typescript
const { canResume, reason } = await handle.canResume();
if (canResume) {
  const newHandle = await handle.resume();
  const result = await newHandle.result();
}
```

retry()
Retry a failed execution:
```typescript
const result = await handle.result();
if (result.status === 'failed') {
  const retryHandle = await handle.retry();
  const retryResult = await retryHandle.result();
}
```

Options:
- `mode: 'from_checkpoint'` (default) - Restore from checkpoint
- `mode: 'from_start'` - Clear state and start fresh
- `checkpointId` - Specific checkpoint to restore from
- `message` - Replacement message
send()
Continue the conversation with another message. This is syntactic sugar for continuing the conversation in the same session:
// Simple string input (becomes user message)
const handle2 = await handle1.send('Tell me more about that');
const result = await handle2.result();
// Message array input (for advanced use cases)
const handle2 = await handle1.send([
{ role: 'user', content: 'Here is some context' },
{ role: 'user', content: 'Now my actual question' },
]);
// With state override
const handle2 = await handle1.send('Continue', { state: { mood: 'curious' } });State inheritance: Both string and Message[] inputs inherit state from the source run when no explicit state is provided. Use the state option to override.
Important: send() waits for the current execution to complete before starting the new one. If you need parallel conversations, create separate handles via executor.execute().
Reconnecting to Sessions
Use getHandle() to reconnect to an existing session:
// Get handle for existing session
const handle = await executor.getHandle(MyAgent, 'session-123');
if (handle) {
// Check if we can resume
const { canResume, reason } = await handle.canResume();
if (canResume) {
// Resume execution
const resumedHandle = await handle.resume();
const result = await resumedHandle.result();
} else {
// Get completed result
const result = await handle.result();
}
}The sessionId is the primary identifier for conversation continuity. Persist it to reconnect to conversations across server restarts or process boundaries.
Multi-Turn Conversations
Enable conversation continuation where each message builds on the previous exchange using the session-centric model.
Using sessionId
Pass the same sessionId to continue a conversation within the same session:
// First message - creates a new session
```typescript
// First message - creates a new session
const handle1 = await executor.execute(agent, 'Hello, my name is Alice', {
  sessionId: 'session-123',
});
await handle1.result();

// Continue the conversation - the agent remembers the name
const handle2 = await executor.execute(agent, 'What is my name?', {
  sessionId: 'session-123', // Same session continues the conversation
});
const result = await handle2.result();
// Agent responds: "Your name is Alice"
```

Using handle.send()
Syntactic sugar for continuation within the same session:
```typescript
const handle1 = await executor.execute(agent, 'Hello, my name is Alice', {
  sessionId: 'session-123',
});
await handle1.result();

// Equivalent to execute() with the same sessionId
const handle2 = await handle1.send('What is my name?');
const result = await handle2.result();
```

Using Direct Messages
When you manage your own message history in an external database:
```typescript
const handle = await executor.execute(agent, {
  message: 'What is my name?',
  messages: [
    { role: 'user', content: 'Hello, my name is Alice' },
    { role: 'assistant', content: 'Hello Alice! How can I help you today?' },
  ],
});
```

This is useful when:
- You store conversation history in your own database
- You want full control over what context the agent sees
- You're building chat features outside the framework's state store
Note: System messages in messages are filtered out and re-added dynamically by the agent.
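That filtering step can be sketched as follows (an illustrative stand-alone function, not the framework's actual code; `prepareHistory` and the `Message` shape are assumptions):

```typescript
// Illustrative: strip caller-provided system messages so the agent's own
// system prompt can be added dynamically. Not the framework's actual code.
type Message = { role: 'system' | 'user' | 'assistant'; content: string };

function prepareHistory(provided: Message[], systemPrompt: string): Message[] {
  // Drop any system messages supplied by the caller
  const withoutSystem = provided.filter((m) => m.role !== 'system');
  // The agent's current system prompt is prepended fresh on every step
  return [{ role: 'system', content: systemPrompt }, ...withoutSystem];
}

const history = prepareHistory(
  [
    { role: 'system', content: 'stale prompt from caller' }, // dropped
    { role: 'user', content: 'Hello, my name is Alice' },
    { role: 'assistant', content: 'Hello Alice!' },
  ],
  'You are a helpful assistant.'
);
console.log(history.map((m) => m.role)); // ['system', 'user', 'assistant']
```

This keeps the agent's prompt authoritative even when callers replay history that contains an outdated system message.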
Input Formats
The message field accepts two formats:
- `string` - Simple text message (sugar for a single user message)
- `UserInputMessage[]` - Multiple user messages, each with optional `metadata` and `files`
```typescript
// String shorthand
await executor.execute(agent, 'Hello');

// Structured multi-message input
await executor.execute(agent, {
  message: [
    { role: 'user', content: 'System context', metadata: { hidden: true } },
    { role: 'user', content: 'User question' },
  ],
});
```

Behavior Table
Both messages and state have override semantics when combined with existing session state:
| Input | Messages Source | State Source |
|---|---|---|
| `message` only (new session) | Empty (fresh) | Empty (fresh) |
| `message` + `sessionId` (existing) | From session | From session |
| `message` + `messages` | From `messages` | Empty (fresh) |
| `message` + `state` | Empty (fresh) | From `state` |
| `message` + `sessionId` + `messages` | From `messages` (override) | From session |
| `message` + `sessionId` + `state` | From session | From `state` (override) |
| All four | From `messages` (override) | From `state` (override) |
Key points:
- `message` can be a `string` or `UserInputMessage[]` in all rows above
- Sessions contain all messages and state for a conversation
- Each execution creates a new run within the session (for debugging, billing, tracing)
- `messages` (conversation history) overrides history from the session when both are provided
- `state` overrides state from the session when both are provided
- Non-existent sessions are automatically created on first message
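The override semantics can be condensed into a small resolution function (a hypothetical illustration of the table's rules, not framework code; all names here are invented for the sketch):

```typescript
// Hypothetical sketch of the override semantics: explicit input wins,
// then the existing session, then a fresh value. Not framework code.
type Message = { role: string; content: string };

interface ExecuteInput {
  messages?: Message[];            // explicit history override
  state?: Record<string, unknown>; // explicit state override
}

interface Session {
  messages: Message[];
  state: Record<string, unknown>;
}

function resolveSources(input: ExecuteInput, session?: Session) {
  // History: explicit messages override session history, else fresh
  const history = input.messages ?? session?.messages ?? [];
  // State: explicit state overrides session state, else fresh
  const state = input.state ?? session?.state ?? {};
  return { history, state };
}

// message + sessionId + messages: history comes from `messages` (override),
// state still comes from the session
const session: Session = {
  messages: [{ role: 'user', content: 'old question' }],
  state: { phase: 'researching' },
};
const resolved = resolveSources(
  { messages: [{ role: 'user', content: 'replacement history' }] },
  session
);
console.log(resolved.history[0].content); // 'replacement history'
console.log(resolved.state.phase); // 'researching'
```

Each row of the table is just a different combination of which arguments are present when this resolution runs.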
Branching Conversations
Use the branch option to create a new session from an existing checkpoint:
```typescript
// Create a new session branching from an existing checkpoint
const handle = await executor.execute(agent, 'What if we tried a different approach?', {
  sessionId: 'new-session-456',
  branch: { fromSessionId: 'session-123', checkpointId: 'cp_abc' },
});
// new-session-456 starts with state from checkpoint cp_abc
```

Method Comparison
| Method | Purpose | Stream Behavior | Valid From Status |
|---|---|---|---|
| `execute()` | New/continue conversation | Resets stream | Any except `running` |
| `resume()` | Continue after interrupt | Preserves stream | `interrupted`, `paused` |
| `retry()` | Recover from failure | Resets to checkpoint | `failed` |
Common Patterns
```typescript
// Multi-turn conversation
const h1 = await executor.execute(agent, 'Hello', { sessionId: 'chat-1' });
await h1.result();
const h2 = await executor.execute(agent, 'Tell me more', { sessionId: 'chat-1' });

// Interrupt and resume
const h3 = await executor.execute(agent, 'Long task', { sessionId: 'task-1' });
await h3.interrupt();
const resumed = await h3.resume();

// Retry after failure
const h4 = await executor.execute(agent, 'Risky task', { sessionId: 'task-2' });
const result = await h4.result();
if (result.status === 'failed') {
  const retried = await h4.retry();
}
```

Concurrency Protection
The JS runtime prevents concurrent executions on the same session:
```typescript
import { AgentAlreadyRunningError } from '@helix-agents/core';

const handle1 = await executor.execute(agent, 'First', { sessionId: 'sess-1' });

// Throws AgentAlreadyRunningError
try {
  await executor.execute(agent, 'Second', { sessionId: 'sess-1' });
} catch (error) {
  if (error instanceof AgentAlreadyRunningError) {
    console.log('Session already running');
  }
}
```

Protection Mechanisms
| Method | Mechanism |
|---|---|
| `execute()` | Status check + `StaleStateError` handling |
| `resume()` | CAS (`compareAndSetStatus`) |
| `retry()` | CAS (`compareAndSetStatus`) |
When concurrent calls race past status checks, optimistic locking via version numbers ensures only one succeeds.
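The CAS mechanism can be illustrated with a minimal version-numbered store (a simplified sketch of the idea; the real StateStore API and field names differ):

```typescript
// Simplified sketch of optimistic locking with version numbers.
// The real StateStore implementation differs; this shows only the mechanism.
type Status = 'idle' | 'running' | 'failed';

class VersionedStatus {
  private status: Status = 'idle';
  private version = 0;

  read() {
    return { status: this.status, version: this.version };
  }

  // Succeeds only if nobody else has written since `expectedVersion` was read
  compareAndSetStatus(expectedVersion: number, next: Status): boolean {
    if (this.version !== expectedVersion) return false; // stale read loses the race
    this.status = next;
    this.version++;
    return true;
  }
}

const store = new VersionedStatus();
const a = store.read();
const b = store.read();

// Two callers race to start the same session; only one CAS succeeds
console.log(store.compareAndSetStatus(a.version, 'running')); // true
console.log(store.compareAndSetStatus(b.version, 'running')); // false (stale)
```

The loser sees a failed CAS (or a StaleStateError in the real runtime) instead of silently clobbering the winner's state.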
Execution Flow
Here's how the JS runtime executes an agent:
```mermaid
flowchart TB
    Start["execute() called"]

    subgraph Init ["1. Initialize state"]
        I1["Create run ID and stream ID"]
        I2["Parse initial state from schema defaults"]
        I3["Add user message"]
        I4["Save state to store"]
    end

    subgraph Loop ["2. Execution loop (while status === 'running')"]
        subgraph Build ["3. Build messages"]
            B1["Add system prompt"]
            B2["Include conversation history"]
        end
        subgraph LLM ["4. Call LLM"]
            L1["Stream text deltas"]
            L2["Get tool calls"]
        end
        subgraph Process ["5. Process step result"]
            P1["Check for __finish__ tool"]
            P2["Extract output if complete"]
            P3["Plan tool executions"]
        end
        subgraph Tools ["6. Execute tools (parallel)"]
            T1["Regular tools: execute directly"]
            T2["Sub-agent tools: recursive execute()"]
        end
        subgraph Update ["7. Update state"]
            U1["Add assistant message"]
            U2["Add tool results"]
            U3["Save to store"]
        end
        subgraph Stop ["8. Check stop conditions"]
            S1["maxSteps reached?"]
            S2["stopWhen predicate?"]
            S3["Output produced?"]
        end
    end

    Return["9. Return handle immediately<br/>(Execution continues in background)"]

    Start --> Init
    Init --> Loop
    Build --> LLM --> Process --> Tools --> Update --> Stop
    Stop -->|Continue| Build
    Loop --> Return
```

Parallel Tool Execution
The JS runtime executes tool calls in parallel:
```typescript
// If the LLM returns multiple tool calls:
// [search('topic A'), search('topic B'), analyze('data')]
// All three execute concurrently
```

This includes sub-agent calls - multiple sub-agents can run simultaneously.
Parallel state updates:
When parallel tools update state, the runtime uses delta merging:
- Array pushes are accumulated (not overwritten)
- Object properties are merged
- Conflicts are resolved via last-write-wins
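These merge rules can be sketched as a small reduction over deltas (an illustrative sketch only; the runtime's actual delta format and `mergeDeltas` are assumptions):

```typescript
// Illustrative sketch of delta merging for parallel tool state updates.
// The runtime's real delta representation may differ.
type Delta = {
  push?: Record<string, unknown[]>; // array appends to accumulate
  set?: Record<string, unknown>;    // property writes (last-write-wins)
};

function mergeDeltas(base: Record<string, unknown>, deltas: Delta[]) {
  const result: Record<string, unknown> = { ...base };
  for (const delta of deltas) {
    // Accumulate array pushes instead of overwriting
    for (const [key, items] of Object.entries(delta.push ?? {})) {
      const existing = Array.isArray(result[key]) ? (result[key] as unknown[]) : [];
      result[key] = [...existing, ...items];
    }
    // Merge object properties; later deltas win on conflict
    Object.assign(result, delta.set ?? {});
  }
  return result;
}

// Two tools ran in parallel: both pushed findings, both set `phase`
const merged = mergeDeltas(
  { findings: ['f0'], phase: 'start' },
  [
    { push: { findings: ['f1'] }, set: { phase: 'searching' } },
    { push: { findings: ['f2'] }, set: { phase: 'analyzing' } },
  ]
);
console.log(merged.findings); // ['f0', 'f1', 'f2'] - pushes accumulated
console.log(merged.phase); // 'analyzing' - last write wins
```

The key design point is that appends commute (no tool's findings are lost) while scalar writes deliberately do not.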
Sub-Agent Handling
Sub-agents execute recursively within the same process:
```typescript
// Parent agent calls a sub-agent tool
// JS runtime:
// 1. Detects the sub-agent tool call
// 2. Creates new state for the sub-agent (same streamId)
// 3. Recursively calls runLoop()
// 4. Sub-agent events stream to the same stream
// 5. Sub-agent output becomes the tool result
```

Sub-agents share the stream but have isolated state.
Persistent Sub-Agent Handling
Persistent sub-agents configured via persistentAgents are managed through companion tools (companion__spawnAgent, companion__sendMessage, etc.). In the JS runtime:
- Blocking spawn: The child agent's full execution loop runs inline within the parent's step. The parent waits for the child to complete before continuing.
- Non-blocking spawn: The child starts immediately and the parent continues. The child runs concurrently via Promise.resolve().
- sendMessage: Sends a new user message to an already-running persistent child, triggering another execution loop.
- Companion tool results are returned directly as tool results in the parent's conversation.
Persistent children are tracked via SubSessionRef entries with mode: 'persistent' in the state store.
Error Handling
Tool Errors
Tool errors are caught and returned to the LLM:
```typescript
const searchTool = defineTool({
  name: 'search',
  execute: async (input) => {
    throw new Error('API rate limited');
  },
});

// LLM sees: "Tool 'search' failed: API rate limited"
// The LLM can decide to retry, try a different approach, etc.
```

Execution Errors
Fatal errors fail the agent:
```typescript
try {
  const result = await handle.result();
} catch (error) {
  // LLM API failed, state store failed, etc.
}
```

Check result.status for graceful handling:
```typescript
const result = await handle.result();
if (result.status === 'failed') {
  console.error('Agent failed:', result.error);
}
```

Error Classification
When LLM calls fail, errors are automatically classified into typed ErrorChunk events in the stream:
```typescript
import { HelixError } from '@helix-agents/core';

// Stream error chunks include classification when available
for await (const chunk of stream) {
  if (chunk.type === 'error') {
    console.log(chunk.error); // 'Provider overloaded'
    console.log(chunk.code); // 'provider_overloaded' (from HelixError classification)
    console.log(chunk.recoverable); // true (whether the client can retry)
  }
}
```

The JS runtime's onError callback checks whether the error is a HelixError instance. If so, the ErrorChunk includes code and recoverable from the classified error. For unclassified errors, code is omitted and recoverable defaults to false.
See Error Handling Guide for the complete error pipeline.
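The classification branch described above can be sketched in isolation (a hypothetical stand-in: the real HelixError lives in @helix-agents/core and `classifyForStream` is an invented name):

```typescript
// Hypothetical sketch of mapping caught errors to ErrorChunk fields.
// HelixError here is a local stand-in for the class in @helix-agents/core.
class HelixError extends Error {
  constructor(
    message: string,
    public code: string,
    public recoverable: boolean
  ) {
    super(message);
  }
}

interface ErrorChunk {
  type: 'error';
  error: string;
  code?: string;       // omitted for unclassified errors
  recoverable: boolean; // defaults to false when unclassified
}

function classifyForStream(err: unknown): ErrorChunk {
  if (err instanceof HelixError) {
    // Classified: surface code and recoverability to the client
    return { type: 'error', error: err.message, code: err.code, recoverable: err.recoverable };
  }
  // Unclassified: omit code, default recoverable to false
  const message = err instanceof Error ? err.message : String(err);
  return { type: 'error', error: message, recoverable: false };
}

console.log(classifyForStream(new HelixError('Provider overloaded', 'provider_overloaded', true)));
console.log(classifyForStream(new Error('boom')));
```

Clients can then key retry logic off `recoverable` without parsing error strings.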
Limitations
No Crash Recovery
If the process dies, in-flight executions are lost:
```typescript
// Process starts
const handle = await executor.execute(agent, 'Long task');

// Process crashes here - execution is lost

// After restart, state exists but execution has stopped
const reconnected = await executor.getHandle(agent, handle.sessionId);
// reconnected.canResume() returns true
// But the original execution context is gone
```

Mitigation: Use Redis stores to preserve state, then resume:
```typescript
// After crash/restart
const handle = await executor.getHandle(agent, savedSessionId);
if (handle) {
  const { canResume } = await handle.canResume();
  if (canResume) {
    const resumed = await handle.resume();
    // Continues from the last saved state
  }
}
```

No Distributed Execution
Everything runs in one process. For distributed execution, use Temporal.
No Per-Tool Timeouts
Tools run without individual timeout enforcement. Add your own:
```typescript
const toolWithTimeout = defineTool({
  name: 'slow_api',
  execute: async (input, context) => {
    const timeoutPromise = new Promise((_, reject) =>
      setTimeout(() => reject(new Error('Tool timeout')), 30000)
    );
    const apiPromise = callSlowApi(input);
    return Promise.race([apiPromise, timeoutPromise]);
  },
});
```

Best Practices
1. Use Redis for Production
In-memory stores lose data on restart:
```typescript
import { RedisStateStore, RedisStreamManager } from '@helix-agents/store-redis';

const executor = new JSAgentExecutor(
  new RedisStateStore(redis),
  new RedisStreamManager(redis),
  llmAdapter
);
```

2. Handle Abort Signals
Check abort signal in long-running tools:
```typescript
execute: async (input, context) => {
  for (const item of items) {
    if (context.abortSignal.aborted) {
      throw new Error('Aborted');
    }
    await processItem(item);
  }
};
```

3. Set Appropriate maxSteps
Prevent runaway agents:
```typescript
const agent = defineAgent({
  maxSteps: 20, // A reasonable limit for your use case
});
```

4. Monitor Step Count
Track execution progress:
```typescript
// In your application code, via the execution handle
const state = await handle.getState();
console.log(`Step ${state.stepCount} of ${agent.maxSteps}`);
```

Next Steps
- Temporal Runtime - For durable, production workloads
- Cloudflare Runtime - For edge deployment
- Storage: Memory - In-memory stores for development
- Storage: Redis - Production-ready stores