Core Concepts
Before building agents, let's understand the core concepts in Helix Agents.
Agents
An agent is a configuration object that defines how an AI assistant behaves. It specifies:
- What the agent knows (system prompt)
- What it can do (tools)
- What data it tracks (state schema)
- What it produces (output schema)
- How it thinks (LLM configuration)
import { defineAgent } from '@helix-agents/core';
import { openai } from '@ai-sdk/openai'; // model provider used below (assumes the AI SDK provider package)
import { z } from 'zod';
const ResearchAgent = defineAgent({
name: 'researcher',
systemPrompt: 'You are a research assistant. Search for information and summarize findings.',
tools: [searchTool, summarizeTool],
stateSchema: z.object({
searchCount: z.number().default(0),
findings: z.array(z.string()).default([]),
}),
outputSchema: z.object({
summary: z.string(),
sources: z.array(z.string()),
}),
llmConfig: {
model: openai('gpt-4o'),
temperature: 0.7,
},
maxSteps: 20,
});

An agent definition is just data: it doesn't execute anything. You pass it to a runtime to actually run.
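For example, you might pass the definition to an executor (the runtime used throughout this page) and await its result. This is a minimal sketch that assumes an already-configured executor instance:

```typescript
// `executor` is assumed to be an already-configured Helix Agents runtime.
const handle = await executor.execute(ResearchAgent, 'Research AI agents');

// The handle exposes the stream and the final result (see Streaming below)
const result = await handle.result();
console.log(result);
```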
Tools
Tools are functions that agents can call. They're how agents interact with the world beyond generating text.
import { defineTool } from '@helix-agents/core';
import { z } from 'zod';
const searchTool = defineTool({
name: 'search',
description: 'Search the web for information',
inputSchema: z.object({
query: z.string().describe('Search query'),
maxResults: z.number().default(5),
}),
outputSchema: z.object({
results: z.array(
z.object({
title: z.string(),
url: z.string(),
snippet: z.string(),
})
),
}),
execute: async (input, context) => {
// Perform the search
const results = await performSearch(input.query, input.maxResults);
return { results };
},
});

Tools receive a context object that provides:
- getState<T>() - Read current agent state
- updateState<T>(draft => {...}) - Modify state using Immer
- emit(eventName, data) - Emit custom streaming events
- abortSignal - Check for cancellation
- agentId, agentType - Execution context
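For instance, a tool might use several of these capabilities at once: checking for cancellation, reporting progress as custom events, and updating custom state. The sketch below is illustrative only; the queries input field and the search_progress event name are made up for the example:

```typescript
execute: async (input, context) => {
  const results = [];

  for (const query of input.queries) {
    // Stop early if the run has been cancelled
    if (context.abortSignal.aborted) break;

    // Emit a custom streaming event so a UI can show progress
    context.emit('search_progress', { query, completed: results.length });

    results.push(await performSearch(query, 1));
  }

  // Record how many searches ran in the agent's custom state (Immer draft)
  context.updateState<typeof stateSchema>((draft) => {
    draft.searchCount += results.length;
  });

  return { results };
},
```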
State
State is data that persists across an agent's execution steps. There are two types:
Built-in State
Every agent automatically tracks:
- messages - Conversation history
- stepCount - Number of LLM calls made
- status - Running, completed, failed, etc.
- output - Final structured output (if any)
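Conceptually, the built-in fields look roughly like the type below. This is an illustrative sketch, not the framework's actual type definitions:

```typescript
// Illustrative only - field names follow the list above, types are assumptions.
interface BuiltInAgentState {
  messages: Array<{ role: 'system' | 'user' | 'assistant' | 'tool'; content: string }>;
  stepCount: number;   // number of LLM calls made so far
  status: 'running' | 'completed' | 'failed' | 'interrupted';
  output?: unknown;    // final structured output, if any
}
```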
Custom State
You define additional state with a Zod schema:
const stateSchema = z.object({
searchCount: z.number().default(0),
findings: z.array(z.string()).default([]),
currentTopic: z.string().optional(),
});

Tools can read and modify this state:
execute: async (input, context) => {
// Read state
const state = context.getState<typeof stateSchema>();
console.log(`Search count: ${state.searchCount}`);
// Update state using Immer's draft pattern
context.updateState<typeof stateSchema>((draft) => {
draft.searchCount++;
draft.findings.push(input.query);
});
return { results };
};

State is persisted to the StateStore after each step, enabling resume after crashes.
Streaming
Streaming provides real-time visibility into agent execution. The framework emits typed events:
| Event Type | Description |
|---|---|
| text_delta | Incremental text from LLM |
| thinking | Reasoning/thinking content (Claude, o-series) |
| tool_start | Tool execution beginning |
| tool_end | Tool execution complete (with result) |
| subagent_start | Sub-agent invocation beginning |
| subagent_end | Sub-agent complete (with output) |
| custom | Custom events from tools |
| state_patch | State changes (RFC 6902 format) |
| error | Error occurred |
| output | Final agent output |
| run_interrupted | Agent was interrupted by user |
| run_resumed | Agent resumed from checkpoint |
| checkpoint_created | State checkpoint was saved |
Consume streams in your application:
const handle = await executor.execute(agent, 'Research AI agents');
const stream = await handle.stream();
for await (const chunk of stream) {
switch (chunk.type) {
case 'text_delta':
process.stdout.write(chunk.delta);
break;
case 'tool_start':
console.log(`\nCalling tool: ${chunk.toolName}`);
break;
case 'tool_end':
console.log(`Tool result:`, chunk.result);
break;
case 'output':
console.log('\nFinal output:', chunk.output);
break;
}
}

Sessions
A session is the primary unit of conversation state. It contains all messages, custom state, and checkpoints for an agent conversation.
Session vs Run
- Session: A conversation container identified by sessionId. Persists all conversation state.
- Run: A single execution within a session. Multiple runs can occur in one session (after interrupts, resumes, or follow-up messages).
// Start a new session
const handle = await executor.execute(agent, { message: 'Hello' });
console.log(handle.sessionId); // e.g., 'ses_abc123'
// Continue the same session with a follow-up
const handle2 = await executor.execute(agent, { message: 'Tell me more' }, {
sessionId: handle.sessionId // Reuse the session
});
// handle2 continues from where handle left off, with full message history

Why Sessions Matter
- Efficient Storage: Messages are stored once per session, not duplicated per run (O(n) vs O(n²)).
- Natural Continuity: Reusing a sessionId automatically loads the conversation history.
- Clean Isolation: Each session (including sub-agent sessions) has independent state.
Run Tracking
Each execution within a session creates a run with metadata:
// Get the current (latest) run for a session
const currentRun = await stateStore.getCurrentRun(sessionId);
// { runId, turn, status, startSequence, stepCount, ... }
// List all runs in a session's history
const { runs, hasMore } = await stateStore.listRuns(sessionId, { limit: 10 });

Run fields:
| Field | Purpose |
|---|---|
| runId | Unique identifier for this execution |
| turn | Counter (1, 2, 3...) for each run in the session |
| status | running, completed, failed, interrupted, superseded |
| startSequence | Stream position when this run started (for filtering chunks) |
| stepCount | Number of LLM steps in this run |
Use cases for run tracking:
- Multi-run UI: Show execution history for a session
- Content deduplication: Use startSequence to filter stream chunks to the current run only (sketched below)
- Debugging: Trace which run produced which messages
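Here is a sketch of the deduplication case: replaying a session's stream but only rendering chunks that belong to the latest run. It assumes each chunk carries a sequence number; the exact field name on chunks is an assumption for illustration:

```typescript
// Find where the latest run started in the session's stream
const currentRun = await stateStore.getCurrentRun(sessionId);

for await (const chunk of await handle.stream()) {
  // `chunk.sequence` is an assumed field; compare against the run's startSequence
  if (chunk.sequence < currentRun.startSequence) continue;

  if (chunk.type === 'text_delta') process.stdout.write(chunk.delta);
}
```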
Branching
You can create a new session by branching from an existing one. This is useful for "what-if" scenarios or creating variations of a conversation.
// Branch from a specific checkpoint
const forked = await executor.execute(agent, { message: 'What if instead...' }, {
branch: {
fromSessionId: handle.sessionId,
checkpointId: 'cp_xyz' // Optional: defaults to latest checkpoint
}
});
// forked is a NEW session with state copied from the checkpoint

Key branching behaviors:
- A new sessionId is generated for the forked session
- State and messages up to the checkpoint are copied to the new session
- The original session remains unchanged and can continue independently
- Checkpoints are not copied—the new session starts fresh for checkpoint tracking
- Both sessions can continue in parallel with diverging histories
// Example: Exploring different conversation paths
const original = await executor.execute(agent, { message: 'Plan a trip to Japan' }, {
sessionId: 'trip-planning'
});
await original.result();
// Original continues with budget focus
const budget = await executor.execute(agent, { message: 'Focus on budget options' }, {
sessionId: 'trip-planning'
});
// Fork to explore luxury options without affecting original
const luxury = await executor.execute(agent, { message: 'Focus on luxury options' }, {
branch: { fromSessionId: 'trip-planning' }
});
// luxury has its own sessionId and can evolve independently

Cloning Sessions
For programmatic session copying, you can use stateStore.cloneSession():
// Clone entire session
await stateStore.cloneSession('source-session', 'target-session');
// Clone up to a specific checkpoint
await stateStore.cloneSession('source-session', 'target-session', {
fromCheckpointId: 'cp_xyz'
});
// Clone first N messages only
await stateStore.cloneSession('source-session', 'target-session', {
fromMessageIndex: 10
});

Sub-Agents
Sub-agents enable hierarchical agent systems. A parent agent can delegate tasks to specialized child agents.
// Define a specialized sub-agent
const AnalyzerAgent = defineAgent({
name: 'analyzer',
systemPrompt: 'You analyze text for sentiment and key topics.',
outputSchema: z.object({
sentiment: z.enum(['positive', 'negative', 'neutral']),
topics: z.array(z.string()),
}),
llmConfig: { model: openai('gpt-4o-mini') },
});
// Create a tool that invokes the sub-agent
import { createSubAgentTool } from '@helix-agents/core';
const analyzeTool = createSubAgentTool(AnalyzerAgent, z.object({ text: z.string() }), {
description: 'Analyze text for sentiment and topics',
});
// Parent agent uses the sub-agent tool
const ResearchAgent = defineAgent({
name: 'researcher',
tools: [searchTool, analyzeTool], // Include sub-agent tool
// ...
});

When the parent's LLM calls subagent__analyzer, the framework:
- Creates a new agent run for the child
- Executes the child agent to completion
- Returns the child's output as the tool result
- Child events stream to the same stream as the parent
Sub-agents have isolated state - the child's state doesn't affect the parent's.
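From the parent's stream you can observe the delegation through subagent_start and subagent_end events. The chunk fields used below (agentType, output) are assumptions for illustration; check the event types your version emits:

```typescript
for await (const chunk of await handle.stream()) {
  switch (chunk.type) {
    case 'subagent_start':
      // The child agent run is beginning
      console.log(`Delegating to sub-agent: ${chunk.agentType}`);
      break;
    case 'subagent_end':
      // The child finished; its structured output becomes the parent's tool result
      console.log('Sub-agent output:', chunk.output);
      break;
  }
}
```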
Hooks
Hooks are callback functions that let you observe and react to agent execution events. Use them for logging, metrics, auditing, and tracing.
const agent = defineAgent({
name: 'researcher',
systemPrompt: 'You are a research assistant.',
hooks: {
onAgentStart: (payload, ctx) => {
console.log(`[${ctx.sessionId}] Starting with input: ${payload.input}`);
},
onAgentComplete: (payload, ctx) => {
console.log(`[${ctx.sessionId}] Completed in ${payload.durationMs}ms`);
},
beforeTool: (payload, ctx) => {
console.log(`[${ctx.sessionId}] Calling: ${payload.tool.name}`);
},
afterTool: (payload, ctx) => {
console.log(`[${ctx.sessionId}] ${payload.tool.name}: ${payload.success ? 'OK' : 'FAILED'}`);
},
},
// ...
});

Available hooks include:
| Hook | When Invoked |
|---|---|
| onAgentStart | Before first LLM call |
| onAgentComplete | Agent finished successfully |
| onAgentFail | Agent failed with error |
| beforeLLMCall | Before each LLM call |
| afterLLMCall | After each LLM response |
| beforeTool / afterTool | Around tool execution |
| beforeSubAgent / afterSubAgent | Around sub-agent execution |
| onStateChange | When state is modified |
| onMessage | When a message is added |
Hooks receive a context object similar to tools, with access to state and streaming capabilities.
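That means a hook can, for example, read custom state and surface its own streaming event. A small sketch (the metrics event name and payload are made up for the example):

```typescript
hooks: {
  afterLLMCall: (payload, ctx) => {
    // Read the agent's custom state and emit a custom metrics event
    const state = ctx.getState<typeof stateSchema>();
    ctx.emit('metrics', {
      sessionId: ctx.sessionId,
      searchCount: state.searchCount,
    });
  },
},
```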
Interrupt and Resume
Interrupt/resume enables user-controlled pauses, crash recovery, and time-travel debugging.
Interrupt
Soft stop that saves state for later resumption:
const handle = await executor.execute(agent, 'Research AI');
// Interrupt the running agent
await handle.interrupt('user_requested');
// Status is now 'interrupted'

Resume
Continue execution from where it stopped:
// Resume from last checkpoint
const newHandle = await handle.resume();
const result = await newHandle.result();

Resume modes:
- continue - Resume from where it stopped (default)
- with_message - Resume with a new user message
- with_confirmation - Resume with data for a pending tool
- from_checkpoint - Time-travel to a specific checkpoint
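For example, resuming an interrupted agent with an additional user message might look like the sketch below; the message option name mirrors the retry() example further down and is an assumption here:

```typescript
// Resume the interrupted session and inject a new user message
const newHandle = await handle.resume({
  mode: 'with_message',
  message: 'Also cover developments from the last year',
});
const result = await newHandle.result();
```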
Retry
Recover from failed executions:
const result = await handle.result();
if (result.status === 'failed') {
// Retry from checkpoint (default) - often needs message
const retryHandle = await handle.retry({
message: 'Research quantum computing', // Re-provide the triggering message
});
// Or retry from the very beginning
const retryHandle = await handle.retry({
mode: 'from_start',
message: 'Research quantum computing',
});
}

Important: When using from_checkpoint mode (the default), the original user message that triggered the failure is already part of the checkpoint state and will not be re-sent, so you typically need to provide a message option to specify what to retry.
Agent Lifecycle Methods
| Method | Use When | Valid From Status |
|---|---|---|
| execute() | New conversation or continuation | Any except running |
| resume() | Continue after interrupt/pause | interrupted, paused |
| retry() | Recover from failure | failed |
Quick Reference
// New conversation
const h = await executor.execute(agent, message, { sessionId: 'new-id' });
// Continue conversation
const h = await executor.execute(agent, 'follow up', { sessionId: 'existing-id' });
// Resume after interrupt
const h = await executor.resume(agent, sessionId);
// Retry after failure
const h = await executor.retry(agent, sessionId);

Checkpoints
Checkpoints are complete state snapshots saved after each step. They enable:
- Crash recovery - Resume after process restarts
- Time-travel - Go back to any previous step
- Branching - Fork execution from a historical point
// List all checkpoints for a session
const checkpoints = await stateStore.listCheckpoints(sessionId);
// Resume from a specific checkpoint
const newHandle = await handle.resume({
mode: 'from_checkpoint',
checkpointId: checkpoints.items[0].id,
});

Putting It Together
Here's how these concepts work together in a typical execution:
1. Agent receives input message
↓
2. LLM generates response (streaming text_delta events)
↓
3. LLM requests tool calls
↓
4. Tools execute (streaming tool_start/tool_end events)
- Tools can read/modify state
- Tools can emit custom events
- Sub-agent tools spawn child executions
↓
5. Tool results added to conversation
↓
6. Loop back to step 2 until:
- LLM calls __finish__ tool (structured output)
- Max steps reached
- Error occurs
↓
7. Final output emitted
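In code, that loop corresponds to a short end-to-end flow: execute the agent, watch the stream, and await the final output. This sketch reuses the ResearchAgent and executor from the earlier sections:

```typescript
const handle = await executor.execute(ResearchAgent, 'Research AI agents');

// Steps 2-6: stream text, tool calls, and state changes as they happen
for await (const chunk of await handle.stream()) {
  if (chunk.type === 'text_delta') process.stdout.write(chunk.delta);
  if (chunk.type === 'tool_start') console.log(`\nCalling tool: ${chunk.toolName}`);
}

// Step 7: the final structured output (matches ResearchAgent's outputSchema)
const result = await handle.result();
console.log(result);
```

Next Steps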
- Getting Started - Build your first agent
- Defining Agents - Deep dive into agent configuration
- Defining Tools - Complete tool reference
- Hooks - Observability and callbacks
- Interrupt & Resume - Pause and continue agents
- Checkpoints - Time-travel and crash recovery