# Defining Agents
An agent is a configuration object that defines how an AI assistant behaves. This guide covers all configuration options available when defining agents.
## Basic Agent Definition
Use `defineAgent()` to create an agent:
```typescript
import { defineAgent } from '@helix-agents/sdk';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

const MyAgent = defineAgent({
  name: 'my-agent',
  systemPrompt: 'You are a helpful assistant.',
  llmConfig: {
    model: openai('gpt-4o'),
  },
});
```

> **Agent definition is just data.** `defineAgent()` returns a configuration object; it doesn't execute anything. You pass this configuration to a runtime (like `JSAgentExecutor`) to actually run the agent. This separation enables the same agent definition to work across different runtimes.
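The definition/runtime split can be illustrated without the SDK at all. The sketch below is purely illustrative: the `Definition` type and `run` function are invented for this example and are not part of `@helix-agents/sdk`. The point is that a definition is an inert object, and a separate runtime interprets it.

```typescript
// Hypothetical stand-ins for illustration only; not SDK types.
type Definition = { name: string; systemPrompt: string };

// Creating the "definition" runs nothing; it is plain data.
const myAgent: Definition = { name: 'my-agent', systemPrompt: 'You are a helpful assistant.' };

// A separate "runtime" decides how to interpret that data.
// A real executor would call an LLM here; this one just formats a string.
function run(def: Definition, input: string): string {
  return `[${def.name}] ${def.systemPrompt} <- ${input}`;
}

const result = run(myAgent, 'hello');
console.log(result);
```

Because the definition carries no execution logic of its own, two different runtimes can consume the same object and behave differently.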
## Configuration Reference
### `name` (required)
A unique identifier for this agent type. Used for logging, state storage, and sub-agent identification.
```typescript
const agent = defineAgent({
  name: 'research-assistant', // Must be unique across your application
  // ...
});
```

### `description`
Optional description of what the agent does. Used in sub-agent tools when no custom description is provided.
```typescript
const agent = defineAgent({
  name: 'analyzer',
  description: 'Analyzes text for sentiment and key topics',
  // ...
});
```

### `systemPrompt` (required)
Instructions for the LLM. Can be a static string or a function that receives the current custom state.
**Static string:**
```typescript
const agent = defineAgent({
  name: 'helper',
  systemPrompt: `You are a helpful assistant.
Be concise but thorough in your responses.
Always cite sources when making factual claims.`,
  // ...
});
```

**Dynamic function:**
The function receives the agent's custom state, enabling dynamic prompts:
```typescript
const agent = defineAgent({
  name: 'contextual-helper',
  stateSchema: z.object({
    userName: z.string().default('User'),
    expertise: z.enum(['beginner', 'intermediate', 'expert']).default('intermediate'),
  }),
  systemPrompt: (state) => `You are helping ${state.userName}.
Their expertise level is ${state.expertise}.
${state.expertise === 'beginner' ? 'Explain concepts simply.' : ''}
${state.expertise === 'expert' ? 'Use technical terminology freely.' : ''}`,
  // ...
});
```

> **State during initialization.** During the first LLM call, `customState` contains the default values from your schema. If your dynamic prompt relies on state set by tools, handle the initial case gracefully.
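One way to handle the initial case, sketched here without the SDK (the `State` type and prompt function below are illustrative, not SDK code):

```typescript
// Illustrative only: a dynamic prompt that tolerates the default/initial state.
type State = { topic?: string; searchCount: number };

const systemPrompt = (state: State): string =>
  state.topic
    ? `You are researching "${state.topic}" (${state.searchCount} searches so far).`
    : 'You are a research assistant. Ask the user what topic to research.';

// First LLM call: state holds only schema defaults, so topic is unset.
const initialPrompt = systemPrompt({ searchCount: 0 });
// Later call: a tool has populated the topic.
const laterPrompt = systemPrompt({ topic: 'solar storms', searchCount: 3 });
console.log(initialPrompt);
console.log(laterPrompt);
```

Branching on the unset field keeps the first prompt coherent instead of interpolating `undefined` into the instructions.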
### `tools`
Array of tools the agent can use. Includes both regular tools and sub-agent tools.
```typescript
import { defineTool, createSubAgentTool } from '@helix-agents/sdk';

const searchTool = defineTool({
  name: 'search',
  description: 'Search the web',
  inputSchema: z.object({ query: z.string() }),
  outputSchema: z.object({ results: z.array(z.string()) }),
  execute: async (input) => ({ results: ['Result 1', 'Result 2'] }),
});

// AnalyzerAgent is an agent defined elsewhere with defineAgent()
const analyzerSubAgent = createSubAgentTool(AnalyzerAgent, z.object({ text: z.string() }), {
  description: 'Analyze text for sentiment',
});

const agent = defineAgent({
  name: 'orchestrator',
  tools: [searchTool, analyzerSubAgent],
  // ...
});
```

### `stateSchema`
Zod schema for custom state data that persists across agent steps. Tools can read and modify this state.
```typescript
const agent = defineAgent({
  name: 'research-assistant',
  stateSchema: z.object({
    // Primitives with defaults
    searchCount: z.number().default(0),
    currentPhase: z.enum(['searching', 'analyzing', 'summarizing']).default('searching'),
    // Arrays default to empty
    findings: z.array(z.string()).default([]),
    // Nested objects
    metadata: z
      .object({
        startedAt: z.number().optional(),
        topic: z.string().optional(),
      })
      .default({}),
  }),
  // ...
});
```

Guidelines:
- Always provide `.default()` for fields; state is initialized from the schema defaults
- State must be JSON-serializable (no functions; Dates become strings, etc.)
- Keep state minimal: only store what tools need to coordinate
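The JSON-serializability rule is easiest to trip over with `Date` values. A quick check of what survives a round trip (as a state store might perform) shows why the `metadata.startedAt` field above is a `z.number()` rather than a `Date`:

```typescript
// What JSON round-tripping does to common state values.
const state = {
  startedAt: new Date('2024-01-01T00:00:00Z'), // becomes an ISO string
  searchCount: 3,                              // survives intact
  findings: ['a', 'b'],                        // survives intact
};

const restored = JSON.parse(JSON.stringify(state));
console.log(typeof restored.startedAt); // 'string', not Date
console.log(restored.searchCount, restored.findings.length);
```

Storing timestamps as numbers (`Date.now()`) sidesteps the silent type change.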
### `outputSchema`
Zod schema for structured output. When provided, a `__finish__` tool is automatically injected. The agent completes when the LLM calls this tool with valid data.
```typescript
const agent = defineAgent({
  name: 'analyzer',
  outputSchema: z.object({
    sentiment: z.enum(['positive', 'negative', 'neutral']),
    confidence: z.number().min(0).max(1),
    topics: z.array(z.string()),
    summary: z.string(),
  }),
  // ...
});
```

Without `outputSchema`, the agent runs until `maxSteps` or a `stopWhen` condition.
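Conceptually, the completion check works like the sketch below. This is not the SDK's actual implementation, only an illustration of the contract: the run ends when a `__finish__` call carries data that validates against the schema.

```typescript
// Illustrative sketch only; not the SDK's internals.
type ToolCall = { name: string; input: unknown };

// Returns the validated output if the agent called __finish__ with valid data.
function checkFinished<T>(calls: ToolCall[], validate: (x: unknown) => x is T): T | undefined {
  const finish = calls.find((c) => c.name === '__finish__');
  if (finish && validate(finish.input)) {
    return finish.input;
  }
  return undefined;
}

// A hand-written guard standing in for schema validation.
const isOutput = (x: unknown): x is { summary: string } =>
  typeof x === 'object' && x !== null && typeof (x as { summary?: unknown }).summary === 'string';

const done = checkFinished([{ name: '__finish__', input: { summary: 'All clear' } }], isOutput);
const notDone = checkFinished([{ name: 'search', input: { query: 'x' } }], isOutput);
console.log(done?.summary, notDone);
```

If the `__finish__` input fails validation, the run simply continues, so the model gets another chance to produce conforming output.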
### `llmConfig` (required)
Configuration for the language model:
```typescript
const agent = defineAgent({
  name: 'my-agent',
  llmConfig: {
    // Required: Vercel AI SDK model instance
    model: openai('gpt-4o'),

    // Optional parameters
    temperature: 0.7, // 0-2, higher = more creative
    maxOutputTokens: 4096, // Max tokens in the response
    topP: 0.9, // Nucleus sampling
    topK: 40, // Top-k sampling
    presencePenalty: 0.1, // Penalize tokens already present
    frequencyPenalty: 0.1, // Penalize frequently repeated tokens
    stopSequences: ['END'], // Stop generation at these sequences
    seed: 12345, // For deterministic outputs
    maxRetries: 3, // API retry attempts

    // Provider-specific options
    providerOptions: {
      // OpenAI reasoning models (o1, o3, o4-mini)
      openai: {
        reasoningSummary: 'detailed',
        reasoningEffort: 'high',
      },
      // Anthropic extended thinking (Claude)
      anthropic: {
        thinking: { type: 'enabled', budgetTokens: 10000 },
      },
    },
  },
  // ...
});
```

### `llmConfigOverride`
Function to override LLM config per-step based on state and step count. Useful for adaptive behavior:
```typescript
const agent = defineAgent({
  name: 'adaptive-agent',
  stateSchema: z.object({
    complexity: z.enum(['simple', 'complex']).default('simple'),
  }),
  llmConfig: {
    model: openai('gpt-4o-mini'), // Default to a smaller model
    temperature: 0.7,
  },
  llmConfigOverride: (customState, stepCount) => {
    // Switch to a larger model for complex tasks
    if (customState.complexity === 'complex') {
      return {
        model: openai('gpt-4o'),
        temperature: 0.5,
      };
    }
    // Reduce temperature as the run progresses (more focused)
    if (stepCount > 5) {
      return { temperature: 0.3 };
    }
    return {}; // Use defaults
  },
  // ...
});
```

The override is merged with `llmConfig`; override values take precedence.
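The merge behaves like an object spread, with override keys winning and everything else falling through. A minimal sketch of that semantics, assuming a shallow merge and using plain strings in place of model instances (not the SDK's code):

```typescript
// Shallow merge: keys in the override win, unset keys fall through to the base config.
const base = { model: 'gpt-4o-mini', temperature: 0.7, maxOutputTokens: 4096 };
const override = { temperature: 0.3 };

const effective = { ...base, ...override };
console.log(effective);
```

Returning `{}` from the override function therefore leaves the base `llmConfig` fully in effect.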
### `maxSteps`
Maximum number of LLM calls before the agent stops. Default is 50.
```typescript
const agent = defineAgent({
  name: 'quick-responder',
  maxSteps: 5, // Stop after 5 LLM calls
  // ...
});
```

This is a safety limit. Agents with an `outputSchema` typically finish sooner by calling `__finish__`.
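The cap acts like the guard in this step-loop sketch (illustrative only, not the SDK's actual run loop):

```typescript
// Illustrative: a run loop that stops at maxSteps even if the agent never finishes.
function runLoop(maxSteps: number, finished: (step: number) => boolean): number {
  let steps = 0;
  while (steps < maxSteps) {
    steps++; // one LLM call per iteration
    if (finished(steps)) break;
  }
  return steps;
}

const capped = runLoop(5, () => false);   // agent never finishes: the cap applies
const early = runLoop(5, (s) => s === 2); // agent finishes at step 2
console.log(capped, early);
```

A looping agent thus costs at most `maxSteps` LLM calls, while a well-behaved one exits as soon as its finish condition fires.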
### `stopWhen`
Custom predicate for stopping execution. Called after each step with the step result:
```typescript
const agent = defineAgent({
  name: 'conditional-stopper',
  stopWhen: (result) => {
    // Stop if a specific tool was called
    if (result.toolCalls?.some((tc) => tc.name === 'final_answer')) {
      return true;
    }
    // Stop if the LLM signals completion in its text
    if (result.text?.includes('[DONE]')) {
      return true;
    }
    return false;
  },
  // ...
});
```

## Complete Example
Here's a fully configured agent:
```typescript
import { defineAgent, defineTool } from '@helix-agents/sdk';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

// State schema
const StateSchema = z.object({
  searchCount: z.number().default(0),
  findings: z
    .array(
      z.object({
        query: z.string(),
        results: z.array(z.string()),
      })
    )
    .default([]),
  currentPhase: z.enum(['searching', 'analyzing', 'complete']).default('searching'),
});

// Output schema
const OutputSchema = z.object({
  summary: z.string(),
  keyFindings: z.array(z.string()),
  sourcesUsed: z.number(),
});

// Tool
const searchTool = defineTool({
  name: 'search',
  description: 'Search for information',
  inputSchema: z.object({ query: z.string() }),
  outputSchema: z.object({ results: z.array(z.string()) }),
  execute: async (input, context) => {
    // Track usage in state
    context.updateState<z.infer<typeof StateSchema>>((draft) => {
      draft.searchCount++;
      draft.findings.push({ query: input.query, results: [] });
    });
    // Perform search...
    return { results: ['Result 1', 'Result 2'] };
  },
});

// Agent definition
const ResearchAgent = defineAgent({
  name: 'research-assistant',
  description: 'Researches topics and provides summaries',
  systemPrompt: (state) => `You are a research assistant.
Current phase: ${state.currentPhase}
Searches performed: ${state.searchCount}

Instructions:
1. Use the search tool to find information
2. Analyze findings thoroughly
3. Call __finish__ with your summary when done`,
  tools: [searchTool],
  stateSchema: StateSchema,
  outputSchema: OutputSchema,
  llmConfig: {
    model: openai('gpt-4o'),
    temperature: 0.7,
    maxOutputTokens: 4096,
  },
  llmConfigOverride: (state, stepCount) => {
    // More focused sampling once analysis begins
    if (state.currentPhase === 'analyzing') {
      return { temperature: 0.3 };
    }
    return {};
  },
  maxSteps: 20,
  stopWhen: (result) => {
    // Stop early if the model gives up without calling any tools
    return (result.toolCalls?.length ?? 0) === 0 && (result.text?.includes('cannot find') ?? false);
  },
});
```

## Type Safety
Agent definitions are fully typed. TypeScript infers types from your schemas:
```typescript
// State type is inferred from stateSchema
type State = z.infer<typeof StateSchema>;
// { searchCount: number; findings: {...}[]; currentPhase: 'searching' | 'analyzing' | 'complete' }

// Output type is inferred from outputSchema
type Output = z.infer<typeof OutputSchema>;
// { summary: string; keyFindings: string[]; sourcesUsed: number }

// The Agent type carries both schema types
const agent: Agent<typeof StateSchema, typeof OutputSchema> = ResearchAgent;
```

## Next Steps
- Defining Tools - Create tools for your agents
- State Management - Deep dive into state patterns
- Sub-Agents - Multi-agent orchestration