Building Your Own Agent Loop

This guide shows how to build a custom agent executor using Helix's core orchestration functions. This is for advanced use cases where the built-in runtimes don't fit your needs.

When to Build Custom

Build your own loop when you need:

  • Integration with a custom workflow engine
  • Specialized execution patterns (batching, priority queues)
  • Non-standard state persistence
  • Custom retry or recovery logic
  • A deeper understanding of how the framework works internally

For most use cases, use the built-in runtimes instead; see the comparison in "When to Use Built-In Runtimes" at the end of this guide.

Core Functions

Helix provides pure functions for each part of the agent loop:

Function                   Purpose
initializeAgentState       Create initial state from input
buildMessagesForLLM        Prepare messages with system prompt
buildEffectiveTools        Get tools, including __finish__
planStepProcessing         Analyze the LLM result, plan next actions
shouldStopExecution        Check if the agent should stop
createAssistantMessage     Format an assistant message for history
createToolResultMessage    Format a tool result for history

These functions have no side effects—they just transform data. Your custom loop handles all I/O.
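
Because they are pure, you can exercise them in tests or dry runs without touching a store or stream. A quick sketch, reusing the CalculatorAgent defined in the complete example later in this guide:

typescript
import { initializeAgentState, buildMessagesForLLM } from '@helix-agents/core';

// Pure functions: same inputs, same outputs, nothing persisted.
const draft = initializeAgentState({
  agent: CalculatorAgent,
  input: '2 + 2',
  runId: 'dry-run',
  streamId: 'dry-run',
});
const preview = buildMessagesForLLM(draft.messages, CalculatorAgent.systemPrompt, draft.customState);
// `draft` and `preview` are plain data; no store or stream manager was involved.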

Basic Structure

typescript
import {
  initializeAgentState,
  buildMessagesForLLM,
  buildEffectiveTools,
  planStepProcessing,
  shouldStopExecution,
  createAssistantMessage,
  createToolResultMessage,
} from '@helix-agents/core';
import type { AgentConfig, LLMAdapter, StateStore, StreamManager } from '@helix-agents/core';

async function runAgent<TState, TOutput>(
  agent: AgentConfig,
  input: string,
  llmAdapter: LLMAdapter,
  stateStore: StateStore,
  streamManager: StreamManager
): Promise<TOutput | undefined> {
  const runId = generateRunId(); // any unique ID scheme works, e.g. crypto.randomUUID()
  const streamId = runId;

  // 1. Initialize state
  let state = initializeAgentState({
    agent,
    input,
    runId,
    streamId,
  });

  // 2. Persist initial state
  await stateStore.save(state); // runId is inside state object
  const writer = await streamManager.createWriter(streamId, runId, agent.name);

  // 3. Get effective tools (includes __finish__ if outputSchema exists)
  const tools = buildEffectiveTools(agent);

  // 4. Main execution loop
  while (true) {
    // Build messages for LLM
    const messages = buildMessagesForLLM(state.messages, agent.systemPrompt, state.customState);

    // Call LLM
    const stepResult = await llmAdapter.generateStep({
      messages,
      tools,
      llmConfig: agent.llmConfig,
    });

    // Emit streaming events
    // (handle text_delta, tool_start, etc. from stepResult)

    // Plan what to do with the result
    const plan = planStepProcessing(stepResult, {
      outputSchema: agent.outputSchema,
    });

    // Add assistant message to history
    if (plan.assistantMessagePlan) {
      const assistantMessage = createAssistantMessage(plan.assistantMessagePlan);
      state.messages.push(assistantMessage);
    }

    // Execute pending tool calls
    for (const toolCall of plan.pendingToolCalls) {
      // executeToolCall is your own helper; the complete example below shows one approach
      const result = await executeToolCall(toolCall, tools, state);

      // Add tool result to history
      const resultMessage = createToolResultMessage({
        toolCallId: toolCall.id,
        toolName: toolCall.name,
        result: result.success ? result.value : undefined,
        success: result.success,
        error: result.error,
      });
      state.messages.push(resultMessage);
    }

    // Update step count
    state.stepCount++;

    // Apply status update if present
    if (plan.statusUpdate) {
      state.status = plan.statusUpdate.status;
      if (plan.statusUpdate.output) {
        state.output = plan.statusUpdate.output;
      }
      if (plan.statusUpdate.error) {
        state.error = plan.statusUpdate.error;
      }
    }

    // Persist state
    await stateStore.save(state);

    // Check if we should stop
    if (
      plan.isTerminal ||
      shouldStopExecution(stepResult, state.stepCount, {
        maxSteps: agent.maxSteps,
        stopWhen: agent.stopWhen,
      })
    ) {
      break;
    }
  }

  // 5. Finalize
  await writer.close();
  await streamManager.endStream(streamId, state.output);

  return state.output as TOutput;
}

Step-by-Step Breakdown

1. Initialize State

typescript
import { initializeAgentState } from '@helix-agents/core';

const state = initializeAgentState({
  agent, // Agent configuration
  input, // User message (string or { message, state })
  runId, // Unique run identifier
  streamId, // Stream identifier (usually same as runId)
  parentAgentId, // Optional: for sub-agents
});

This function:

  • Parses the input into a message and optional initial state (the structured form is shown below)
  • Applies state schema defaults via Zod
  • Creates the initial AgentState structure
  • Adds the user message to state.messages
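
For example, the object form seeds the custom state on top of the schema defaults (the history field follows the CalculatorAgent schema from the complete example below):

typescript
const state = initializeAgentState({
  agent: CalculatorAgent,
  input: {
    message: 'What is 2 + 2?',
    state: { history: ['1 + 1 = 2'] }, // merged with the Zod schema defaults
  },
  runId,
  streamId: runId,
});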

2. Build Messages for LLM

typescript
import { buildMessagesForLLM } from '@helix-agents/core';

const messages = buildMessagesForLLM(
  state.messages, // Conversation history
  agent.systemPrompt, // String or function
  state.customState // Passed to function-based prompts
);

This function:

  • Resolves dynamic system prompts, calling the function if needed (see the example below)
  • Prepends the system message to the conversation
  • Returns messages ready for the LLM adapter
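
Since the custom state is forwarded to function-based prompts, a dynamic prompt can reflect it. A minimal sketch, assuming the prompt function receives the custom state directly:

typescript
import { defineAgent } from '@helix-agents/core';
import { z } from 'zod';

const agent = defineAgent({
  name: 'calculator',
  stateSchema: z.object({ history: z.array(z.string()).default([]) }),
  // Re-resolved on every buildMessagesForLLM call, so the prompt stays current
  systemPrompt: (state) => `You are a calculator. Work so far:\n${state.history.join('\n')}`,
  // ...tools, llmConfig, etc.
});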

3. Build Effective Tools

typescript
import { buildEffectiveTools } from '@helix-agents/core';

const tools = buildEffectiveTools(agent);

This function:

  • Returns the agent's tools array
  • Adds the __finish__ tool if outputSchema is defined
  • The __finish__ tool lets the LLM signal completion with structured output (verified in the snippet below)
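
Assuming outputSchema is set on the agent, you can confirm the extra tool is present:

typescript
const tools = buildEffectiveTools(agent);
const names = tools.map((t) => t.name);
// With an outputSchema: e.g. ['calculate', '__finish__']
// Without one: just the agent's own tools
console.log(names.includes('__finish__'));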

4. Process Step Results

typescript
import { planStepProcessing } from '@helix-agents/core';

const plan = planStepProcessing(stepResult, {
  outputSchema: agent.outputSchema,
});

The plan tells you what to do next (a minimal dispatch sketch follows the list):

  • assistantMessagePlan: Data for creating the assistant message
  • pendingToolCalls: Tool calls to execute
  • pendingSubAgentCalls: Sub-agents to invoke
  • statusUpdate: Status change to apply (if terminal)
  • isTerminal: Whether execution should stop
  • output: The parsed output (if __finish__ was called)
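
Putting these fields together, a minimal dispatch over the plan might look like this (sub-agent handling is covered later in this guide):

typescript
if (plan.assistantMessagePlan) {
  state.messages.push(createAssistantMessage(plan.assistantMessagePlan));
}
for (const toolCall of plan.pendingToolCalls) {
  // execute the tool and push a tool result message (see step 5)
}
for (const subAgentCall of plan.pendingSubAgentCalls) {
  // invoke the sub-agent (see "Adding Sub-Agent Support")
}
if (plan.statusUpdate) {
  state.status = plan.statusUpdate.status; // may also carry output or error
}
if (plan.isTerminal) {
  // stop the loop
}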

5. Create Messages

typescript
import { createAssistantMessage, createToolResultMessage } from '@helix-agents/core';

// Assistant message (add after LLM response)
const assistantMessage = createAssistantMessage(plan.assistantMessagePlan);
state.messages.push(assistantMessage);

// Tool result (add after each tool execution)
const toolResult = createToolResultMessage({
  toolCallId: toolCall.id,
  toolName: toolCall.name,
  result: executionResult.value,
  success: true,
});
state.messages.push(toolResult);

6. Check Stop Conditions

typescript
import { shouldStopExecution } from '@helix-agents/core';

const shouldStop = shouldStopExecution(stepResult, state.stepCount, {
  maxSteps: agent.maxSteps,
  stopWhen: agent.stopWhen,
});

This checks:

  • Terminal step types (structured_output, error with shouldStop)
  • Maximum steps exceeded
  • A custom stopWhen condition (see the sketch below)
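
The exact stopWhen signature comes from your agent config rather than from this guide; as a purely hypothetical illustration, a predicate that halts once the LLM stops requesting tools might look like:

typescript
const agent = defineAgent({
  // ...
  maxSteps: 10,
  // Hypothetical signature: adjust to what your framework version defines.
  stopWhen: (stepResult) => stepResult.toolCalls?.length === 0,
});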

Complete Example

Here's a complete minimal executor:

typescript
import {
  defineAgent,
  defineTool,
  initializeAgentState,
  buildMessagesForLLM,
  buildEffectiveTools,
  planStepProcessing,
  shouldStopExecution,
  createAssistantMessage,
  createToolResultMessage,
} from '@helix-agents/core';
import { InMemoryStateStore, InMemoryStreamManager } from '@helix-agents/store-memory';
import { VercelAIAdapter } from '@helix-agents/llm-vercel';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

// Define a simple agent
const CalculatorAgent = defineAgent({
  name: 'calculator',
  systemPrompt: 'You are a calculator. Use the calculate tool to perform math.',

  stateSchema: z.object({
    history: z.array(z.string()).default([]),
  }),

  outputSchema: z.object({
    result: z.number(),
    explanation: z.string(),
  }),

  tools: [
    defineTool({
      name: 'calculate',
      description: 'Perform a calculation',
      inputSchema: z.object({
        expression: z.string().describe('Math expression to evaluate'),
      }),
      outputSchema: z.object({
        result: z.number(),
      }),
      execute: async (input, context) => {
        // Simple eval (don't use in production!)
        const result = eval(input.expression);

        // Update state
        context.updateState((draft) => {
          draft.history.push(`${input.expression} = ${result}`);
        });

        return { result };
      },
    }),
  ],

  llmConfig: {
    model: openai('gpt-4o-mini'),
  },

  maxSteps: 10,
});

// Custom executor
async function runCalculator(expression: string) {
  const stateStore = new InMemoryStateStore();
  const streamManager = new InMemoryStreamManager();
  const llmAdapter = new VercelAIAdapter();

  const runId = `calc-${Date.now()}`;
  const agent = CalculatorAgent;

  // Initialize
  let state = initializeAgentState({
    agent,
    input: expression,
    runId,
    streamId: runId,
  });

  await stateStore.save(state);
  const streamWriter = await streamManager.createWriter(runId, runId, agent.name);

  const tools = buildEffectiveTools(agent);

  // Find tool implementations
  const toolMap = new Map((agent.tools ?? []).map((t) => [t.name, t]));

  // Main loop
  while (state.status === 'running') {
    const messages = buildMessagesForLLM(state.messages, agent.systemPrompt, state.customState);

    console.log(`Step ${state.stepCount + 1}...`);

    // Call LLM
    const stepResult = await llmAdapter.generateStep({
      messages,
      tools,
      llmConfig: agent.llmConfig,
    });

    // Plan processing
    const plan = planStepProcessing(stepResult, {
      outputSchema: agent.outputSchema,
    });

    // Add assistant message
    if (plan.assistantMessagePlan) {
      state.messages.push(createAssistantMessage(plan.assistantMessagePlan));
    }

    // Execute tool calls
    for (const toolCall of plan.pendingToolCalls) {
      console.log(`  Tool: ${toolCall.name}`, toolCall.arguments);

      const tool = toolMap.get(toolCall.name);
      if (!tool) {
        state.messages.push(
          createToolResultMessage({
            toolCallId: toolCall.id,
            toolName: toolCall.name,
            success: false,
            error: `Unknown tool: ${toolCall.name}`,
          })
        );
        continue;
      }

      try {
        // Create tool context
        const context = {
          getState: () => state.customState,
          updateState: (fn: (draft: unknown) => void) => {
            // Simple mutation (use Immer in production)
            fn(state.customState);
          },
          emit: async () => {},
          agentId: runId,
          agentType: agent.name,
        };

        const result = await tool.execute(toolCall.arguments, context);
        console.log(`  Result:`, result);

        state.messages.push(
          createToolResultMessage({
            toolCallId: toolCall.id,
            toolName: toolCall.name,
            result,
            success: true,
          })
        );
      } catch (error) {
        state.messages.push(
          createToolResultMessage({
            toolCallId: toolCall.id,
            toolName: toolCall.name,
            success: false,
            error: error instanceof Error ? error.message : 'Unknown error',
          })
        );
      }
    }

    // Update state
    state.stepCount++;

    if (plan.statusUpdate) {
      state.status = plan.statusUpdate.status;
      state.output = plan.statusUpdate.output;
      state.error = plan.statusUpdate.error;
    }

    await stateStore.save(state);

    // Check stop
    if (
      plan.isTerminal ||
      shouldStopExecution(stepResult, state.stepCount, {
        maxSteps: agent.maxSteps,
      })
    ) {
      break;
    }
  }

  await streamWriter.close();
  await streamManager.endStream(runId, state.output);

  console.log('\nFinal output:', state.output);
  console.log('Calculation history:', state.customState.history);

  return state.output;
}

// Run it
runCalculator('What is 15 * 7 + 23?').catch(console.error);

Adding Streaming

To emit stream events, use the stream manager:

typescript
const writer = await streamManager.createWriter(streamId, runId, agent.name);

// During LLM call, capture streaming events
const stepResult = await llmAdapter.generateStep({
  messages,
  tools,
  llmConfig: agent.llmConfig,
  agentId: runId,
  agentType: agent.name,
  callbacks: {
    onTextDelta: (delta) => {
      writer.write({
        type: 'text_delta',
        delta,
        agentId: runId,
        timestamp: Date.now(),
      });
    },
    onToolStart: async (toolCallId, toolName, args) => {
      await writer.write({
        type: 'tool_start',
        toolCallId,
        toolName,
        arguments: args,
        agentId: runId,
        timestamp: Date.now(),
      });
    },
    onToolEnd: async (toolCallId, result) => {
      await writer.write({
        type: 'tool_end',
        toolCallId,
        result,
        agentId: runId,
        timestamp: Date.now(),
      });
    },
  },
});
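
How consumers read these events depends on your stream manager implementation; readStream below is a hypothetical method name, not a documented API:

typescript
// Hypothetical reader API: substitute your StreamManager's actual method.
for await (const event of streamManager.readStream(streamId)) {
  if (event.type === 'text_delta') {
    process.stdout.write(event.delta);
  }
}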

Adding Sub-Agent Support

For sub-agent execution:

typescript
import { isSubAgentTool, parseSubAgentInput, createSubAgentResultMessage } from '@helix-agents/core';

// In your tool execution loop
for (const toolCall of plan.pendingToolCalls) {
  if (isSubAgentTool(toolCall.name)) {
    // This is a sub-agent invocation
    const { agentType, input } = parseSubAgentInput(toolCall);

    // Find the sub-agent definition (registry is your own lookup; see the sketch below)
    const subAgent = registry.get(agentType);

    // Run the sub-agent (recursively or via workflow)
    const subResult = await runAgent(subAgent, input, llmAdapter, stateStore, streamManager);

    // Add result to parent's messages
    state.messages.push(
      createSubAgentResultMessage({
        toolCallId: toolCall.id,
        agentType,
        result: subResult,
        success: true,
      })
    );
  } else {
    // Regular tool execution...
  }
}
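
The registry above is whatever lookup you maintain. A Map keyed by agent name is enough; ResearchAgent and SummaryAgent here are hypothetical definitions created with defineAgent:

typescript
import type { AgentConfig } from '@helix-agents/core';

// Hypothetical agents; any AgentConfig returned by defineAgent works.
const registry = new Map<string, AgentConfig>([
  [ResearchAgent.name, ResearchAgent],
  [SummaryAgent.name, SummaryAgent],
]);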

State Tracking with Immer

For proper state tracking with patches:

typescript
import { ImmerStateTracker } from '@helix-agents/core';

// Create tracker
const tracker = new ImmerStateTracker(state.customState);

// In tool context
const context = {
  getState: () => tracker.getState(),
  updateState: (fn) => tracker.update(fn),
  // ...
};

// After tool execution
const patches = tracker.getPatches();
if (patches.length > 0) {
  // Emit state patch events
  await writer.write({
    type: 'state_patch',
    patches,
    agentId: runId,
    timestamp: Date.now(),
  });
}

// Save final state
state.customState = tracker.getState();

When to Use Built-In Runtimes

The built-in runtimes handle many edge cases:

Concern           JS Runtime   Temporal   Cloudflare
Crash recovery    No           Yes        Yes
Parallel tools    Yes          Yes        Yes
Sub-agents        Yes          Yes        Yes
State patches     Yes          Yes        Yes
Streaming         Yes          Yes        Yes
Abort handling    Yes          Yes        Yes

Building custom is only necessary when these runtimes don't fit your infrastructure.
