What is Helix Agents?

Helix Agents is a TypeScript framework for building AI agents. It provides composable building blocks that let you choose how your agents execute, where state persists, and which LLM providers to use.

The Problem

Building AI agents typically requires gluing together many pieces:

  • LLM calls - Making requests to language models
  • Tool execution - Running functions the LLM requests
  • State management - Tracking conversation history and custom data
  • Streaming - Delivering real-time updates to users
  • Error handling - Recovering from failures gracefully
  • Multi-agent orchestration - Coordinating multiple agents

Most frameworks couple these concerns together. Change your LLM provider? Rewrite your agents. Need durable execution? Migrate to a different framework. Want Redis instead of in-memory state? Significant refactoring.

The Solution: Composable Building Blocks

Helix Agents separates these concerns into swappable interfaces:

┌─────────────────────────────────────────────────────────┐
│                    Your Agent Code                       │
│  defineAgent({ tools, systemPrompt, outputSchema })     │
└─────────────────────────────────────────────────────────┘


┌─────────────────────────────────────────────────────────┐
│                      Runtime                             │
│  JS Runtime  │  Temporal Runtime  │  Cloudflare Runtime │
└─────────────────────────────────────────────────────────┘
         │              │                    │
         ▼              ▼                    ▼
┌──────────────┐ ┌──────────────┐ ┌──────────────────────┐
│ State Store  │ │Stream Manager│ │     LLM Adapter      │
│ Memory/Redis │ │ Memory/Redis │ │ Vercel AI SDK/Custom │
└──────────────┘ └──────────────┘ └──────────────────────┘

Your agent definition stays the same. Swap the runtime for production. Change the state store for scalability. Use different LLM providers per environment.
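As a sketch of this separation, the following uses simplified stand-in names (not the actual Helix Agents API): the agent-facing code depends only on an interface, so a development runtime and a production runtime are interchangeable.

```typescript
// Illustrative sketch only: `Runtime`, the class names, and `handleRequest`
// are simplified stand-ins, not the actual Helix Agents API.
interface Runtime {
  run(agentName: string, input: string): Promise<string>;
}

// A development runtime that executes in-process.
class InProcessRuntime implements Runtime {
  async run(agentName: string, input: string): Promise<string> {
    return `[in-process] ${agentName} handled: ${input}`;
  }
}

// A production runtime could delegate to a durable workflow engine instead.
class DurableRuntime implements Runtime {
  async run(agentName: string, input: string): Promise<string> {
    return `[durable] ${agentName} handled: ${input}`;
  }
}

// Application code depends only on the interface, so swapping the
// runtime never touches the agent definition.
async function handleRequest(runtime: Runtime, input: string): Promise<string> {
  return runtime.run('support-agent', input);
}
```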

Philosophy: Bring Your Own Loop

The framework is built on two core principles:

1. Interface-First Design

Every major component is defined by an interface:

  • AgentExecutor - How agent loops run
  • StateStore - Where state persists
  • StreamManager - How events flow
  • LLMAdapter - Which model to use

Use our implementations, or create your own. The interfaces are simple enough to implement in an afternoon.
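To give a feel for the effort involved, here is a hedged sketch of a custom state store. The StateStore shape below is an assumption for illustration, not the framework's actual interface, and the Map-backed class is roughly what a development store looks like.

```typescript
// Hypothetical sketch: this StateStore shape is an assumption for
// illustration; the real interface lives in @helix-agents/core.
interface StateStore {
  get(runId: string): Promise<unknown | undefined>;
  set(runId: string, state: unknown): Promise<void>;
  delete(runId: string): Promise<void>;
}

// An in-memory implementation backed by a Map, in the spirit of a
// development-only store.
class MapStateStore implements StateStore {
  private states = new Map<string, unknown>();

  async get(runId: string): Promise<unknown | undefined> {
    return this.states.get(runId);
  }

  async set(runId: string, state: unknown): Promise<void> {
    this.states.set(runId, state);
  }

  async delete(runId: string): Promise<void> {
    this.states.delete(runId);
  }
}
```

A Redis-backed version would implement the same three methods against a client, which is what makes the store swappable.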

2. Pure Functions for Orchestration

The core package (@helix-agents/core) provides pure functions that runtimes compose:

```typescript
import {
  initializeAgentState,
  buildMessagesForLLM,
  planStepProcessing,
  shouldStopExecution,
} from '@helix-agents/core';

// These are pure data transformations - no I/O
const state = initializeAgentState({ agent, input, runId, streamId });
const messages = buildMessagesForLLM(state.messages, agent.systemPrompt, state.customState);
const plan = planStepProcessing(stepResult, { outputSchema: agent.outputSchema });
const shouldStop = shouldStopExecution(stepResult, state.stepCount, options);
```

This means you can build a completely custom execution loop using these primitives. The JS runtime, Temporal runtime, and Cloudflare runtime all compose these same functions differently.
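A custom loop composed from such primitives might look like the sketch below. The helper implementations and types here are simplified stand-ins (the real ones come from @helix-agents/core), and the LLM call is a stub that finishes after the second step.

```typescript
// Simplified stand-ins for the core helpers; the real implementations
// come from '@helix-agents/core'. The LLM call is a stub.
type StepResult = { finished: boolean; message: string };
type AgentState = { messages: string[]; stepCount: number };

function initializeAgentState(input: string): AgentState {
  return { messages: [input], stepCount: 0 };
}

function shouldStopExecution(step: StepResult, stepCount: number, maxSteps: number): boolean {
  return step.finished || stepCount >= maxSteps;
}

// Stubbed model call: reports "finished" once two messages exist.
async function callLLM(messages: string[]): Promise<StepResult> {
  return { finished: messages.length >= 2, message: `step-${messages.length}` };
}

// A completely custom execution loop composed from the primitives above.
async function runLoop(input: string, maxSteps = 10): Promise<AgentState> {
  const state = initializeAgentState(input);
  for (;;) {
    const step = await callLLM(state.messages);
    state.messages.push(step.message);
    state.stepCount += 1;
    if (shouldStopExecution(step, state.stepCount, maxSteps)) break;
  }
  return state;
}
```

The pre-built runtimes differ mainly in where this loop runs (in-process, inside a Temporal workflow, or on a Worker), not in its shape.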

Use the pre-built runtimes, or build your own - the choice is yours.

Packages Overview

| Package | Purpose |
| --- | --- |
| @helix-agents/core | Types, interfaces, pure orchestration functions |
| @helix-agents/sdk | Quick-start bundle (core + memory stores + JS runtime) |
| @helix-agents/runtime-js | In-process JavaScript execution |
| @helix-agents/runtime-temporal | Durable workflows via Temporal |
| @helix-agents/runtime-cloudflare | Edge deployment on Cloudflare Workers |
| @helix-agents/store-memory | In-memory state and streams (development) |
| @helix-agents/store-redis | Redis state and streams (production) |
| @helix-agents/store-cloudflare | D1 + Durable Objects (Cloudflare) |
| @helix-agents/llm-vercel | Vercel AI SDK adapter |
| @helix-agents/ai-sdk | Frontend integration for AI SDK UI |

When to Use Helix Agents

Good fit:

  • You need to swap runtimes between development and production
  • You want durable execution (Temporal) or edge deployment (Cloudflare)
  • You're building multi-agent systems with parent-child orchestration
  • You need real-time streaming with custom events
  • You want type-safe agents with Zod schemas

Maybe not the best fit:

  • Simple one-off scripts (just use the Vercel AI SDK directly)
  • You don't need runtime/storage swapping
  • You need a specific framework's ecosystem (LangChain's integrations)

Next Steps

Released under the MIT License.