What is Helix Agents?
Helix Agents is a TypeScript framework for building AI agents. It provides composable building blocks that let you choose how your agents execute, where state persists, and which LLM providers to use.
The Problem
Building AI agents typically requires gluing together many pieces:
- LLM calls - Making requests to language models
- Tool execution - Running functions the LLM requests
- State management - Tracking conversation history and custom data
- Streaming - Delivering real-time updates to users
- Error handling - Recovering from failures gracefully
- Multi-agent orchestration - Coordinating multiple agents
Most frameworks couple these concerns together. Change your LLM provider? Rewrite your agents. Need durable execution? Migrate to a different framework. Want Redis instead of in-memory state? Significant refactoring.
The Solution: Composable Building Blocks
Helix Agents separates these concerns into swappable interfaces:
```
┌─────────────────────────────────────────────────────────┐
│                     Your Agent Code                      │
│    defineAgent({ tools, systemPrompt, outputSchema })    │
└─────────────────────────────────────────────────────────┘
                             │
                             ▼
┌─────────────────────────────────────────────────────────┐
│                         Runtime                          │
│  JS Runtime  │  Temporal Runtime  │  Cloudflare Runtime  │
└─────────────────────────────────────────────────────────┘
       │                 │                     │
       ▼                 ▼                     ▼
┌──────────────┐  ┌──────────────┐  ┌──────────────────────┐
│  State Store │  │Stream Manager│  │      LLM Adapter     │
│ Memory/Redis │  │ Memory/Redis │  │ Vercel AI SDK/Custom │
└──────────────┘  └──────────────┘  └──────────────────────┘
```

Your agent definition stays the same. Swap the runtime for production. Change the state store for scalability. Use different LLM providers per environment.
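As a sketch, that swap might look like the following. The defineAgent call mirrors the diagram above; the runtime and store constructors are illustrative assumptions, so check each package's docs for the exact exports.

```typescript
import { z } from 'zod';
// Assumption: defineAgent is exported from the core package (or the sdk bundle).
import { defineAgent } from '@helix-agents/core';
// Hypothetical constructor names - the real packages may expose different factories.
import { JSRuntime } from '@helix-agents/runtime-js';
import { TemporalRuntime } from '@helix-agents/runtime-temporal';
import { MemoryStateStore } from '@helix-agents/store-memory';
import { RedisStateStore } from '@helix-agents/store-redis';

// The agent definition is runtime-agnostic and never changes.
const supportAgent = defineAgent({
  systemPrompt: 'You help customers troubleshoot their orders.',
  tools: [],
  outputSchema: z.object({ answer: z.string() }),
});

// Development: in-process execution, in-memory state.
const devRuntime = new JSRuntime({ stateStore: new MemoryStateStore() });

// Production: durable workflows, Redis-backed state.
const prodRuntime = new TemporalRuntime({
  stateStore: new RedisStateStore({ url: process.env.REDIS_URL }),
});

// Same agent, either runtime.
await devRuntime.run(supportAgent, { input: 'Where is my order?' });
```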
Philosophy: Bring Your Own Loop
The framework is built on two core principles:
1. Interface-First Design
Every major component is defined by an interface:
- AgentExecutor - How agent loops run
- StateStore - Where state persists
- StreamManager - How events flow
- LLMAdapter - Which model to use
Use our implementations, or create your own. The interfaces are simple enough to implement in an afternoon.
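For example, a minimal in-memory StateStore could look something like this. The interface shape (load/save/delete keyed by run ID) is an assumption for illustration; the real signatures live in @helix-agents/core.

```typescript
// Assumption: the StateStore interface is exported from the core package and
// roughly matches the load/save/delete shape sketched here.
import type { StateStore } from '@helix-agents/core';

class MapStateStore implements StateStore {
  private states = new Map<string, unknown>();

  // Return the persisted state for a run, or undefined if none exists yet.
  async load(runId: string): Promise<unknown | undefined> {
    return this.states.get(runId);
  }

  // Persist the full state snapshot for a run.
  async save(runId: string, state: unknown): Promise<void> {
    this.states.set(runId, state);
  }

  // Drop state once a run is finished.
  async delete(runId: string): Promise<void> {
    this.states.delete(runId);
  }
}
```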
2. Pure Functions for Orchestration
The core package (@helix-agents/core) provides pure functions that runtimes compose:
```typescript
import {
  initializeAgentState,
  buildMessagesForLLM,
  planStepProcessing,
  shouldStopExecution,
} from '@helix-agents/core';

// These are pure data transformations - no I/O
const state = initializeAgentState({ agent, input, runId, streamId });
const messages = buildMessagesForLLM(state.messages, agent.systemPrompt, state.customState);
const plan = planStepProcessing(stepResult, { outputSchema: agent.outputSchema });
const shouldStop = shouldStopExecution(stepResult, state.stepCount, options);
```

This means you can build a completely custom execution loop using these primitives. The JS runtime, Temporal runtime, and Cloudflare runtime all compose these same functions differently.
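As a sketch, a stripped-down custom loop composed from those primitives might look like this. The LLM call and persistence helpers are hypothetical stand-ins, and the exact return shapes of the pure functions (e.g. plan.stateUpdates) are assumptions.

```typescript
import {
  initializeAgentState,
  buildMessagesForLLM,
  planStepProcessing,
  shouldStopExecution,
} from '@helix-agents/core';

// Hypothetical stand-ins for your LLM adapter and state store - the only I/O lives here.
declare function callLLM(messages: unknown): Promise<any>;
declare function persist(state: unknown): Promise<void>;

async function runCustomLoop(agent: any, input: string, runId: string, streamId: string) {
  // Pure: build the initial state from the agent definition and input.
  let state: any = initializeAgentState({ agent, input, runId, streamId });

  while (true) {
    // Pure: turn the current state into an LLM request payload.
    const messages = buildMessagesForLLM(state.messages, agent.systemPrompt, state.customState);

    // Impure: call the model through whatever adapter you choose.
    const stepResult = await callLLM(messages);

    // Pure: decide how to apply the step (tool calls, output parsing, state updates).
    const plan = planStepProcessing(stepResult, { outputSchema: agent.outputSchema });
    state = { ...state, ...plan.stateUpdates, stepCount: state.stepCount + 1 };

    // Impure: persist wherever you like.
    await persist(state);

    // Pure: stop when the result is final or limits are reached.
    if (shouldStopExecution(stepResult, state.stepCount, { maxSteps: 10 })) break;
  }

  return state;
}
```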
Use pre-built or build your own - the choice is yours.
Packages Overview
| Package | Purpose |
|---|---|
| @helix-agents/core | Types, interfaces, pure orchestration functions |
| @helix-agents/sdk | Quick-start bundle (core + memory stores + JS runtime) |
| @helix-agents/runtime-js | In-process JavaScript execution |
| @helix-agents/runtime-temporal | Durable workflows via Temporal |
| @helix-agents/runtime-cloudflare | Edge deployment on Cloudflare Workers |
| @helix-agents/store-memory | In-memory state and streams (development) |
| @helix-agents/store-redis | Redis state and streams (production) |
| @helix-agents/store-cloudflare | D1 + Durable Objects (Cloudflare) |
| @helix-agents/llm-vercel | Vercel AI SDK adapter |
| @helix-agents/ai-sdk | Frontend integration for AI SDK UI |
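In practice, most projects start from the quick-start bundle and only add specific runtime and store packages when they need them. A minimal sketch, assuming the sdk re-exports the agent-definition and runtime helpers:

```typescript
// Assumed re-exports - verify against @helix-agents/sdk before copying.
import { defineAgent, createRuntime } from '@helix-agents/sdk';

const echoAgent = defineAgent({
  systemPrompt: 'Repeat the user message back verbatim.',
  tools: [],
});

// The sdk wires up the in-memory stores and JS runtime for you.
const runtime = createRuntime();
await runtime.run(echoAgent, { input: 'hello' });
```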
When to Use Helix Agents
Good fit:
- You need to swap runtimes between development and production
- You want durable execution (Temporal) or edge deployment (Cloudflare)
- You're building multi-agent systems with parent-child orchestration
- You need real-time streaming with custom events
- You want type-safe agents with Zod schemas
Maybe not the best fit:
- Simple one-off scripts (just use the Vercel AI SDK directly)
- You don't need runtime/storage swapping
- You need a specific framework's ecosystem (LangChain's integrations)
Next Steps
- Core Concepts - Understand agents, tools, state, and streaming
- Getting Started - Build your first agent
- Runtime Comparison - Choose the right runtime for your use case