# AI SDK Package
The @helix-agents/ai-sdk package bridges Helix Agents with Vercel AI SDK frontend hooks. It transforms Helix's internal streaming protocol to the AI SDK UI Data Stream format.
## Installation

```bash
npm install @helix-agents/ai-sdk
```

## FrontendHandler
The main class for handling frontend requests:
```typescript
import { createFrontendHandler } from '@helix-agents/ai-sdk';

const handler = createFrontendHandler({
  streamManager,
  executor,
  agent: MyAgent,
  stateStore, // Optional: for getMessages()
  transformerOptions: { ... }, // Optional: customize transformation
  logger: console, // Optional: debug logging
});
```

### Request Modes
**POST mode** - execute a new agent run:

```typescript
const response = await handler.handleRequest({
  method: 'POST',
  body: {
    message: 'Hello, agent!',
    state: { initialValue: 42 }, // Optional initial state
  },
});
```

**GET mode** - stream an existing execution:
```typescript
const response = await handler.handleRequest({
  method: 'GET',
  streamId: 'run-123',
  resumeAt: lastEventId, // Optional: resume from position
});
```

### Response Handling

The handler returns a framework-agnostic response:
```typescript
interface FrontendResponse {
  status: number;
  headers: Record<string, string>;
  body: ReadableStream<Uint8Array> | string;
}
```

Convert it to your framework's response type:
```typescript
// Hono / Web standards
return new Response(response.body, {
  status: response.status,
  headers: response.headers,
});

// Express (use the pipeToExpress helper)
import { pipeToExpress } from '@helix-agents/ai-sdk/adapters/express';
await pipeToExpress(response, res);
```

## Loading Message History

Load conversation history for `useChat` `initialMessages`:
```typescript
const { messages, hasMore } = await handler.getMessages(runId, {
  // Pagination
  offset: 0,
  limit: 50,
  // Content options
  includeReasoning: true, // Include thinking content
  includeToolResults: true, // Merge tool results into messages
  // Custom ID generation
  generateId: (index, msg) => `msg-${index}`,
});
```
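Because `getMessages` is paginated, restoring a long conversation may mean looping until `hasMore` is `false`. A minimal sketch of that loop, assuming a `fetchPage` callback that wraps `handler.getMessages(runId, { offset, limit })` (both `loadAllMessages` and `fetchPage` are hypothetical helpers, not part of the package):

```typescript
// Hypothetical helper: pages through a getMessages-style API until exhausted.
// `fetchPage` is assumed to wrap handler.getMessages(runId, { offset, limit }).
async function loadAllMessages<T>(
  fetchPage: (offset: number, limit: number) => Promise<{ messages: T[]; hasMore: boolean }>,
  limit = 50,
): Promise<T[]> {
  const all: T[] = [];
  let offset = 0;
  let hasMore = true;
  while (hasMore) {
    const page = await fetchPage(offset, limit);
    all.push(...page.messages);
    hasMore = page.hasMore;
    offset += page.messages.length; // advance by what was actually returned
  }
  return all;
}
```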
```typescript
// Use with useChat (client side); `messages` is the history loaded above
const chat = useChat({
  initialMessages: messages,
});
```

## StreamTransformer
Transforms individual Helix chunks to AI SDK events:
```typescript
import { StreamTransformer } from '@helix-agents/ai-sdk';

const transformer = new StreamTransformer({
  // Custom message ID generation
  generateMessageId: (agentId) => `msg-${agentId}`,
  // Include step boundary events
  includeStepEvents: false,
  // Filter chunks
  chunkFilter: (chunk) => chunk.type !== 'state_patch',
  // Debug logging
  logger: console,
});
```

### Transformation Flow
```typescript
// Stream processing
for await (const chunk of helixStream) {
  const { events, sequence } = transformer.transform(chunk);
  for (const event of events) {
    // Emit SSE with optional event ID for resumability
    yield { event, sequence };
  }
}

// Always finalize to close blocks and emit finish
const { events } = transformer.finalize();
for (const event of events) {
  yield event;
}
```

### Event Mapping
| Helix Chunk | AI SDK Events |
|---|---|
| `text_delta` | `text-start` (once), `text-delta` |
| `thinking` | `reasoning-start` (once), `reasoning-delta`, `reasoning-end` (if complete) |
| `tool_start` | `text-end` (if text open), `tool-input-available` |
| `tool_end` | `tool-output-available` |
| `subagent_start` | `data-subagent-start` |
| `subagent_end` | `data-subagent-end` |
| `custom` | `data-{eventName}` |
| `state_patch` | `data-state-patch` |
| `error` | `error` |
| `output` | `data-output` |
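As an illustration of the table above, the primary event name for each chunk type could be derived as follows. This is a simplified sketch: it ignores the block open/close events the real transformer also emits, and the `HelixChunk` shape shown here is abbreviated, not the package's actual type:

```typescript
// Simplified chunk shape for illustration; the real type carries more fields.
type HelixChunk = { type: string; eventName?: string };

// Primary AI SDK event name per Helix chunk type (block start/end events omitted).
function primaryEventName(chunk: HelixChunk): string {
  switch (chunk.type) {
    case 'text_delta':     return 'text-delta';
    case 'thinking':       return 'reasoning-delta';
    case 'tool_start':     return 'tool-input-available';
    case 'tool_end':       return 'tool-output-available';
    case 'subagent_start': return 'data-subagent-start';
    case 'subagent_end':   return 'data-subagent-end';
    case 'custom':         return `data-${chunk.eventName}`;
    case 'state_patch':    return 'data-state-patch';
    case 'output':         return 'data-output';
    case 'error':          return 'error';
    default:               return 'unknown';
  }
}
```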
### Block Management
The transformer manages text and reasoning blocks:
```typescript
// First text_delta opens a text block:
// { type: 'text-start', id: 'block-1' }
// { type: 'text-delta', id: 'block-1', delta: 'Hello' }

// Switching to tool_start closes the text block:
// { type: 'text-end', id: 'block-1' }
// { type: 'tool-input-available', ... }

// A new text_delta opens a new block:
// { type: 'text-start', id: 'block-2' }
```

## Message Converter
Converts Helix internal messages to AI SDK v5 UIMessage format:
```typescript
import { convertToUIMessages } from '@helix-agents/ai-sdk';

const uiMessages = convertToUIMessages(helixMessages, {
  generateId: (index, msg) => `msg-${index}`,
  includeReasoning: true,
  includeToolResults: true,
});
```

### AI SDK v5 Format
The converter produces AI SDK v5 UIMessage format:
```typescript
interface UIMessage {
  id: string;
  role: 'user' | 'assistant' | 'system';
  parts: UIMessagePart[]; // v5: parts is the source of truth
}

type UIMessagePart =
  | { type: 'text'; text: string }
  | { type: 'reasoning'; text: string }
  | {
      type: `tool-${string}`;
      toolCallId: string;
      input: Record<string, unknown>;
      state: ToolInvocationState;
      output?: unknown;
    };
```

### Conversion Rules
- System messages → Single text part
- User messages → Single text part
- Assistant messages → Text, reasoning, and tool parts
- Tool result messages → Merged into assistant's tool parts (not separate messages)
```typescript
// Helix messages
[
  { role: 'user', content: 'Hello' },
  { role: 'assistant', content: 'Let me search...', toolCalls: [...] },
  { role: 'tool', toolCallId: 'tc1', content: '{"result": "..."}' },
]

// Converted to UI messages (v5 format)
[
  { id: 'msg-0', role: 'user', parts: [{ type: 'text', text: 'Hello' }] },
  {
    id: 'msg-1',
    role: 'assistant',
    parts: [
      { type: 'text', text: 'Let me search...' },
      { type: 'tool-search', toolCallId: 'tc1', input: {...}, state: 'output-available', output: {...} }
    ]
  },
]
```

## SSE Response Builder
Build Server-Sent Events responses:
```typescript
import { buildSSEResponse, createSSEStream, createSSEHeaders } from '@helix-agents/ai-sdk';

// Full response builder
const response = buildSSEResponse(eventsGenerator, {
  headers: { 'X-Custom-Header': 'value' },
});

// Or build manually
const headers = createSSEHeaders({ 'X-Custom': 'value' });
const stream = createSSEStream(eventsGenerator);
```

### SSE Format
Events are formatted as SSE:

```
id: 1
data: {"type":"text-delta","id":"block-1","delta":"Hello"}

id: 2
data: {"type":"text-delta","id":"block-1","delta":" world"}

data: {"type":"finish"}
```

The `id:` field enables stream resumability.
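A client consuming the stream manually (outside `EventSource`, which tracks the last event id automatically) can recover the resume position from the raw SSE text. An illustrative sketch of such a parser, not the package's internal one:

```typescript
// Parse raw SSE text into data payloads, tracking the last seen event id
// so a reconnect can send it as the Last-Event-ID header. Illustrative only.
function parseSSE(raw: string): { data: string[]; lastEventId: string | null } {
  const data: string[] = [];
  let lastEventId: string | null = null;
  for (const line of raw.split('\n')) {
    if (line.startsWith('id:')) lastEventId = line.slice(3).trim();
    else if (line.startsWith('data:')) data.push(line.slice(5).trim());
  }
  return { data, lastEventId };
}
```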
## Header Utilities
Extract resume position from headers:
```typescript
import { extractResumePosition, AI_SDK_UI_HEADER, AI_SDK_UI_HEADER_VALUE } from '@helix-agents/ai-sdk';

// From the Last-Event-ID header (automatic reconnection)
const lastEventId = request.headers.get('Last-Event-ID');
const resumeAt = extractResumePosition(lastEventId);

// AI SDK UI header for detection
// 'X-AI-SDK-UI': 'vercel-ai-sdk-ui'
const isAISDK = request.headers.get(AI_SDK_UI_HEADER) === AI_SDK_UI_HEADER_VALUE;
```

## Typed Errors
All errors extend `FrontendHandlerError`:
```typescript
import {
  FrontendHandlerError,
  ValidationError,
  StreamNotFoundError,
  StreamFailedError,
  ConfigurationError,
  ExecutionError,
  StreamCreationError,
} from '@helix-agents/ai-sdk';
```

### Error Types
| Error | Code | Status | When |
|---|---|---|---|
| `ValidationError` | `VALIDATION_ERROR` | 400 | Missing/invalid request params |
| `StreamNotFoundError` | `STREAM_NOT_FOUND` | 404 | Stream doesn't exist |
| `StreamFailedError` | `STREAM_FAILED` | 410 | Stream has failed |
| `ConfigurationError` | `CONFIGURATION_ERROR` | 501 | Missing configuration |
| `ExecutionError` | `EXECUTION_ERROR` | 500 | Agent execution failed |
| `StreamCreationError` | `STREAM_CREATION_ERROR` | 500 | Stream creation failed |
### Error Handling Pattern
```typescript
try {
  const response = await handler.handleRequest(req);
  return new Response(response.body, {
    status: response.status,
    headers: response.headers,
  });
} catch (error) {
  if (error instanceof FrontendHandlerError) {
    return Response.json({ error: error.message, code: error.code }, { status: error.statusCode });
  }
  // Re-throw unexpected errors
  throw error;
}
```

## Stream Resumability
The handler supports SSE event IDs for stream resumability.

### How It Works

- Each chunk gets a sequence number from the stream manager
- Sequence numbers become SSE `id:` fields
- On disconnect, the browser reconnects with the `Last-Event-ID` header
- The handler resumes from that position
### Implementation

```typescript
// The handler automatically handles resumability
const response = await handler.handleRequest({
  method: 'GET',
  streamId: 'run-123',
  resumeAt: extractResumePosition(req.headers.get('Last-Event-ID')),
});

// Requires a stream manager that supports resumable readers
if (streamManager.createResumableReader) {
  reader = await streamManager.createResumableReader(streamId, {
    fromSequence: resumeAt,
  });
}
```

### Stream Status Handling
GET mode returns different status codes:

- `200` - Active stream with content
- `204` - No content (stream ended, not found, or empty)
- `410` - Stream failed (Gone)

## Complete Example
```typescript
import { createFrontendHandler, FrontendHandlerError } from '@helix-agents/ai-sdk';
import { JSAgentExecutor } from '@helix-agents/runtime-js';
import { InMemoryStateStore, InMemoryStreamManager } from '@helix-agents/store-memory';
import { VercelAIAdapter } from '@helix-agents/llm-vercel';
import { defineAgent } from '@helix-agents/core';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';
import { Hono } from 'hono';

// Define the agent
const ChatAgent = defineAgent({
  name: 'chat',
  systemPrompt: 'You are a helpful assistant.',
  outputSchema: z.object({
    response: z.string(),
  }),
  llmConfig: {
    model: openai('gpt-4o'),
  },
});

// Create the executor
const stateStore = new InMemoryStateStore();
const streamManager = new InMemoryStreamManager();
const executor = new JSAgentExecutor(stateStore, streamManager, new VercelAIAdapter());

// Create the handler
const handler = createFrontendHandler({
  streamManager,
  executor,
  agent: ChatAgent,
  stateStore,
});

// Use with Hono
const app = new Hono();

app.post('/api/chat', async (c) => {
  try {
    const body = await c.req.json();
    const response = await handler.handleRequest({
      method: 'POST',
      body: { message: body.message },
    });
    return new Response(response.body, {
      status: response.status,
      headers: response.headers,
    });
  } catch (error) {
    if (error instanceof FrontendHandlerError) {
      return c.json({ error: error.message, code: error.code }, error.statusCode);
    }
    throw error;
  }
});

// Load messages for conversation restore
app.get('/api/messages/:runId', async (c) => {
  const runId = c.req.param('runId');
  const { messages, hasMore } = await handler.getMessages(runId);
  return c.json({ messages, hasMore });
});
```

## Next Steps
- React Integration - Building React chat UIs
- Framework Examples - Express, Hono setup
- Streaming Guide - Helix streaming deep dive