# @helix-agents/llm-vercel

Vercel AI SDK adapter for the Helix Agents framework. Provides LLM integration using the Vercel AI SDK's `streamText` function.

## Installation

```bash
npm install @helix-agents/llm-vercel ai @ai-sdk/openai
```

## VercelAIAdapter

Main adapter class implementing the `LLMAdapter` interface.

```typescript
import { VercelAIAdapter } from '@helix-agents/llm-vercel';

const llmAdapter = new VercelAIAdapter();
```

### generateStep
Generate a single agent step with the LLM.

```typescript
const result = await llmAdapter.generateStep({
  messages,     // Conversation history
  tools,        // Available tools
  llmConfig,    // Model configuration
  outputSchema, // Optional: for structured output
}, {
  // Optional callbacks for streaming
  onTextDelta: async (delta) => { /* ... */ },
  onThinking: async (content, isComplete) => { /* ... */ },
  onToolStart: async (id, name, args) => { /* ... */ },
  onToolEnd: async (id, result) => { /* ... */ },
});
```

Parameters:
```typescript
interface LLMGenerateInput<TOutput> {
  messages: Message[];
  tools: LLMTool[];
  llmConfig: LLMConfig;
  outputSchema?: ZodType<TOutput>;
}

interface LLMStreamCallbacks {
  onTextDelta?: (delta: string) => Promise<void>;
  onThinking?: (content: string, isComplete: boolean) => Promise<void>;
  onToolStart?: (id: string, name: string, args: unknown) => Promise<void>;
  onToolEnd?: (id: string, result: unknown) => Promise<void>;
}
```

Returns: `StepResult<TOutput>`

### formatAssistantMessage

Formats an assistant message plan into the internal `Message` format.
```typescript
const message = llmAdapter.formatAssistantMessage({
  content: 'Hello!',
  toolCalls: [],
  subAgentCalls: [],
  thinking: { type: 'text', text: 'thinking...' },
});
```

## LLMConfig

Model configuration passed to the adapter.
```typescript
import { openai } from '@ai-sdk/openai';
import { anthropic } from '@ai-sdk/anthropic';

const config: LLMConfig = {
  // Required: Model from AI SDK
  model: openai('gpt-4o'),

  // Optional: Temperature (0-2)
  temperature: 0.7,

  // Optional: Max output tokens
  maxOutputTokens: 4096,

  // Optional: Prompt caching
  caching: 'auto', // Automatic provider-specific cache optimization

  // Optional: Provider-specific options
  providerOptions: {
    // For OpenAI o-series models
    openai: {
      reasoningEffort: 'medium',
    },
    // For Anthropic with extended thinking
    anthropic: {
      thinking: {
        type: 'enabled',
        budgetTokens: 10000,
      },
    },
  },
};
```

## Supported Providers

Any provider supported by the Vercel AI SDK:
```typescript
// OpenAI
import { openai } from '@ai-sdk/openai';
const model = openai('gpt-4o');

// Anthropic
import { anthropic } from '@ai-sdk/anthropic';
const model = anthropic('claude-sonnet-4-20250514');

// Google
import { google } from '@ai-sdk/google';
const model = google('gemini-1.5-pro');

// Azure OpenAI
import { azure } from '@ai-sdk/azure';
const model = azure('gpt-4');

// Cohere
import { cohere } from '@ai-sdk/cohere';
const model = cohere('command-r-plus');
```

## Thinking/Reasoning Support

### Anthropic Extended Thinking
```typescript
const agent = defineAgent({
  llmConfig: {
    model: anthropic('claude-sonnet-4-20250514'),
    providerOptions: {
      anthropic: {
        thinking: {
          type: 'enabled',
          budgetTokens: 10000,
        },
      },
    },
  },
});
```

Thinking content is:

- Streamed via the `onThinking` callback
- Included in `StepResult.thinking`
- Stored in `AssistantMessage.thinking`
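If you want to capture streamed thinking output yourself, a small collector matching the `onThinking` callback signature is enough. This is a hypothetical helper, not part of the package; it simply accumulates deltas until `isComplete` is signaled:

```typescript
// Hypothetical helper: accumulates streamed thinking deltas into
// completed thinking blocks, matching the LLMStreamCallbacks shape.
function createThinkingCollector() {
  let buffer = '';
  const blocks: string[] = [];
  return {
    // Pass this as the onThinking callback to generateStep
    onThinking: async (content: string, isComplete: boolean): Promise<void> => {
      buffer += content;
      if (isComplete) {
        blocks.push(buffer);
        buffer = '';
      }
    },
    // Completed thinking blocks collected so far
    blocks: (): string[] => blocks.slice(),
  };
}
```

You would pass `collector.onThinking` in the callbacks argument to `generateStep` alongside `onTextDelta` and the tool callbacks.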
### OpenAI o-series Reasoning

```typescript
const agent = defineAgent({
  llmConfig: {
    model: openai('o1'),
    providerOptions: {
      openai: {
        reasoningEffort: 'high', // 'low' | 'medium' | 'high'
      },
    },
  },
});
```

## Chunk Mapping Utilities

Convert Vercel AI SDK chunks to Helix stream chunks.
```typescript
import {
  mapVercelChunkToStreamChunk,
  isTextContentChunk,
  isToolChunk,
  isCompletionChunk,
} from '@helix-agents/llm-vercel';

// Check chunk type
if (isTextContentChunk(vercelChunk)) {
  const helixChunk = mapVercelChunkToStreamChunk(vercelChunk, {
    agentId: 'run-123',
    timestamp: Date.now(),
  });
}
```

## Usage Example
```typescript
import { VercelAIAdapter } from '@helix-agents/llm-vercel';
import { JSAgentExecutor } from '@helix-agents/runtime-js';
import { InMemoryStateStore, InMemoryStreamManager } from '@helix-agents/store-memory';
import { defineAgent, defineTool } from '@helix-agents/core';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

// Define agent
const MyAgent = defineAgent({
  name: 'my-agent',
  systemPrompt: 'You are a helpful assistant.',
  outputSchema: z.object({ response: z.string() }),
  tools: [
    defineTool({
      name: 'search',
      description: 'Search for information',
      inputSchema: z.object({ query: z.string() }),
      outputSchema: z.object({ results: z.array(z.string()) }),
      execute: async ({ query }) => ({ results: [`Result for: ${query}`] }),
    }),
  ],
  llmConfig: {
    model: openai('gpt-4o'),
    temperature: 0.7,
    maxOutputTokens: 4096,
  },
});

// Create executor with Vercel adapter
const executor = new JSAgentExecutor({
  stateStore: new InMemoryStateStore(),
  streamManager: new InMemoryStreamManager(),
  llmAdapter: new VercelAIAdapter(),
});

// Execute
const handle = await executor.execute(MyAgent, 'Search for TypeScript tutorials');
const result = await handle.result();
```

## Error Handling

The adapter classifies Vercel AI SDK errors into typed `HelixError` instances.

### mapVercelError

Convert Vercel AI SDK errors to a typed `HelixError`:
```typescript
import { mapVercelError, mapStatusCodeToErrorCode } from '@helix-agents/llm-vercel';

const helixError = mapVercelError(vercelError);
// Returns HelixError with code, category, retryable, statusCode
```

Handles `APICallError` (status code mapping), `RetryError` (extracts the last error), and generic errors.

### mapStatusCodeToErrorCode

Map HTTP status codes to error codes:

```typescript
import { mapStatusCodeToErrorCode } from '@helix-agents/llm-vercel';

mapStatusCodeToErrorCode(429); // 'provider_rate_limited'
mapStatusCodeToErrorCode(503); // 'provider_overloaded'
mapStatusCodeToErrorCode(401); // 'provider_auth_error'
```

| Status Code | ErrorCode |
|---|---|
| 401, 403 | `provider_auth_error` |
| 429 | `provider_rate_limited` |
| 408 | `provider_timeout` |
| 400, 422 | `provider_invalid_request` |
| 503, 529 | `provider_overloaded` |
| Other 5xx | `provider_error` |
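Because the mapped errors carry a stable `code`, callers can build simple retry policies on top of the classification. The sketch below is illustrative, not part of the package: the set of retryable codes and the backoff policy are assumptions, and in practice `getCode` would be something like `(err) => mapVercelError(err).code`.

```typescript
// Illustrative retry helper keyed on the error codes in the table above.
// Rate limits, overload, timeouts, and generic 5xx errors are treated as
// transient; auth and invalid-request errors are rethrown immediately.
const RETRYABLE_CODES = new Set([
  'provider_rate_limited',
  'provider_overloaded',
  'provider_timeout',
  'provider_error',
]);

async function withRetries<T>(
  fn: () => Promise<T>,
  getCode: (err: unknown) => string,
  maxAttempts = 3,
  baseDelayMs = 500,
): Promise<T> {
  for (let attempt = 1; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt >= maxAttempts || !RETRYABLE_CODES.has(getCode(err))) throw err;
      // Exponential backoff: baseDelayMs, 2x, 4x, ...
      await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** (attempt - 1)));
    }
  }
}
```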
### Error Flow

When the LLM adapter encounters an error, the runtime's `onError` callback receives the classified `HelixError` and writes an `ErrorChunk` to the stream with `code` and `recoverable` fields. See the Error Handling Guide for the complete flow.
## Stop Reason Mapping

The adapter normalizes finish reasons to `StopReason`:

| Provider Reason | Helix StopReason |
|---|---|
| `stop` | `end_turn` |
| `end_turn` | `end_turn` |
| `tool-calls` | `tool_use` |
| `tool_use` | `tool_use` |
| `length` | `max_tokens` |
| `max_tokens` | `max_tokens` |
| `content-filter` | `content_filter` |
| Other | `unknown` |
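The table above can be expressed as a small switch. This is a sketch of the mapping, not the adapter's actual implementation; the `StopReason` union is assumed from the Helix values shown:

```typescript
// Assumed union of Helix stop reasons, taken from the table above.
type StopReason = 'end_turn' | 'tool_use' | 'max_tokens' | 'content_filter' | 'unknown';

// Normalize a provider finish reason to a Helix StopReason.
function mapFinishReason(reason: string): StopReason {
  switch (reason) {
    case 'stop':
    case 'end_turn':
      return 'end_turn';
    case 'tool-calls':
    case 'tool_use':
      return 'tool_use';
    case 'length':
    case 'max_tokens':
      return 'max_tokens';
    case 'content-filter':
      return 'content_filter';
    default:
      return 'unknown';
  }
}
```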