# @helix-agents/llm-vercel

Vercel AI SDK adapter for the Helix Agents framework. Provides LLM integration using the Vercel AI SDK's `streamText` function.
## Installation

```bash
npm install @helix-agents/llm-vercel ai @ai-sdk/openai
```

## VercelAIAdapter
Main adapter class implementing the `LLMAdapter` interface.
```typescript
import { VercelAIAdapter } from '@helix-agents/llm-vercel';

const llmAdapter = new VercelAIAdapter();
```

### generateStep
Generate a single agent step with the LLM.
```typescript
const result = await llmAdapter.generateStep(
  {
    messages,     // Conversation history
    tools,        // Available tools
    llmConfig,    // Model configuration
    outputSchema, // Optional: for structured output
  },
  {
    // Optional callbacks for streaming
    onTextDelta: async (delta) => { /* ... */ },
    onThinking: async (content, isComplete) => { /* ... */ },
    onToolStart: async (id, name, args) => { /* ... */ },
    onToolEnd: async (id, result) => { /* ... */ },
  },
);
```

**Parameters:**
```typescript
interface LLMGenerateInput<TOutput> {
  messages: Message[];
  tools: LLMTool[];
  llmConfig: LLMConfig;
  outputSchema?: ZodType<TOutput>;
}

interface LLMStreamCallbacks {
  onTextDelta?: (delta: string) => Promise<void>;
  onThinking?: (content: string, isComplete: boolean) => Promise<void>;
  onToolStart?: (id: string, name: string, args: unknown) => Promise<void>;
  onToolEnd?: (id: string, result: unknown) => Promise<void>;
}
```

**Returns:** `StepResult<TOutput>`
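For structured output, pass a Zod schema as `outputSchema`. A minimal sketch; the schema and the message literal below are illustrative, and the exact `Message` shape is defined by `@helix-agents/core`:

```typescript
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

// Hypothetical schema for a structured answer.
const AnswerSchema = z.object({
  answer: z.string(),
  confidence: z.number().min(0).max(1),
});

const step = await llmAdapter.generateStep({
  messages: [{ role: 'user', content: 'Is TypeScript a superset of JavaScript?' }],
  tools: [],
  llmConfig: { model: openai('gpt-4o') },
  // Result is typed as StepResult<{ answer: string; confidence: number }>.
  outputSchema: AnswerSchema,
});
```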
### formatAssistantMessage
Formats an assistant message plan into the internal `Message` format.
```typescript
const message = llmAdapter.formatAssistantMessage({
  content: 'Hello!',
  toolCalls: [],
  subAgentCalls: [],
  thinking: { type: 'text', text: 'thinking...' },
});
```

## LLMConfig
Model configuration passed to the adapter.
```typescript
import { openai } from '@ai-sdk/openai';
import { anthropic } from '@ai-sdk/anthropic';

const config: LLMConfig = {
  // Required: model from the AI SDK
  model: openai('gpt-4o'),

  // Optional: temperature (0-2)
  temperature: 0.7,

  // Optional: max output tokens
  maxOutputTokens: 4096,

  // Optional: provider-specific options
  providerOptions: {
    // For OpenAI o-series models
    openai: {
      reasoningEffort: 'medium',
    },
    // For Anthropic with extended thinking
    anthropic: {
      thinking: {
        type: 'enabled',
        budgetTokens: 10000,
      },
    },
  },
};
```

## Supported Providers
Any provider supported by the Vercel AI SDK works:
```typescript
// OpenAI
import { openai } from '@ai-sdk/openai';
const model = openai('gpt-4o');

// Anthropic
import { anthropic } from '@ai-sdk/anthropic';
const model = anthropic('claude-sonnet-4-20250514');

// Google
import { google } from '@ai-sdk/google';
const model = google('gemini-1.5-pro');

// Azure OpenAI
import { azure } from '@ai-sdk/azure';
const model = azure('gpt-4');

// Cohere
import { cohere } from '@ai-sdk/cohere';
const model = cohere('command-r-plus');
```

## Thinking/Reasoning Support
### Anthropic Extended Thinking
```typescript
const agent = defineAgent({
  llmConfig: {
    model: anthropic('claude-sonnet-4-20250514'),
    providerOptions: {
      anthropic: {
        thinking: {
          type: 'enabled',
          budgetTokens: 10000,
        },
      },
    },
  },
});
```

Thinking content is:
- Streamed via the `onThinking` callback
- Included in `StepResult.thinking`
- Stored in `AssistantMessage.thinking`
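For example, to surface thinking as it streams, wire up the `onThinking` callback documented above (a minimal sketch, assuming `input` is an `LLMGenerateInput` as shown earlier):

```typescript
const result = await llmAdapter.generateStep(input, {
  onThinking: async (content, isComplete) => {
    // Print thinking content to the terminal as it arrives.
    process.stdout.write(content);
    if (isComplete) process.stdout.write('\n--- thinking complete ---\n');
  },
});
```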
### OpenAI o-series Reasoning
```typescript
const agent = defineAgent({
  llmConfig: {
    model: openai('o1'),
    providerOptions: {
      openai: {
        reasoningEffort: 'high', // 'low' | 'medium' | 'high'
      },
    },
  },
});
```

## Chunk Mapping Utilities
Convert Vercel AI SDK chunks to Helix stream chunks.
```typescript
import {
  mapVercelChunkToStreamChunk,
  isTextContentChunk,
  isToolChunk,
  isCompletionChunk,
} from '@helix-agents/llm-vercel';

// Check the chunk type before mapping
if (isTextContentChunk(vercelChunk)) {
  const helixChunk = mapVercelChunkToStreamChunk(vercelChunk, {
    agentId: 'run-123',
    timestamp: Date.now(),
  });
}
```

## Usage Example
```typescript
import { VercelAIAdapter } from '@helix-agents/llm-vercel';
import { JSAgentExecutor } from '@helix-agents/runtime-js';
import { InMemoryStateStore, InMemoryStreamManager } from '@helix-agents/store-memory';
import { defineAgent, defineTool } from '@helix-agents/core';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

// Define the agent
const MyAgent = defineAgent({
  name: 'my-agent',
  systemPrompt: 'You are a helpful assistant.',
  outputSchema: z.object({ response: z.string() }),
  tools: [
    defineTool({
      name: 'search',
      description: 'Search for information',
      inputSchema: z.object({ query: z.string() }),
      outputSchema: z.object({ results: z.array(z.string()) }),
      execute: async ({ query }) => ({ results: [`Result for: ${query}`] }),
    }),
  ],
  llmConfig: {
    model: openai('gpt-4o'),
    temperature: 0.7,
    maxOutputTokens: 4096,
  },
});

// Create an executor with the Vercel adapter
const executor = new JSAgentExecutor({
  stateStore: new InMemoryStateStore(),
  streamManager: new InMemoryStreamManager(),
  llmAdapter: new VercelAIAdapter(),
});

// Execute
const handle = await executor.execute(MyAgent, 'Search for TypeScript tutorials');
const result = await handle.result();
```

## Error Handling
The adapter passes Vercel AI SDK errors through to the caller:
```typescript
try {
  const result = await llmAdapter.generateStep(input);
} catch (error) {
  // Vercel AI SDK errors are passed through
  console.error('LLM error:', error);
}
```

## Stop Reason Mapping
The adapter normalizes provider finish reasons to `StopReason`:
| Provider Reason | Helix StopReason |
|---|---|
| `stop` | `end_turn` |
| `end_turn` | `end_turn` |
| `tool-calls` | `tool_use` |
| `tool_use` | `tool_use` |
| `length` | `max_tokens` |
| `max_tokens` | `max_tokens` |
| `content-filter` | `content_filter` |
| Other | `unknown` |
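A minimal sketch of branching on the normalized reason after a step; the `stopReason` field name on `StepResult` is an assumption here:

```typescript
// Sketch: reacting to the normalized stop reason.
// The exact field name on StepResult is an assumption.
switch (step.stopReason) {
  case 'tool_use':
    // The model requested tool calls; execute them and continue the loop.
    break;
  case 'max_tokens':
    // Output was truncated; consider raising maxOutputTokens in LLMConfig.
    break;
  case 'content_filter':
    // The provider filtered the output; handle or report accordingly.
    break;
  case 'end_turn':
  default:
    // Generation completed normally (or an unknown reason was reported).
    break;
}
```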