Research Assistant (JS Runtime)
This example demonstrates a research assistant agent using the JavaScript runtime. It shows:
- Custom state schema for tracking progress
- Multiple tools (web search, note-taking)
- Structured output schema
- Dynamic system prompts
- Real-time streaming
Source Code
The full example is in examples/research-assistant/.
Running the Example
```bash
# Clone and install
cd examples/research-assistant
npm install
# Set your API key
export OPENAI_API_KEY=sk-xxx
# Run the demo
npm run demo
# Or with a custom topic
npm run demo "quantum computing applications"
```

Project Structure

```
examples/research-assistant/
├── src/
│ ├── agent.ts # Agent definition
│ ├── types.ts # State and output schemas
│ ├── run.ts # Demo runner
│ └── tools/
│ ├── web-search.ts
│ └── take-notes.ts
└── package.json
```

State Schema
The agent tracks research progress in custom state:
```typescript
// src/types.ts
import { z } from 'zod';
export const ResearchStateSchema = z.object({
// The topic being researched
topic: z.string().default(''),
// Notes collected during research
notes: z
.array(
z.object({
content: z.string(),
source: z.string().optional(),
})
)
.default([]),
// Search results collected
searchResults: z
.array(
z.object({
title: z.string(),
snippet: z.string(),
url: z.string(),
})
)
.default([]),
});
export type ResearchState = z.infer<typeof ResearchStateSchema>;
```

Key points:
- All fields have defaults (`.default([])`), so an agent can start with just a user message (see the snippet below)
- State tracks search results and notes separately
- The topic is extracted from the user message
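For instance, because every field has a default, parsing an empty object yields a fully-initialized state:

```typescript
import { ResearchStateSchema } from './types.js';

// Every field falls back to its .default() value, so no initial data is required.
const initialState = ResearchStateSchema.parse({});
// => { topic: '', notes: [], searchResults: [] }
```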
Output Schema
The structured output when research completes:
```typescript
export const ResearchOutputSchema = z.object({
topic: z.string(),
keyFindings: z.array(z.string()),
sources: z.array(z.string()),
summary: z.string(),
});
export type ResearchOutput = z.infer<typeof ResearchOutputSchema>;
```
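For reference, a value conforming to this schema looks like the following (illustrative data only; the real values come from the LLM):

```typescript
import type { ResearchOutput } from './types.js';

// Hypothetical output for the default demo topic.
const example: ResearchOutput = {
  topic: 'TypeScript for large codebases',
  keyFindings: ['Strict compiler options catch regressions early'],
  sources: ['https://example.com/typescript-at-scale'],
  summary: 'A short synthesis of the collected findings.',
};
```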
Tools

Web Search
```typescript
// src/tools/web-search.ts
import { z } from 'zod';
import { defineTool } from '@helix-agents/core';
export const webSearchTool = defineTool({
name: 'web_search',
description: 'Search the web for information on a topic.',
inputSchema: z.object({
query: z.string().describe('The search query'),
maxResults: z.number().optional().default(5),
}),
outputSchema: z.array(
z.object({
title: z.string(),
snippet: z.string(),
url: z.string(),
})
),
execute: async (input, context) => {
const { query, maxResults } = input;
// Emit custom event
await context.emit('search_started', { query });
// Mock search results (replace with real API)
const results = generateMockResults(query, maxResults);
await context.emit('search_completed', { resultCount: results.length });
return results;
},
});
```
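The generateMockResults helper is part of the example source and isn't reproduced here; a minimal stand-in that only needs to satisfy the tool's output schema might look like this:

```typescript
// Hypothetical placeholder: fabricates deterministic results matching the
// web_search output schema (title, snippet, url).
function generateMockResults(query: string, maxResults = 5) {
  return Array.from({ length: maxResults }, (_, i) => ({
    title: `Result ${i + 1} for "${query}"`,
    snippet: `Placeholder snippet about ${query}.`,
    url: `https://example.com/${encodeURIComponent(query)}/${i + 1}`,
  }));
}
```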
Take Notes

```typescript
// src/tools/take-notes.ts
import { z } from 'zod';
import { defineTool } from '@helix-agents/core';
import type { ResearchState } from '../types.js';
export const takeNotesTool = defineTool({
name: 'take_notes',
description: 'Record important findings for later reference.',
inputSchema: z.object({
content: z.string().describe('The note content'),
source: z.string().optional().describe('Source URL'),
}),
outputSchema: z.object({
success: z.boolean(),
noteCount: z.number(),
}),
execute: async (input, context) => {
const { content, source } = input;
// Update state with new note
context.updateState<ResearchState>((draft) => {
draft.notes.push({ content, source });
});
// Get current note count
const state = context.getState<ResearchState>();
return {
success: true,
noteCount: state.notes.length,
};
},
});
```

Agent Definition
```typescript
// src/agent.ts
import { openai } from '@ai-sdk/openai';
import { defineAgent } from '@helix-agents/core';
import { ResearchStateSchema, ResearchOutputSchema } from './types.js';
import { webSearchTool, takeNotesTool } from './tools/index.js';
export const ResearchAssistantAgent = defineAgent({
name: 'research-assistant',
description: 'Researches topics and produces structured summaries',
stateSchema: ResearchStateSchema,
outputSchema: ResearchOutputSchema,
tools: [webSearchTool, takeNotesTool],
// Dynamic system prompt based on state
systemPrompt: (state) => `You are a research assistant.
Current topic: ${state?.topic || 'Not yet specified'}
Tools available:
- web_search: Search for information
- take_notes: Record findings
Your task:
1. Use web_search to find information
2. Use take_notes to record key findings
3. Call __finish__ with your research output
Current notes: ${state?.notes?.length ?? 0}
Search results: ${state?.searchResults?.length ?? 0}
When done, provide:
- topic: The research topic
- keyFindings: Array of key findings
- sources: Source URLs
- summary: Comprehensive summary`,
llmConfig: {
model: openai('gpt-4o-mini'),
temperature: 0.7,
maxOutputTokens: 4096,
},
maxSteps: 20,
});
```

Running the Agent
```typescript
// src/run.ts
import {
JSAgentExecutor,
InMemoryStateStore,
InMemoryStreamManager,
} from '@helix-agents/runtime-js';
import { VercelAIAdapter } from '@helix-agents/llm-vercel';
import { ResearchAssistantAgent } from './agent.js';
async function main() {
const topic = process.argv[2] || 'TypeScript for large codebases';
// Create infrastructure
const stateStore = new InMemoryStateStore();
const streamManager = new InMemoryStreamManager();
const llmAdapter = new VercelAIAdapter();
// Create executor
const executor = new JSAgentExecutor(stateStore, streamManager, llmAdapter);
// Execute agent
const handle = await executor.execute(ResearchAssistantAgent, topic);
console.log(`Run ID: ${handle.runId}`);
// Stream results
const stream = await handle.stream();
if (stream) {
for await (const chunk of stream) {
switch (chunk.type) {
case 'text_delta':
process.stdout.write(chunk.delta);
break;
case 'tool_start':
console.log(`\n[Tool: ${chunk.toolName}] Starting...`);
break;
case 'tool_end':
console.log(`[Tool: ${chunk.toolName}] Done`);
break;
case 'error':
console.error(`Error: ${chunk.error}`);
break;
}
}
}
// Get final result
const result = await handle.result();
if (result.status === 'completed' && result.output) {
console.log('\nResearch Output:');
console.log(`Topic: ${result.output.topic}`);
console.log(`Key Findings: ${result.output.keyFindings.join(', ')}`);
console.log(`Summary: ${result.output.summary}`);
}
}
main();
```

Key Patterns
Dynamic System Prompts
The system prompt is a function that receives current state:
```typescript
systemPrompt: (state) => `...
Current notes: ${state?.notes?.length ?? 0}
...`;
```

This keeps the LLM informed of progress without requiring explicit state queries.
State Updates in Tools
Tools can read and update state:
```typescript
execute: async (input, context) => {
// Read current state
const state = context.getState<ResearchState>();
// Update state (Immer draft)
context.updateState<ResearchState>((draft) => {
draft.notes.push({ content: input.content });
});
},
```

Custom Events
Tools can emit events for monitoring:
```typescript
await context.emit('search_started', { query });
// ... do work ...
await context.emit('search_completed', { resultCount });
```

Structured Output
The outputSchema triggers automatic __finish__ tool injection:
```typescript
outputSchema: ResearchOutputSchema,
// LLM will call __finish__({ topic, keyFindings, sources, summary })
```

Testing
```typescript
// src/__tests__/basic.test.ts
import { describe, it, expect } from 'vitest';
import { MockLLMAdapter } from '@helix-agents/core';
import { JSAgentExecutor } from '@helix-agents/runtime-js';
import { InMemoryStateStore, InMemoryStreamManager } from '@helix-agents/store-memory';
import { ResearchAssistantAgent } from '../agent.js';
describe('ResearchAssistant', () => {
it('should complete research', async () => {
const mock = new MockLLMAdapter([
// Step 1: Search
{
type: 'tool_calls',
toolCalls: [{ id: 'tc1', name: 'web_search', arguments: { query: 'AI' } }],
},
// Step 2: Take notes
{
type: 'tool_calls',
toolCalls: [{ id: 'tc2', name: 'take_notes', arguments: { content: 'AI is...' } }],
},
// Step 3: Finish
{
type: 'structured_output',
output: {
topic: 'AI',
keyFindings: ['Finding 1'],
sources: ['https://example.com'],
summary: 'AI summary',
},
},
]);
const executor = new JSAgentExecutor(
new InMemoryStateStore(),
new InMemoryStreamManager(),
mock
);
const handle = await executor.execute(ResearchAssistantAgent, 'Research AI');
const result = await handle.result();
expect(result.status).toBe('completed');
expect(result.output?.topic).toBe('AI');
});
});
```
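The test file imports from vitest, so assuming it is installed as a dev dependency the suite can be run from the example directory with the standard CLI:

```bash
npx vitest run
```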
Next Steps

- Temporal Example - Durable execution
- Cloudflare Example - Edge deployment
- Custom Loop - Build your own executor