# Finishing Agents

Agents need a way to signal completion and return structured output. Helix provides two mechanisms for this: the auto-injected `__finish__` tool and user-defined `finishWith` tools.
## The Two Completion Mechanisms

### `__finish__` Tool (Auto-Injected)

When you define an agent with an `outputSchema`, Helix automatically injects a `__finish__` tool:
```typescript
const agent = defineAgent({
  name: 'analyzer',
  systemPrompt: 'Analyze the input and return results',
  llmConfig: { model: openai('gpt-4o') },
  outputSchema: z.object({
    sentiment: z.enum(['positive', 'negative', 'neutral']),
    confidence: z.number(),
  }),
  // __finish__ tool is auto-injected with schema from outputSchema
});
```

The LLM calls `__finish__` with data matching your `outputSchema` to complete the agent:

```typescript
// LLM calls: __finish__({ sentiment: 'positive', confidence: 0.95 })
// Agent completes with output: { sentiment: 'positive', confidence: 0.95 }
```

Characteristics of `__finish__`:
- Auto-generated from `outputSchema`
- No side effects (just captures output)
- Simple and straightforward
- Good for most use cases
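Conceptually, the injected tool amounts to a no-op tool that echoes its arguments back as the agent's output. The sketch below is an illustrative model only; the names and shape are assumptions, not Helix internals:

```typescript
// Illustrative model of the auto-injected __finish__ tool: it performs no
// side effects and simply returns its input as the agent's final output.
// This is a conceptual sketch, not the actual Helix implementation.
type AnalyzerOutput = {
  sentiment: 'positive' | 'negative' | 'neutral';
  confidence: number;
};

const finishTool = {
  name: '__finish__',
  // The arguments the LLM passes become the agent output, unchanged.
  execute: (input: AnalyzerOutput): AnalyzerOutput => input,
};

const agentOutput = finishTool.execute({ sentiment: 'positive', confidence: 0.95 });
// agentOutput is { sentiment: 'positive', confidence: 0.95 }
```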
### `finishWith` Tools (User-Defined)

For cases where you need side effects when completing—like saving to a database, sending notifications, or performing validation—use `finishWith` tools:
```typescript
const submitAnswerTool = defineTool({
  name: 'submit_answer',
  description: 'Submit the final answer after verification',
  inputSchema: z.object({
    answer: z.string(),
    verified: z.boolean(),
  }),
  outputSchema: z.object({
    result: z.string(),
    submittedAt: z.string(),
  }),
  finishWith: true, // <-- This makes it a finishWith tool
  execute: async (input, context) => {
    // Side effects execute here
    await saveToDatabase(input.answer);
    await sendNotification(`Answer submitted: ${input.answer}`);
    return {
      result: input.answer,
      submittedAt: new Date().toISOString(),
    };
  },
});
```

Characteristics of `finishWith` tools:
- User-defined with custom logic
- Can perform side effects (API calls, DB writes, etc.)
- The `execute` function runs before completion
- The tool output becomes the agent output (or is transformed first)
### Mutual Exclusivity

**Important:** When an agent has one or more `finishWith` tools, the `__finish__` tool is NOT injected.
```typescript
// Agent with finishWith tool - NO __finish__ injected
const agentWithFinishWith = defineAgent({
  name: 'submission-agent',
  tools: [submitAnswerTool], // finishWith: true
  outputSchema: OutputSchema,
  // Tools available to LLM: [submit_answer]
  // __finish__ is NOT added
});

// Agent without finishWith tool - __finish__ IS injected
const agentWithoutFinishWith = defineAgent({
  name: 'simple-agent',
  tools: [searchTool], // finishWith: false (default)
  outputSchema: OutputSchema,
  // Tools available to LLM: [search, __finish__]
});
```

This is intentional: if you define a `finishWith` tool, you want the LLM to use YOUR tool to complete, not the generic `__finish__`.
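The injection rule can be pictured as a small pure function over the tool list. This is an illustrative sketch only; `availableTools` and `ToolDef` are hypothetical names, not part of the Helix API:

```typescript
// Sketch of the injection rule described above (illustrative only):
// __finish__ is added exactly when an outputSchema exists and no tool
// has finishWith set.
type ToolDef = { name: string; finishWith?: boolean };

function availableTools(tools: ToolDef[], hasOutputSchema: boolean): string[] {
  const hasFinishWith = tools.some((t) => t.finishWith === true);
  const names = tools.map((t) => t.name);
  if (hasOutputSchema && !hasFinishWith) names.push('__finish__');
  return names;
}

availableTools([{ name: 'search' }], true);
// ['search', '__finish__']
availableTools([{ name: 'submit_answer', finishWith: true }], true);
// ['submit_answer']
```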
### `finishWithTransform`

When your `finishWith` tool's output doesn't match the agent's `outputSchema`, use `finishWithTransform` to map the output:
```typescript
const processDataTool = defineTool({
  name: 'process_data',
  description: 'Process and submit the data',
  inputSchema: z.object({
    rawData: z.string(),
    multiplier: z.number().optional(),
  }),
  outputSchema: z.object({
    // Tool returns this shape
    rawData: z.string(),
    multiplier: z.number().optional(),
    processedAt: z.string(),
  }),
  finishWith: true,
  finishWithTransform: (toolOutput) => ({
    // Transform to agent's outputSchema
    result: toolOutput.rawData.toUpperCase(),
    score: toolOutput.multiplier ?? 1,
  }),
  execute: async (input) => {
    // Process the data
    return {
      rawData: input.rawData,
      multiplier: input.multiplier,
      processedAt: new Date().toISOString(),
    };
  },
});

const agent = defineAgent({
  name: 'processor',
  tools: [processDataTool],
  outputSchema: z.object({
    // Agent output schema
    result: z.string(),
    score: z.number(),
  }),
});
```

Flow:
1. LLM calls `process_data({ rawData: 'hello', multiplier: 5 })`
2. `execute()` runs and returns `{ rawData: 'hello', multiplier: 5, processedAt: '...' }`
3. `finishWithTransform()` maps that to `{ result: 'HELLO', score: 5 }`
4. Agent completes with output `{ result: 'HELLO', score: 5 }`
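The transform step amounts to a simple function application between the tool's return value and the agent's output. A minimal sketch of that step, using hypothetical names (`completeWith` is not a Helix API):

```typescript
// Sketch of the completion step for a finishWith tool (hypothetical helper;
// not the actual Helix internals). The tool's return value is passed through
// finishWithTransform when one is defined, otherwise used as-is.
type ToolOutput = { rawData: string; multiplier?: number; processedAt: string };
type AgentOutput = { result: string; score: number };

function completeWith(
  toolOutput: ToolOutput,
  transform?: (o: ToolOutput) => AgentOutput
): AgentOutput | ToolOutput {
  return transform ? transform(toolOutput) : toolOutput;
}

// Mirrors the flow above: execute() returned the raw shape, and the
// transform maps it to the agent's outputSchema shape.
const output = completeWith(
  { rawData: 'hello', multiplier: 5, processedAt: '2024-01-01T00:00:00Z' },
  (o) => ({ result: o.rawData.toUpperCase(), score: o.multiplier ?? 1 })
) as AgentOutput;
// output is { result: 'HELLO', score: 5 }
```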
## When to Use Each Approach

Use `__finish__` (no `finishWith` tools) when:
- You just need structured output with no side effects
- The output comes directly from LLM reasoning
- You want the simplest setup
```typescript
// Simple case: LLM analyzes and returns result
const analyzer = defineAgent({
  name: 'analyzer',
  systemPrompt: 'Analyze the text and determine sentiment',
  outputSchema: z.object({
    sentiment: z.enum(['positive', 'negative', 'neutral']),
    reasoning: z.string(),
  }),
  // LLM will call __finish__({ sentiment: 'positive', reasoning: '...' })
});
```

Use `finishWith` tools when:
- You need side effects on completion (save, send, validate)
- You want custom validation before completing
- You need to transform or enrich the output
- You want explicit control over the completion flow
```typescript
// Complex case: Save results and notify
const submissionTool = defineTool({
  name: 'submit_results',
  description: 'Submit final results to the system',
  inputSchema: z.object({
    findings: z.array(z.string()),
    confidence: z.number(),
  }),
  finishWith: true,
  execute: async (input, context) => {
    // Validate
    if (input.confidence < 0.5) {
      throw new Error('Confidence too low. Please gather more data.');
    }

    // Save to database
    const id = await db.results.create({ data: input });

    // Send notification
    await notify(`Results submitted: ${id}`);

    // Update state for logging
    context.updateState<{ submittedAt: string }>((draft) => {
      draft.submittedAt = new Date().toISOString();
    });

    return {
      id,
      ...input,
    };
  },
});
```

### Multiple finishWith Tools
You can define multiple `finishWith` tools when there are different ways to complete:
```typescript
const approveWithCommentsTool = defineTool({
  name: 'approve_with_comments',
  description: 'Approve the submission with reviewer comments',
  inputSchema: z.object({
    comments: z.string(),
  }),
  finishWith: true,
  execute: async (input) => {
    await updateStatus('approved');
    return { status: 'approved', comments: input.comments };
  },
});

const rejectTool = defineTool({
  name: 'reject',
  description: 'Reject the submission with reason',
  inputSchema: z.object({
    reason: z.string(),
  }),
  finishWith: true,
  execute: async (input) => {
    await updateStatus('rejected');
    return { status: 'rejected', reason: input.reason };
  },
});

const reviewer = defineAgent({
  name: 'reviewer',
  tools: [approveWithCommentsTool, rejectTool],
  outputSchema: z.object({
    status: z.enum(['approved', 'rejected']),
    comments: z.string().optional(),
    reason: z.string().optional(),
  }),
});
```

### Parallel Execution: First Wins
If the LLM calls multiple `finishWith` tools in parallel, the first one (by array order) determines the output:
```typescript
// LLM calls both in parallel:
// - approve_with_comments({ comments: 'Good work!' })
// - reject({ reason: 'Missing data' })
// Result: approve_with_comments wins (first in tool order)
```

## Error Handling
### When finishWith Execute Throws

If a `finishWith` tool's `execute` function throws an error, the agent does NOT complete. The error is reported back to the LLM, which can try again or use a different approach:
```typescript
const submitTool = defineTool({
  name: 'submit',
  inputSchema: z.object({ data: z.string() }),
  finishWith: true,
  execute: async (input) => {
    if (input.data.length < 10) {
      throw new Error('Data too short. Please provide more detail.');
    }
    return { result: input.data };
  },
});

// LLM calls: submit({ data: 'Hi' })
// Error: "Data too short. Please provide more detail."
// LLM sees error and can call: submit({ data: 'A longer and more detailed response' })
// Agent completes successfully
```

### When finishWithTransform Throws
If `finishWithTransform` throws, the agent fails:
```typescript
const tool = defineTool({
  name: 'submit',
  finishWith: true,
  finishWithTransform: (output) => {
    if (!output.valid) {
      throw new Error('Invalid output'); // Agent fails
    }
    return { result: output.data };
  },
  execute: async (input) => ({ data: input.data, valid: false }),
});
```

**Best Practice:** Keep `finishWithTransform` pure and simple. Put validation logic in `execute`.
## System Prompt Behavior

The framework automatically updates the system prompt based on which completion mechanism is available:
With `__finish__`:

```text
## Output Requirement
This task requires structured output. You MUST complete your work by calling
the `__finish__` tool. This tool will process your output and complete the task.
DO NOT use any other method to return your final answer.
```

With a `finishWith` tool:

```text
## Output Requirement
This task requires structured output. You MUST complete your work by calling
the `submit_answer` tool. This tool will process your output and complete the task.
DO NOT use any other method to return your final answer.
```

The LLM is instructed to use the correct tool based on what's available.
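This section can be thought of as a template over the finishing tool's name. The following is a plausible sketch only; `outputRequirement` is a hypothetical helper, not Helix's actual prompt builder:

```typescript
// Hypothetical sketch: build the Output Requirement section from whichever
// finishing tool is available (__finish__ or a user-defined finishWith tool).
function outputRequirement(finishToolName: string): string {
  return [
    '## Output Requirement',
    'This task requires structured output. You MUST complete your work by calling',
    `the \`${finishToolName}\` tool. This tool will process your output and complete the task.`,
    'DO NOT use any other method to return your final answer.',
  ].join('\n');
}

const withFinish = outputRequirement('__finish__');
const withCustom = outputRequirement('submit_answer');
```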
## State Mutations in finishWith Tools

State changes made in `finishWith` tools are persisted:
```typescript
const submitTool = defineTool({
  name: 'submit',
  finishWith: true,
  execute: async (input, context) => {
    // This state change is saved
    context.updateState<{ lastSubmission: string }>((draft) => {
      draft.lastSubmission = input.data;
    });
    return { result: input.data };
  },
});
```

This is useful for:
- Recording completion metadata
- Tracking when/how the agent completed
- Enabling conversation continuation with context
## Testing finishWith Tools

### Unit Testing the Tool
```typescript
// Note: `vi` must be imported alongside the test helpers for vi.fn()
import { describe, it, expect, vi } from 'vitest';

describe('submitTool', () => {
  it('should execute side effects and return output', async () => {
    const mockContext = {
      getState: () => ({}),
      updateState: vi.fn(),
      emit: vi.fn(),
      abortSignal: new AbortController().signal,
    };

    const result = await submitTool.execute(
      { answer: 'test' },
      mockContext as any
    );

    expect(result).toEqual({ result: 'test' });
  });
});
```

### Integration Testing with MockLLM
```typescript
import { MockLLMAdapter, defineAgent } from '@helix-agents/core';
import { JSAgentExecutor } from '@helix-agents/runtime-js';
import { InMemoryStateStore, InMemoryStreamManager } from '@helix-agents/store-memory';

describe('finishWith integration', () => {
  it('should complete agent via finishWith tool', async () => {
    const mockLLM = new MockLLMAdapter();
    const executor = new JSAgentExecutor(
      new InMemoryStateStore(),
      new InMemoryStreamManager(),
      mockLLM
    );

    // Configure mock to call finishWith tool
    mockLLM.addResponse({
      type: 'tool_calls',
      toolCalls: [{
        id: 'tool-1',
        name: 'submit_answer',
        arguments: { answer: 'The answer' },
      }],
    });

    const handle = await executor.execute(agentWithFinishWith, 'Question');
    const result = await handle.result();

    expect(result.status).toBe('completed');
    expect(result.output).toEqual({ result: 'The answer' });

    // Verify __finish__ was NOT in tools
    const input = mockLLM.getLastInput();
    const toolNames = input.tools.map(t => t.name);
    expect(toolNames).not.toContain('__finish__');
    expect(toolNames).toContain('submit_answer');
  });
});
```

## Complete Example
```typescript
import { defineAgent, defineTool } from '@helix-agents/sdk';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

// Output schema
const OutputSchema = z.object({
  analysis: z.string(),
  confidence: z.number(),
  savedId: z.string().optional(),
});

// finishWith tool with side effects
const submitAnalysisTool = defineTool({
  name: 'submit_analysis',
  description: 'Submit the final analysis after validation',
  inputSchema: z.object({
    analysis: z.string().min(50, 'Analysis must be at least 50 characters'),
    confidence: z.number().min(0).max(1),
    saveToDb: z.boolean().default(true),
  }),
  outputSchema: z.object({
    analysis: z.string(),
    confidence: z.number(),
    savedId: z.string().optional(),
  }),
  finishWith: true,
  execute: async (input, context) => {
    // Validation
    if (input.confidence < 0.3) {
      throw new Error('Confidence too low. Please gather more evidence.');
    }

    let savedId: string | undefined;

    // Side effect: save to database
    if (input.saveToDb) {
      savedId = await database.analyses.create({
        data: {
          content: input.analysis,
          confidence: input.confidence,
          agentId: context.agentId,
        },
      });

      // Emit event for streaming consumers
      await context.emit('analysis_saved', { id: savedId });
    }

    // Update state for logging
    context.updateState<{ lastSavedId: string | null }>((draft) => {
      draft.lastSavedId = savedId ?? null;
    });

    return {
      analysis: input.analysis,
      confidence: input.confidence,
      savedId,
    };
  },
});

// Regular research tool
const searchTool = defineTool({
  name: 'search',
  description: 'Search for information',
  inputSchema: z.object({ query: z.string() }),
  outputSchema: z.object({ results: z.array(z.string()) }),
  execute: async (input) => {
    const results = await performSearch(input.query);
    return { results };
  },
});

// Agent definition
const AnalysisAgent = defineAgent({
  name: 'analysis-agent',
  systemPrompt: `You are a research analyst.
Use the search tool to gather information.
When ready, call submit_analysis with your findings.
Ensure confidence is above 0.3 before submitting.`,
  tools: [searchTool, submitAnalysisTool],
  stateSchema: z.object({
    lastSavedId: z.string().nullable().default(null),
  }),
  outputSchema: OutputSchema,
  llmConfig: {
    model: openai('gpt-4o'),
    temperature: 0.7,
  },
});
```

## Next Steps
- Defining Tools - Learn more about tool creation
- State Management - Manage state in finishWith tools
- Streaming - Stream events from finishWith tools
- Hooks - Use `afterTool` hooks to observe finishWith execution