In-Memory Workspace
The InMemoryWorkspace is the simplest workspace provider — files live in a JavaScript Map inside the executor process. No persistence, no I/O, no external dependencies.
When to use
- Tests. Smoke tests for agents that touch the filesystem, where you don't want to manage tmpdirs or container state.
- Dev loops. Quick iteration before deciding which real provider you'll deploy with.
- Ephemeral agents. Agents that don't need files to outlive the session (e.g., a one-shot summarizer that writes a draft, finishes, and discards).
If you need anything to persist past the executor's lifetime, use one of the other providers.
Capabilities supported
| Capability | Supported |
|---|---|
| fs | ✅ |
| shell | ❌ |
| code | ❌ |
| snapshot | ❌ |
The provider advertises only fs: true on its WorkspaceRef.capabilities. Declaring a capability marked ❌ above causes WorkspaceFailedError at session start (the framework asserts that config.capabilities ⊆ ref.capabilities and that each declared module is present on the returned workspace). See the error-model table on the workspaces overview for the full classification.
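The subset assertion can be pictured with a small standalone check. This is a hedged sketch with hypothetical helper names (`isSubset`, `Capabilities`); the framework's actual internals are not shown here:

```typescript
// Hypothetical sketch of the capability assertion described above.
// The real framework's types and function names may differ.
type Capabilities = { fs?: boolean; shell?: boolean; code?: boolean; snapshot?: boolean };

// True when every capability declared in `declared` is also advertised in `advertised`.
function isSubset(declared: Capabilities, advertised: Capabilities): boolean {
  return (Object.keys(declared) as (keyof Capabilities)[])
    .filter((k) => declared[k])
    .every((k) => advertised[k] === true);
}

const inMemoryAdvertises: Capabilities = { fs: true };

isSubset({ fs: true }, inMemoryAdvertises);              // true: {fs} ⊆ {fs}
isSubset({ fs: true, shell: true }, inMemoryAdvertises); // false → WorkspaceFailedError at session start
```

Declaring `shell`, `code`, or `snapshot` against this provider fails the check before any tool runs, which is why the mismatch surfaces at session start rather than mid-run.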
Install
```shell
npm install @helix-agents/workspace-memory
```
Provider config
```typescript
interface InMemoryWorkspaceConfig {
  kind: 'in-memory';
}
```
No options. Discriminator only.
Provider options
```typescript
interface InMemoryWorkspaceProviderOptions {
  /** Optional Logger. Defaults to noopLogger. */
  logger?: Logger;
  /**
   * When true, IDs use a deterministic monotonic counter only (`inmem-<n>`).
   * Intended for tests that snapshot generated IDs. When false or absent
   * (the production default), IDs additionally include a 6-byte random hex
   * suffix to guarantee uniqueness across processes — different DOs / Node
   * workers might otherwise generate the same numeric counter at the same
   * wall-clock millisecond and collide on downstream lookup keys.
   */
  deterministicIds?: boolean;
}
```
The counter is instance-scoped: two InMemoryWorkspaceProvider instances in the same process do NOT share a counter. Tests that construct fresh providers start each from inmem-1.
The 6-byte random suffix closes a cross-process collision vector that the bare counter alone could not. The randomized default is right for production; set deterministicIds: true only in tests that need stable ID snapshots.
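The ID scheme described above can be sketched as follows. This is illustrative only (the class name and shape are invented for this example; the provider's real source is not shown):

```typescript
import { randomBytes } from 'node:crypto';

// Illustrative sketch of the ID scheme described above, not the provider's code.
class IdGenerator {
  private counter = 0; // instance-scoped: a fresh generator restarts at inmem-1

  constructor(private deterministic: boolean = false) {}

  next(): string {
    this.counter += 1;
    if (this.deterministic) return `inmem-${this.counter}`; // stable for test snapshots
    // Production default: append 6 random bytes (12 hex chars) so two
    // processes that reach the same counter value cannot collide.
    return `inmem-${this.counter}-${randomBytes(6).toString('hex')}`;
  }
}

new IdGenerator(true).next(); // "inmem-1"
new IdGenerator().next();     // e.g. "inmem-1-a3f09c1b5d42"
```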
Wiring
```typescript
import { defineAgent } from '@helix-agents/core';
import { JSAgentExecutor } from '@helix-agents/runtime-js';
import { InMemoryStateStore, InMemoryStreamManager } from '@helix-agents/store-memory';
import { InMemoryWorkspaceProvider } from '@helix-agents/workspace-memory';

const agent = defineAgent({
  name: 'my-agent',
  llmConfig: { model: yourModel },
  workspaces: {
    notes: {
      provider: { kind: 'in-memory' },
      capabilities: { fs: true },
    },
  },
});

const executor = new JSAgentExecutor(
  new InMemoryStateStore(),
  new InMemoryStreamManager(),
  yourLLMAdapter,
  {
    workspaceProviders: new Map([
      ['in-memory', new InMemoryWorkspaceProvider()],
    ]),
  }
);
```
The map key ('in-memory') must match the provider's providerId AND the discriminator kind you declared in the agent config.
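The three-way match (map key, providerId, config kind) boils down to a Map lookup at session start. A minimal sketch with hypothetical names (`resolveProvider` and the pared-down `WorkspaceProvider` shape are invented here; the framework's real signatures may differ):

```typescript
// Minimal sketch of the provider lookup the executor performs.
// Hypothetical shape, not the framework's actual code.
interface WorkspaceProvider {
  providerId: string;
}

function resolveProvider(
  providers: Map<string, WorkspaceProvider>,
  declaredKind: string, // the `kind` discriminator from the agent config
): WorkspaceProvider {
  const provider = providers.get(declaredKind); // map key must equal config kind
  if (!provider) {
    throw new Error(`No workspace provider registered for kind '${declaredKind}'`);
  }
  if (provider.providerId !== declaredKind) {
    throw new Error(
      `Provider id '${provider.providerId}' does not match map key '${declaredKind}'`,
    );
  }
  return provider;
}

const providers = new Map([['in-memory', { providerId: 'in-memory' }]]);
resolveProvider(providers, 'in-memory'); // resolves; a typo in any of the three names throws
```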
Lifecycle
- open() — creates a fresh InMemoryWorkspace. Cheap; no I/O.
- resolve() — throws. In-memory workspaces are ephemeral; their state is lost on process restart. Don't use this provider with runtimes that need to survive restarts (Cloudflare DOs, Temporal workflows). Use CloudflareFileStoreWorkspace or LocalBashWorkspace instead.
- close() — releases the in-memory Map so the garbage collector can reclaim it.
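The lifecycle semantics can be sketched with a plain Map (illustrative class, not the provider's actual source):

```typescript
// Illustrative lifecycle sketch: files live in a Map, open() is cheap,
// close() drops the reference so the GC can reclaim it. Not real framework code.
class SketchInMemoryWorkspace {
  private files: Map<string, string> | null = new Map(); // created by open(): no I/O

  write(path: string, content: string): void {
    if (!this.files) throw new Error('workspace closed');
    this.files.set(path, content);
  }

  read(path: string): string | undefined {
    if (!this.files) throw new Error('workspace closed');
    return this.files.get(path);
  }

  close(): void {
    this.files = null; // nothing to flush: no persistence, so resolve() has nothing to restore
  }
}

const ws = new SketchInMemoryWorkspace();
ws.write('/draft.md', 'hello');
ws.read('/draft.md'); // "hello"
ws.close();           // state is gone for good
```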
Observability
The provider accepts an optional Logger from @helix-agents/core so workspace-side events surface in your logging pipeline (pino, winston, console, etc.):
```typescript
import { consoleLogger } from '@helix-agents/core';

new InMemoryWorkspaceProvider({ logger: consoleLogger });
```
The in-memory provider has no security boundary today, so the logger is wired for symmetry with the other providers: it carries forward events the framework adds in future hardening passes (e.g. policy violations) without an API change. Defaults to silent (noopLogger).
Using the workspace from a custom tool
```typescript
import { defineTool } from '@helix-agents/core';
import { z } from 'zod';

const dumpNotes = defineTool({
  name: 'dump_notes',
  parameters: z.object({}),
  execute: async (_input, ctx) => {
    const ws = await ctx.workspaces!.get('notes');
    if (!ws.fs) throw new Error('notes workspace requires fs capability');
    const entries = await ws.fs.ls('/');
    return { count: entries.length, paths: entries.map((e) => e.path) };
  },
});
```
ctx.workspaces is optional on ToolContext — the ! is appropriate when you've configured workspaces on this agent. See the shared pattern on the overview page for the rationale.
Inspecting a workspace
State is process-local and disappears on restart, so the only way to inspect contents is from inside the process. Add a custom debug tool that lists the workspace via ws.fs!.ls('/') (or your preferred traversal), or expose it through your agent's stream events. There is no host-side artefact to peek at.
Mid-run inspection (active sessions)
You cannot inspect an in-memory workspace from outside the process — the state lives in a JavaScript Map reachable only through the agent's WorkspaceRegistry. The only mid-run-safe path is the custom debug tool described above. After completion the same tool still works, or you can print the contents to logs at session end.
Auto-injected tools
Every fs tool is offered to the LLM. See the FileSystem module page for full schemas.
- workspace__<name>__read_file(path)
- workspace__<name>__write_file(path, content)
- workspace__<name>__edit_file(path, oldText, newText)
- workspace__<name>__ls(path)
- workspace__<name>__glob(pattern)
- workspace__<name>__grep(pattern, opts?)
- workspace__<name>__stat(path)
- workspace__<name>__mkdir(path, opts?)
- workspace__<name>__rm(path, opts?)
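The naming scheme is mechanical: workspace name sandwiched between double underscores. A hedged sketch of how such names could be derived (`workspaceToolNames` is an illustrative helper, not framework API):

```typescript
// Illustrative: derive the auto-injected tool names for one workspace.
// The framework generates these itself; this helper only mirrors the pattern.
const FS_OPS = [
  'read_file', 'write_file', 'edit_file', 'ls',
  'glob', 'grep', 'stat', 'mkdir', 'rm',
] as const;

function workspaceToolNames(workspaceName: string): string[] {
  return FS_OPS.map((op) => `workspace__${workspaceName}__${op}`);
}

workspaceToolNames('notes')[0]; // "workspace__notes__read_file"
```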
Capacity & performance
These are approximate ranges; benchmark for your workload.
| Dimension | Approximate range | Notes |
|---|---|---|
| Per-process bound | RAM-limited (typically MB to low-GB) | All workspace state lives in a single Map per workspace inside the executor process. |
| Typical small-file scale | ~10K small files / ~MB-scale | Fine for most agent flows; glob/grep are O(N) over the map. |
| FS op latency | Sub-millisecond | Pure JS, no I/O. |
| Cross-process sharing | None | Process-local. Two parallel agent runs do NOT share state. |
| Production multi-DO | Not recommended | State is lost on every process restart and not shared across DO instances. |
For anything beyond tests / dev / single-process ephemeral use, switch to a durable provider — see the table below.
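The O(N) note in the table above follows from the data structure: with no index over file contents, every grep (and glob) must scan all map entries. A minimal sketch under that assumption (illustrative only; the real module's matching rules may differ):

```typescript
// Why grep is O(N) here: no index, so every query walks the whole Map.
// Illustrative sketch, not the FileSystem module's implementation.
function grepSketch(files: Map<string, string>, pattern: RegExp): string[] {
  const hits: string[] = [];
  for (const [path, content] of files) { // full scan: O(N) in file count
    if (pattern.test(content)) hits.push(path);
  }
  return hits;
}

const files = new Map([
  ['/a.ts', 'export const x = 1;'],
  ['/b.md', 'notes about x'],
  ['/c.ts', 'nothing here'],
]);
grepSketch(files, /export/); // ["/a.ts"]
```

At the ~10K-small-file scale in the table this scan is still sub-millisecond territory; it only becomes a concern well beyond the provider's intended use.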
Limitations + when to switch
| You need... | Switch to |
|---|---|
| File state to survive process restart | Local Bash (POSIX dev) or Cloudflare Filestore (CF prod) |
| Shell command execution | Local Bash (POSIX dev) or Cloudflare Sandbox (CF prod) |
| Code interpreter | Cloudflare Sandbox |
| Snapshot / branch | Cloudflare Sandbox |
| Files visible across multiple agent instances | None of the v1 providers — workspaces are session-scoped |