Status: In Design (supported in userland today); needs a syntax change
Core Goal: Solve the N+1 problem at the engine level, allowing tools to process arrays of inputs without manual DataLoader setup.
1. The Current State: Userland Batching
Currently, developers can solve N+1 issues by injecting a DataLoader into the context. This works because the Bridge engine executes shadow trees concurrently.
How it works today:
- Instantiate: A `DataLoader` is created per request and added to the context.
- Tool Call: The engine calls the tool 100 times for an array of 100 items.
- Intercept: The tool function receives the scalar input (e.g., `{ id: "1" }`) and manually calls `loader.load(id)`.
- Resolve: The `DataLoader` batches these calls into one DB/API request.
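The userland pattern can be sketched with a minimal per-request loader, a simplified stand-in for `DataLoader` (the `MiniLoader`, `fakeDb`, and `getUser` names below are illustrative, not part of the Bridge API):

```javascript
// Minimal sketch of the userland batching pattern.
// A per-request loader collects keys during one microtask tick,
// then issues a single batched query for all of them.
class MiniLoader {
  constructor(batchFn) {
    this.batchFn = batchFn; // (keys) => Promise<results in same order>
    this.queue = [];
  }

  load(key) {
    return new Promise((resolve, reject) => {
      // First load of this tick schedules a flush for the next microtask,
      // so every concurrent caller gets queued before we fetch.
      if (this.queue.length === 0) {
        queueMicrotask(() => this.flush());
      }
      this.queue.push({ key, resolve, reject });
    });
  }

  async flush() {
    const batch = this.queue;
    this.queue = [];
    try {
      const results = await this.batchFn(batch.map(b => b.key));
      batch.forEach((b, i) => b.resolve(results[i]));
    } catch (err) {
      batch.forEach(b => b.reject(err));
    }
  }
}

// Hypothetical tool: each call loads one user, but all concurrent
// calls in the same tick collapse into one batched fetch.
const fakeDb = async (ids) => ids.map(id => ({ id, name: `user-${id}` }));
const loader = new MiniLoader(fakeDb);
const getUser = async (input) => loader.load(input.id);
```

This is exactly the bookkeeping (per-request instantiation, manual `load` calls) that native batch support moves into the engine.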
2. The Future: Native batch Support
```javascript
// Define the implementation: receives an array of calls, not a single input
const fetchUsersImpl = async (calls) => {
  const ids = calls.map(c => c.input.id);
  const users = await db.users.findMany({ where: { id: { in: ids } } });
  // Return results in the same order as the incoming calls
  return ids.map(id => users.find(u => u.id === id));
};

// Attach the "Batch Contract" metadata
fetchUsersImpl.orchestration = {
  mode: 'batch',
  maxBatchSize: 100,
  flushIntervalMs: 0 // Default to next-tick
};

export const fetchUsers = fetchUsersImpl;
```
The User Experience (Clean Implementation)
The developer no longer needs to know about "Loaders" or "Context." They simply write a function that handles an array of inputs.
3. Engine Implementation: The Batching Buffer
To support this, the ExecutionTree will be upgraded with a Microtask Buffer.
- Queueing: When `callTool` hits a tool whose `orchestration.mode` is `'batch'`, it generates a unique promise and pushes the input into a "Wait Room" for that tool.
- The Flush: The engine uses `process.nextTick` or `setImmediate` to wait for the current execution cycle to finish, gathering all parallel requests.
- Execution: The engine flushes the "Wait Room," calls the tool implementation once with the gathered array, and then resolves the 100 individual promises with the resulting data.
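The queue/flush/resolve cycle above can be sketched as follows. This is a hypothetical illustration of the engine internals, not the actual Bridge implementation; it also omits `maxBatchSize` and `flushIntervalMs` handling for brevity:

```javascript
// Hypothetical "Wait Room" buffer: one pending-call list per batch tool.
const waitRooms = new Map(); // toolImpl -> [{ input, resolve, reject }]

function callTool(toolImpl, input) {
  if (toolImpl.orchestration?.mode !== 'batch') {
    return toolImpl(input); // non-batched tools run immediately
  }
  // Queueing: each caller gets its own promise; the input joins the room.
  return new Promise((resolve, reject) => {
    let room = waitRooms.get(toolImpl);
    if (!room) {
      room = [];
      waitRooms.set(toolImpl, room);
      // The Flush: wait one tick so every parallel caller is queued first.
      setImmediate(() => flush(toolImpl));
    }
    room.push({ input, resolve, reject });
  });
}

async function flush(toolImpl) {
  const room = waitRooms.get(toolImpl);
  waitRooms.delete(toolImpl);
  try {
    // Execution: one invocation with the gathered array of { input } records,
    // then fan the ordered results back out to the individual promises.
    const results = await toolImpl(room.map(({ input }) => ({ input })));
    room.forEach((entry, i) => entry.resolve(results[i]));
  } catch (err) {
    room.forEach(entry => entry.reject(err));
  }
}
```

Because the room is keyed per tool and cleared on each flush, isolation between requests falls out of the engine's own lifecycle rather than the user's context setup.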
4. Comparison of Approaches
| Feature | Userland (2.0) | Native (2.1) |
| --- | --- | --- |
| Tool Code | Imperative (`loader.load`) | Declarative (pure logic) |
| Setup | Manual per-request context | Automatic via tool config metadata in the engine |
| Observability | OTel sees 100 separate "Tool" spans | OTel sees one "Batch Tool" span |
| Safety | User must manage cache isolation | Engine ensures per-request isolation |