docs(ai-chat): action turns become side-effect-only
Update the actions section of backend.mdx to reflect the new
chat.agent semantics: actions fire hydrateMessages + onAction only,
no turn lifecycle hooks, no run(). Documents the new return shapes
on onAction (void / StreamTextResult / string / UIMessage) and adds
a regenerate-from-here example for the model-response case.
Closes TRI-9118 (docs portion).
Custom actions let the frontend send structured commands (undo, rollback, edit) that modify the conversation state. **Actions are not turns** — they fire `hydrateMessages` (if set) and `onAction` only. No turn lifecycle hooks (`onTurnStart` / `prepareMessages` / `onBeforeTurnComplete` / `onTurnComplete`), no `run()`, no turn-counter increment. The trace span is named `chat action`.
Define an `actionSchema` for validation and an `onAction` handler that uses `chat.history` to modify state:
**Lifecycle flow:** Wake → parse action against `actionSchema` → `hydrateMessages` (if set) → **`onAction`** → apply `chat.history` mutations → emit `trigger:turn-complete` → wait for next message.
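The flow above can be sketched end-to-end. This is a hypothetical, self-contained model of the dispatch; `dispatchAction`, the `Action` union, and the config shape are illustrative stand-ins, not the library's real API:

```ts
type UIMessage = { role: "user" | "assistant"; content: string };

type Action = { type: "undo" } | { type: "rollback"; toIndex: number };

interface ActionConfig {
  hydrateMessages?: (history: UIMessage[]) => UIMessage[];
  onAction: (ctx: { action: Action; messages: UIMessage[]; history: UIMessage[] }) => void;
}

// Side-effect-only dispatch: hydrate, run onAction, done.
// No turn hooks, no run(), no turn-counter increment.
function dispatchAction(config: ActionConfig, history: UIMessage[], action: Action): UIMessage[] {
  const messages = config.hydrateMessages ? config.hydrateMessages(history) : history;
  config.onAction({ action, messages, history });
  return history; // mutated in place by onAction
}

const config: ActionConfig = {
  onAction: ({ action, history }) => {
    if (action.type === "undo") {
      history.splice(-1, 1); // drop the last message
    } else if (action.type === "rollback") {
      history.splice(action.toIndex); // truncate everything from toIndex on
    }
  },
};
```

The point of the sketch is the shape of the control flow: the handler mutates history in place and the dispatcher never touches any turn machinery.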
#### Returning a model response from an action

`onAction` can return a `StreamTextResult`, `string`, or `UIMessage` to produce a response. The returned stream is auto-piped to the frontend just like a normal turn — but the rest of the turn machinery (`onTurnStart`, `onTurnComplete`, etc.) still does not fire.
```ts
onAction: async ({ action, messages }) => {
  if (action.type === "regenerate") {
    chat.history.splice(-1, 1); // drop the last assistant message
    return streamText({
      model: openai("gpt-4o"),
      messages,
    });
  }
  // other actions return void → side-effect only
}
```
This is useful for actions that *both* mutate state and want a fresh model response (regenerate-from-here, retry-with-different-style). Persistence is your responsibility inside `onAction` itself — you have access to the streamed response object.
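Since persistence is your responsibility, one possible pattern is to write the assistant message back to history once the stream settles. A self-contained sketch follows; the `StreamResult` stub and `respondAndPersist` helper are illustrative (a real `StreamTextResult` from the Vercel AI SDK exposes a comparable `text` promise):

```ts
type UIMessage = { role: "user" | "assistant"; content: string };

// Stand-in for a StreamTextResult: exposes the finished text as a promise.
interface StreamResult { text: Promise<string> }

// Hypothetical helper: return the stream for auto-piping, and persist the
// finished text once it resolves. Since no onTurnComplete fires for actions,
// this callback is where the assistant message must be written back.
async function respondAndPersist(
  history: UIMessage[],
  generate: () => StreamResult,
): Promise<StreamResult> {
  const result = generate();
  result.text.then((content) => {
    history.push({ role: "assistant", content });
  });
  return result;
}
```

The design choice here is to persist on stream completion rather than before returning, so the history only records text that actually reached the client.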
The action payload is validated against `actionSchema` on the backend — invalid actions throw and surface as a stream error. The `action` parameter in `onAction` is fully typed from the schema.
<Note>
For silent state changes that should never appear as a turn (e.g. injecting background context), use [`chat.inject()`](/ai-chat/background-injection) instead. Actions are explicit user-driven mutations; injections are agent-side context updates.
</Note>