docs: ai chat.task #3226
Draft: ericallam wants to merge 1 commit into `main` from `docs/tri-7532-ai-sdk-chat-transport-and-chat-task-system` (+2,363 −0)
---
title: "Frontend"
sidebarTitle: "Frontend"
description: "Transport setup, session management, client data, and frontend patterns for AI Chat."
---

## Transport setup

Use the `useTriggerChatTransport` hook from `@trigger.dev/sdk/chat/react` to create a memoized transport instance, then pass it to `useChat`:

```tsx
import { useTriggerChatTransport } from "@trigger.dev/sdk/chat/react";
import { useChat } from "@ai-sdk/react";
import type { myChat } from "@/trigger/chat";
import { getChatToken } from "@/app/actions";

export function Chat() {
  const transport = useTriggerChatTransport<typeof myChat>({
    task: "my-chat",
    accessToken: getChatToken,
  });

  const { messages, sendMessage, stop, status } = useChat({ transport });
  // ... render UI
}
```

The transport is created once on first render and reused across re-renders. Pass a type parameter for compile-time validation of the task ID.

<Tip>
The hook keeps `onSessionChange` up to date via a ref internally, so you don't need to memoize the callback or worry about stale closures.
</Tip>

### Dynamic access tokens

For token refresh, pass a function instead of a string. It's called on each `sendMessage`:

```ts
const transport = useTriggerChatTransport({
  task: "my-chat",
  accessToken: async () => {
    const res = await fetch("/api/chat-token");
    return res.text();
  },
});
```
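If your token endpoint is rate-limited or minting tokens is expensive, the fetcher can cache between calls. A minimal sketch under stated assumptions: `createCachedTokenGetter`, the token shape, and the TTL-based response format are all hypothetical, not part of the SDK.

```ts
// Hypothetical caching wrapper around a token fetcher. It refreshes only
// when the cached token is near expiry, so sendMessage doesn't hit your
// token endpoint on every call. `fetchToken` stands in for your real
// endpoint call (e.g. fetch("/api/chat-token")).
type CachedToken = { value: string; expiresAt: number };

function createCachedTokenGetter(
  fetchToken: () => Promise<{ value: string; ttlMs: number }>,
  skewMs = 5_000 // refresh slightly before the actual expiry
) {
  let cached: CachedToken | undefined;
  return async (): Promise<string> => {
    if (cached && cached.expiresAt - skewMs > Date.now()) {
      return cached.value; // still valid, reuse
    }
    const { value, ttlMs } = await fetchToken();
    cached = { value, expiresAt: Date.now() + ttlMs };
    return value;
  };
}
```

The returned function matches the `accessToken` callback shape shown above, so it can be passed directly to `useTriggerChatTransport`.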
## Session management

### Session cleanup (frontend)

Since session creation and updates are handled server-side, the frontend only needs to handle session deletion when a run ends:

```tsx
const transport = useTriggerChatTransport<typeof myChat>({
  task: "my-chat",
  accessToken: getChatToken,
  sessions: loadedSessions, // Restored from DB on page load
  onSessionChange: (chatId, session) => {
    if (!session) {
      deleteSession(chatId); // Server action — run ended
    }
  },
});
```
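The `deleteSession` server action above is app-specific. A minimal sketch, assuming an in-memory `Map` stands in for your database and an illustrative session shape (swap in your real persistence layer and the session type your app stores):

```ts
// Hypothetical session store. In a real Next.js app these would be
// "use server" actions issuing queries against a sessions table; the
// session shape here is illustrative, not the SDK's exact type.
const sessionStore = new Map<string, { runId: string; lastEventId?: string }>();

export async function saveSession(
  chatId: string,
  session: { runId: string; lastEventId?: string }
): Promise<void> {
  sessionStore.set(chatId, session); // upsert by chat ID
}

export async function deleteSession(chatId: string): Promise<void> {
  sessionStore.delete(chatId); // run ended, drop the stale session
}
```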
### Restoring on page load

On page load, fetch both the messages and the session from your database, then pass them to `useChat` and the transport. Pass `resume: true` to `useChat` when there's an existing conversation — this tells the AI SDK to reconnect to the stream via the transport.
```tsx app/chat/[chatId]/page.tsx
"use client";

import { useEffect, useState } from "react";
import { useParams } from "next/navigation";
import { useTriggerChatTransport } from "@trigger.dev/sdk/chat/react";
import { useChat } from "@ai-sdk/react";
import { getChatToken, getChatMessages, getSession, deleteSession } from "@/app/actions";

export default function ChatPage() {
  // App Router pages don't receive arbitrary props, so read the
  // dynamic [chatId] segment from the route instead.
  const { chatId } = useParams<{ chatId: string }>();
  const [initialMessages, setInitialMessages] = useState([]);
  const [initialSession, setInitialSession] = useState(undefined);
  const [loaded, setLoaded] = useState(false);

  useEffect(() => {
    async function load() {
      const [messages, session] = await Promise.all([
        getChatMessages(chatId),
        getSession(chatId),
      ]);
      setInitialMessages(messages);
      setInitialSession(session ? { [chatId]: session } : undefined);
      setLoaded(true);
    }
    load();
  }, [chatId]);

  if (!loaded) return null;

  return (
    <ChatClient
      chatId={chatId}
      initialMessages={initialMessages}
      initialSessions={initialSession}
    />
  );
}

function ChatClient({ chatId, initialMessages, initialSessions }) {
  const transport = useTriggerChatTransport({
    task: "my-chat",
    accessToken: getChatToken,
    sessions: initialSessions,
    onSessionChange: (id, session) => {
      if (!session) deleteSession(id);
    },
  });

  const { messages, sendMessage, stop, status } = useChat({
    id: chatId,
    messages: initialMessages,
    transport,
    resume: initialMessages.length > 0, // Resume if there's an existing conversation
  });

  // ... render UI
}
```
<Info>
`resume: true` causes `useChat` to call `reconnectToStream` on the transport when the component mounts. The transport uses the session's `lastEventId` to skip past already-seen stream events, so the frontend only receives new data. Only enable `resume` when there are existing messages — for brand new chats, there's nothing to reconnect to.
</Info>

<Warning>
In React strict mode (enabled by default in Next.js dev), you may see a `TypeError: Cannot read properties of undefined (reading 'state')` in the console when using `resume`. This is a [known bug in the AI SDK](https://github.com/vercel/ai/issues/8477) caused by React strict mode double-firing the resume effect. The error is caught internally and **does not affect functionality** — streaming and message display work correctly. It only appears in development and will not occur in production builds.
</Warning>

## Client data and metadata

### Transport-level client data

Set default client data on the transport that's included in every request. When the task uses `clientDataSchema`, this is type-checked to match:

```ts
const transport = useTriggerChatTransport<typeof myChat>({
  task: "my-chat",
  accessToken: getChatToken,
  clientData: { userId: currentUser.id },
});
```

### Per-message metadata

Pass metadata with individual messages via `sendMessage`. Per-message values are merged with transport-level client data (per-message wins on conflicts):

```ts
sendMessage(
  { text: "Hello" },
  { metadata: { model: "gpt-4o", priority: "high" } }
);
```
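The documented merge behaves like a shallow object spread: transport-level `clientData` provides defaults and per-message `metadata` wins on key conflicts. A plain illustration of those semantics (not the SDK's internal code):

```ts
// Illustrative merge: shallow spread where per-message metadata
// overrides transport-level defaults on conflicting keys.
function mergeClientData(
  transportData: Record<string, unknown> | undefined,
  messageMetadata: Record<string, unknown> | undefined
): Record<string, unknown> {
  return { ...transportData, ...messageMetadata };
}
```

So with `clientData: { userId, model: "gpt-4o-mini" }` on the transport and `metadata: { model: "gpt-4o" }` on a message, the task sees `userId` from the transport and `model: "gpt-4o"` from the message.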
### Typed client data with clientDataSchema

Instead of manually parsing `clientData` with Zod in every hook, pass a `clientDataSchema` to `chat.task`. The schema validates the data once per turn, and `clientData` is typed in all hooks and `run`:

```ts
import { chat } from "@trigger.dev/sdk/ai";
import { streamText } from "ai";
import { openai } from "@ai-sdk/openai";
import { z } from "zod";

export const myChat = chat.task({
  id: "my-chat",
  clientDataSchema: z.object({
    model: z.string().optional(),
    userId: z.string(),
  }),
  onChatStart: async ({ chatId, clientData }) => {
    // clientData is typed as { model?: string; userId: string }
    await db.chat.create({
      data: { id: chatId, userId: clientData.userId },
    });
  },
  run: async ({ messages, clientData, signal }) => {
    // Same typed clientData — no manual parsing needed
    return streamText({
      model: openai(clientData?.model ?? "gpt-4o"),
      messages,
      abortSignal: signal,
    });
  },
});
```

The schema also types the `clientData` option on the frontend transport:

```ts
// TypeScript enforces that clientData matches the schema
const transport = useTriggerChatTransport<typeof myChat>({
  task: "my-chat",
  accessToken: getChatToken,
  clientData: { userId: currentUser.id },
});
```

Supports Zod, ArkType, Valibot, and other schema libraries supported by the SDK.

## Stop generation

Calling `stop()` from `useChat` sends a stop signal to the running task via input streams. The task aborts the current `streamText` call, but the run stays alive for the next message:

```tsx
const { messages, sendMessage, stop, status } = useChat({ transport });

{status === "streaming" && (
  <button type="button" onClick={stop}>
    Stop
  </button>
)}
```

See [Stop generation](/ai-chat/backend#stop-generation) in the backend docs for how to handle stop signals in your task.

## Self-hosting

If you're self-hosting Trigger.dev, pass the `baseURL` option:

```ts
const transport = useTriggerChatTransport({
  task: "my-chat",
  accessToken,
  baseURL: "https://your-trigger-instance.com",
});
```
---
title: "AI Chat"
sidebarTitle: "Overview"
description: "Run AI SDK chat completions as durable Trigger.dev tasks with built-in realtime streaming, multi-turn conversations, and message persistence."
---

## Overview

The `@trigger.dev/sdk` provides a custom [ChatTransport](https://sdk.vercel.ai/docs/ai-sdk-ui/transport) for the Vercel AI SDK's `useChat` hook. This lets you run chat completions as **durable Trigger.dev tasks** instead of fragile API routes — with automatic retries, observability, and realtime streaming built in.

**How it works:**

1. The frontend sends messages via `useChat` through `TriggerChatTransport`
2. The first message triggers a Trigger.dev task; subsequent messages resume the **same run** via input streams
3. The task streams `UIMessageChunk` events back via Trigger.dev's realtime streams
4. The AI SDK's `useChat` processes the stream natively — text, tool calls, reasoning, etc.
5. Between turns, the run stays warm briefly, then suspends (freeing compute) until the next message

No custom API routes needed. Your chat backend is a Trigger.dev task.

<Accordion title="How it works (sequence diagrams)">

### First message flow

```mermaid
sequenceDiagram
    participant User
    participant useChat as useChat + Transport
    participant API as Trigger.dev API
    participant Task as chat.task Worker
    participant LLM as LLM Provider

    User->>useChat: sendMessage("Hello")
    useChat->>useChat: No session for chatId → trigger new run
    useChat->>API: triggerTask(payload, tags: [chat:id])
    API-->>useChat: { runId, publicAccessToken }
    useChat->>useChat: Store session, subscribe to SSE

    API->>Task: Start run with ChatTaskWirePayload
    Task->>Task: onChatStart({ chatId, messages, clientData })
    Task->>Task: onTurnStart({ chatId, messages })
    Task->>LLM: streamText({ model, messages, abortSignal })
    LLM-->>Task: Stream response chunks
    Task->>API: streams.pipe("chat", uiStream)
    API-->>useChat: SSE: UIMessageChunks
    useChat-->>User: Render streaming text
    Task->>API: Write __trigger_turn_complete
    API-->>useChat: SSE: turn complete + refreshed token
    useChat->>useChat: Close stream, update session
    Task->>Task: onTurnComplete({ messages, stopped: false })
    Task->>Task: Wait for next message (warm → suspend)
```

### Multi-turn flow

```mermaid
sequenceDiagram
    participant User
    participant useChat as useChat + Transport
    participant API as Trigger.dev API
    participant Task as chat.task Worker
    participant LLM as LLM Provider

    Note over Task: Suspended, waiting for message

    User->>useChat: sendMessage("Tell me more")
    useChat->>useChat: Session exists → send via input stream
    useChat->>API: sendInputStream(runId, "chat-messages", payload)
    Note right of useChat: Only sends new message (not full history)

    API->>Task: Deliver to messagesInput
    Task->>Task: Wake from suspend
    Task->>Task: Append to accumulated messages
    Task->>Task: onTurnStart({ turn: 1 })
    Task->>LLM: streamText({ messages: [all accumulated] })
    LLM-->>Task: Stream response
    Task->>API: streams.pipe("chat", uiStream)
    API-->>useChat: SSE: UIMessageChunks
    useChat-->>User: Render streaming text
    Task->>API: Write __trigger_turn_complete
    Task->>Task: onTurnComplete({ turn: 1 })
    Task->>Task: Wait for next message (warm → suspend)
```

### Stop signal flow

```mermaid
sequenceDiagram
    participant User
    participant useChat as useChat + Transport
    participant API as Trigger.dev API
    participant Task as chat.task Worker
    participant LLM as LLM Provider

    Note over Task: Streaming response...

    User->>useChat: Click "Stop"
    useChat->>API: sendInputStream(runId, "chat-stop", { stop: true })
    API->>Task: Deliver to stopInput
    Task->>Task: stopController.abort()
    LLM-->>Task: Stream ends (AbortError)
    Task->>Task: cleanupAbortedParts(responseMessage)
    Note right of Task: Remove partial tool calls,<br/>mark streaming parts as done
    Task->>API: Write __trigger_turn_complete
    API-->>useChat: SSE: turn complete
    Task->>Task: onTurnComplete({ stopped: true })
    Task->>Task: Wait for next message
```

</Accordion>

<Note>
Requires `@trigger.dev/sdk` version **4.4.0 or later** and the `ai` package **v5.0.0 or later**.
</Note>

## How multi-turn works

### One run, many turns

The entire conversation lives in a **single Trigger.dev run**. After each AI response, the run waits for the next message via input streams. The frontend transport handles this automatically — it triggers a new run for the first message and sends subsequent messages to the existing run.

This means your conversation has full observability in the Trigger.dev dashboard: every turn is a span inside the same run.

### Warm and suspended states

After each turn, the run goes through two phases of waiting:

1. **Warm phase** (default 30s) — The run stays active and responds instantly to the next message. Uses compute.
2. **Suspended phase** (default up to 1h) — The run suspends, freeing compute. It wakes when the next message arrives. There's a brief delay as the run resumes.

If no message arrives within the turn timeout, the run ends gracefully. The next message from the frontend will automatically start a fresh run.

<Info>
You are not charged for compute during the suspended phase. Only the warm phase uses compute resources.
</Info>

### What the backend accumulates

The backend automatically accumulates the full conversation history across turns. After the first turn, the frontend transport only sends the new user message — not the entire history. This is handled transparently by the transport and task.

The accumulated messages are available in:

- `run()` as `messages` (`ModelMessage[]`) — for passing to `streamText`
- `onTurnStart()` as `uiMessages` (`UIMessage[]`) — for persisting before streaming
- `onTurnComplete()` as `uiMessages` (`UIMessage[]`) — for persisting after the response
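Because the backend always carries the full accumulated history, persistence can simply overwrite per chat. A sketch under stated assumptions: the in-memory `Map`, the `UIMessageLike` shape, and the availability of `chatId` alongside `uiMessages` in the hook arguments are all assumptions for illustration, not confirmed SDK types.

```ts
// Sketch: persist the accumulated UIMessage[] after each turn. A Map
// stands in for your database; the message shape is a simplification.
type UIMessageLike = { id: string; role: string; parts: unknown[] };

const chatHistory = new Map<string, UIMessageLike[]>();

async function saveChatMessages(chatId: string, uiMessages: UIMessageLike[]) {
  // Overwrite, not append: uiMessages already contains every prior turn.
  chatHistory.set(chatId, uiMessages);
}

// Hypothetical wiring inside chat.task({ ... }):
// onTurnComplete: async ({ chatId, uiMessages }) => {
//   await saveChatMessages(chatId, uiMessages);
// },
```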
## Three approaches

There are three ways to build the backend, from most opinionated to most flexible:

| Approach | Use when | What you get |
|----------|----------|--------------|
| [chat.task()](/ai-chat/backend#chattask) | Most apps | Auto-piping, lifecycle hooks, message accumulation, stop handling |
| [chat.createSession()](/ai-chat/backend#chatcreatesession) | Need a loop but not hooks | Async iterator with per-turn helpers, message accumulation, stop handling |
| [Raw task + primitives](/ai-chat/backend#raw-task-with-primitives) | Full control | Manual control of every step — use `chat.messages`, `chat.createStopSignal()`, etc. |

## Related

- [Quick Start](/ai-chat/quick-start) — Get a working chat in 3 steps
- [Backend](/ai-chat/backend) — Backend approaches in detail
- [Frontend](/ai-chat/frontend) — Transport setup, sessions, client data
- [Features](/ai-chat/features) — Per-run data, deferred work, streaming, subtasks
- [API Reference](/ai-chat/reference) — Complete reference tables
**Review comment:** In the Next.js App Router, the default export in `app/page.tsx` does not receive arbitrary props. A page component only receives route-derived props: `searchParams` (query string parameters) and, for dynamic routes, `params` (dynamic segments); in Next.js 15 both are passed as Promises. To access a dynamic `chatId`, the route must be dynamic (e.g. `app/chat/[chatId]/page.tsx`), reading it from `params`, or in a client component via the `useParams()` hook from `next/navigation`. The page-load example in the frontend doc should read `chatId` from a dynamic route segment rather than expecting it as a custom prop.

Sources: [nextjs.org/docs/messages/next-prerender-sync-params](https://nextjs.org/docs/messages/next-prerender-sync-params), [nextjs.org/docs/app/api-reference/file-conventions/dynamic-routes](https://nextjs.org/docs/app/api-reference/file-conventions/dynamic-routes)