9 changes: 9 additions & 0 deletions .changeset/cjs-output-and-json-response.md
@@ -0,0 +1,9 @@
---
'@tanstack/ai': minor
'@tanstack/ai-client': minor
'@tanstack/ai-event-client': patch
---

**Dual ESM + CJS output.** `@tanstack/ai`, `@tanstack/ai-client`, and `@tanstack/ai-event-client` now ship both ESM and CJS builds with type-aware dual `exports` maps (`import` → `./dist/esm/*.js`, `require` → `./dist/cjs/*.cjs`), plus a `main` field pointing at CJS. Fixes Metro / Expo / CJS-only resolvers that previously couldn't find `@tanstack/ai/adapters` or `@tanstack/ai-client` because the packages were ESM-only (#308).

**New `toJSONResponse(stream, init?)` on `@tanstack/ai`.** Drains the chat stream fully and returns a JSON-array `Response` with `Content-Type: application/json`. Use on server runtimes that can't emit `ReadableStream` responses (Expo's `@expo/server`, some edge proxies). Pair with the new `fetchJSON(url, options?)` connection adapter on `@tanstack/ai-client` — it fetches the array and replays each chunk into the normal `ChatClient` pipeline. Trade-off: no incremental rendering (every chunk arrives at once when the request resolves). Closes #309.
16 changes: 16 additions & 0 deletions docs/api/ai-client.md
@@ -166,6 +166,22 @@ import { fetchHttpStream } from "@tanstack/ai-client";
const adapter = fetchHttpStream("/api/chat");
```

### `fetchJSON(url, options?)`

Creates a connection adapter for non-streaming runtimes — pair with [`toJSONResponse`](./ai#tojsonresponsestream-init) on the server. The adapter POSTs `{ messages, data }`, expects a `StreamChunk[]` JSON body, and replays each chunk into the normal `ChatClient` pipeline.

```typescript
import { fetchJSON } from "@tanstack/ai-client";

const adapter = fetchJSON("/api/chat", {
  headers: {
    Authorization: "Bearer token",
  },
});
```

Use this on Expo / React Native / edge proxies that can't emit `ReadableStream` responses. Trade-off: no incremental rendering — the UI sees every chunk at once when the request resolves. Full walkthrough: [React Native & Expo](../chat/non-streaming-runtimes).

### `stream(connectFn)`

Creates a custom connection adapter.
26 changes: 26 additions & 0 deletions docs/api/ai.md
@@ -191,6 +191,32 @@ return toServerSentEventsResponse(stream);

A `Response` object suitable for HTTP endpoints with SSE headers (`Content-Type: text/event-stream`, `Cache-Control: no-cache`, `Connection: keep-alive`).

## `toJSONResponse(stream, init?)`

Drains the whole stream, then returns a JSON-array `Response` containing every `StreamChunk`. For runtimes that can't emit `ReadableStream` bodies (Expo's `@expo/server`, some edge proxies). Pair with [`fetchJSON`](./ai-client#fetchjsonurl-options) on the client.

```typescript
import { chat, toJSONResponse } from "@tanstack/ai";
import { openaiText } from "@tanstack/ai-openai";

const stream = chat({
  adapter: openaiText("gpt-5.2"),
  messages: [...],
});
return toJSONResponse(stream);
```

### Parameters

- `stream` - Async iterable of `StreamChunk`
- `init?` - Optional ResponseInit options (including `abortController`). Caller-provided headers are preserved; `Content-Type` defaults to `application/json`.

### Returns

A `Promise<Response>` with the stringified `StreamChunk[]` as the body. If the upstream stream throws mid-drain, a provided `abortController` is aborted and the error propagates.

> **Trade-off:** no incremental rendering — the UI sees every chunk at once when the request resolves. Use SSE / HTTP-stream responses when the runtime supports them. See [React Native & Expo](../chat/non-streaming-runtimes) for the full walkthrough.
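
A minimal usage sketch of the `init` argument (the `Cache-Control` header here is purely illustrative; `abortController` behaves as described under Parameters):

```typescript
import { chat, toJSONResponse } from "@tanstack/ai";
import { openaiText } from "@tanstack/ai-openai";

const stream = chat({
  adapter: openaiText("gpt-5.2"),
  messages: [...],
});

// Aborted by toJSONResponse if the upstream stream throws mid-drain.
const abortController = new AbortController();

return toJSONResponse(stream, {
  headers: { "Cache-Control": "no-store" }, // preserved; Content-Type still defaults to application/json
  abortController,
});
```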

## `maxIterations(count)`

Creates an agent loop strategy that limits iterations.
15 changes: 15 additions & 0 deletions docs/chat/connection-adapters.md
@@ -81,6 +81,21 @@ const { messages } = useChat({
});
```

### JSON Array (non-streaming runtimes)

For runtimes that can't emit `ReadableStream` responses — Expo / React Native, some edge proxies, certain legacy serverless runtimes — pair `fetchJSON` on the client with [`toJSONResponse`](../api/ai#tojsonresponsestream-init) on the server:

```typescript
import { useChat } from "@tanstack/ai-react";
import { fetchJSON } from "@tanstack/ai-client";

const { messages } = useChat({
  connection: fetchJSON("/api/chat"),
});
```

The server drains the whole chat stream before responding, and this adapter replays each chunk into the normal `ChatClient` pipeline. Trade-off: no incremental rendering — the UI sees every chunk at once when the request resolves. See [React Native & Expo](./non-streaming-runtimes) for the full walkthrough.

## Custom Adapters

For specialized use cases, you can create custom adapters to meet specific protocols or requirements:
95 changes: 95 additions & 0 deletions docs/chat/non-streaming-runtimes.md
@@ -0,0 +1,95 @@
---
title: React Native & Expo
id: non-streaming-runtimes
order: 4
description: "Run TanStack AI on React Native, Expo, and other runtimes that can't emit ReadableStream responses — using toJSONResponse on the server and fetchJSON on the client."
keywords:
- tanstack ai
- react native
- expo
- expo router
- metro bundler
- non-streaming
- toJSONResponse
- fetchJSON
- edge runtime
---

You have a React Native or Expo app and you want to add AI chat, but the usual `toServerSentEventsResponse()` helper crashes on Expo's server runtime with:

```
TypeError: Cannot read properties of undefined (reading 'statusText')
```

…and Metro refuses to resolve `@tanstack/ai/adapters` at all. By the end of this guide, you'll have a working chat flow on Expo/React Native using a JSON-array fallback path. The same approach works for any deployment target that can't stream `ReadableStream` responses (some edge proxies, legacy serverless runtimes, etc.).

## What's actually going wrong

Two separate problems show up on React Native / Expo:

1. **Module resolution.** `@tanstack/ai` and `@tanstack/ai-client` ship dual ESM + CJS builds with `main`/`module`/`exports` all wired up. If your version is new enough, Metro resolves them out of the box. If you're stuck on an older version, upgrade — older releases were ESM-only and Metro can't consume them.

2. **Response shape.** Expo's `@expo/server` runtime (and a few edge proxies) can't emit a `ReadableStream` body, which is what `toServerSentEventsResponse` and `toHttpResponse` return. The request silently fails on the client side and `isLoading` flips back to `false` immediately.

The fix for (2) is to drain the chat stream on the server, send the collected chunks as a single JSON array, and replay them on the client. You lose incremental rendering — the UI sees every chunk at once when the request resolves — but every other piece of the chat pipeline keeps working as-is.

## Step 1: Return a JSON-array response on the server

Swap `toServerSentEventsResponse` for `toJSONResponse` in your API route. On Expo Router:

```typescript
// app/api/chat+api.ts
import { chat, toJSONResponse } from "@tanstack/ai";
import { openaiText } from "@tanstack/ai-openai";

export async function POST(request: Request) {
  const { messages } = await request.json();

  const stream = chat({
    adapter: openaiText("gpt-5.2"),
    messages,
  });

  return toJSONResponse(stream);
}
```

`toJSONResponse` iterates the whole stream, collects each `StreamChunk` into an array, and returns a plain `Response` with `Content-Type: application/json`. It accepts the same `init` options as `toServerSentEventsResponse` (including `abortController`) and honours any `Content-Type` you pass in `headers`.
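
For intuition, the drain-and-serialise behaviour described above amounts to roughly the following. This is an illustrative sketch, not the library source, and it omits the `abortController` handling:

```typescript
// Sketch of what toJSONResponse does conceptually (assumes chunks are JSON-serialisable).
async function toJSONResponseSketch(
  stream: AsyncIterable<unknown>,
  init: ResponseInit = {},
): Promise<Response> {
  const chunks: Array<unknown> = [];
  for await (const chunk of stream) {
    chunks.push(chunk); // nothing is sent until the stream is fully drained
  }
  const headers = new Headers(init.headers);
  if (!headers.has("Content-Type")) {
    headers.set("Content-Type", "application/json"); // a caller-provided Content-Type wins
  }
  return new Response(JSON.stringify(chunks), { ...init, headers });
}
```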

## Step 2: Use `fetchJSON` as the connection adapter on the client

Swap `fetchServerSentEvents` for `fetchJSON` in your `useChat` call:

```typescript
import { useChat } from "@tanstack/ai-react";
import { fetchJSON } from "@tanstack/ai-client";

export function ChatScreen() {
  const { messages, sendMessage, isLoading } = useChat({
    connection: fetchJSON("/api/chat"),
  });

  // messages and isLoading behave identically to the streaming path —
  // they just update all at once when the request resolves.
  return <ChatUI messages={messages} onSend={sendMessage} busy={isLoading} />;
}
```

`fetchJSON` accepts the same `url` + `options` signature as the other connection adapters (static string or function, headers, credentials, custom `fetchClient`, extra body, abort signal). It POSTs the usual `{ messages, data }` body, decodes the response as a `StreamChunk[]`, and replays each chunk into the normal `ChatClient` pipeline — tool calls, approvals, thinking content, errors all behave the same way they do with SSE.
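
For example, per-request auth headers can come from the async options form (`getToken` is a hypothetical stand-in for your own token source):

```typescript
import { fetchJSON } from "@tanstack/ai-client";

declare function getToken(): Promise<string>; // hypothetical: your own auth helper

const connection = fetchJSON("/api/chat", async () => ({
  headers: { Authorization: `Bearer ${await getToken()}` },
}));
```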

## Step 3: Expect no incremental rendering

The one thing you give up: the UI won't update character-by-character. The request hangs until the server finishes the whole run, then the full message — including tool calls, results, and the final assistant turn — appears at once.

If this becomes a problem, the answer is to move to a runtime that supports streaming responses (Hono on Node, Next.js, TanStack Start, a real SSE endpoint proxied through a CDN that doesn't buffer) rather than to work around the limitation further. The JSON-array path is a pragmatic escape hatch, not the intended happy path.

## Going back to streaming when you can

If you later deploy your server code to a runtime that *does* support streaming, you only need to change two call sites — `toJSONResponse` → `toServerSentEventsResponse` and `fetchJSON` → `fetchServerSentEvents`. Everything downstream (messages, tool calls, approvals, `useChat` state, error handling) is identical between the two paths, so there's no cleanup to chase through the app.
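
Concretely, the two swaps look like this, using the same files as the steps above. On the server:

```typescript
// app/api/chat+api.ts: swap only the response helper
import { chat, toServerSentEventsResponse } from "@tanstack/ai";
import { openaiText } from "@tanstack/ai-openai";

export async function POST(request: Request) {
  const { messages } = await request.json();
  const stream = chat({ adapter: openaiText("gpt-5.2"), messages });
  return toServerSentEventsResponse(stream); // was: toJSONResponse(stream)
}
```

And on the client:

```typescript
// ChatScreen: swap only the connection adapter
import { useChat } from "@tanstack/ai-react";
import { fetchServerSentEvents } from "@tanstack/ai-client";

const { messages, sendMessage, isLoading } = useChat({
  connection: fetchServerSentEvents("/api/chat"), // was: fetchJSON("/api/chat")
});
```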

## Next Steps

- [Streaming](./streaming) — the normal incremental-rendering path
- [Connection Adapters](./connection-adapters) — full list of client-side adapters, including `fetchJSON`
- [API Reference: `toJSONResponse`](../api/ai#tojsonresponsestream-init) — server-side helper reference
- [API Reference: `fetchJSON`](../api/ai-client#fetchjsonurl-options) — client-side adapter reference
2 changes: 2 additions & 0 deletions docs/chat/streaming.md
@@ -55,6 +55,8 @@ export async function POST(request: Request) {
}
```

> **Running on Expo, React Native, or another runtime that can't emit `ReadableStream` responses?** See [React Native & Expo](./non-streaming-runtimes) for the `toJSONResponse` + `fetchJSON` fallback pair.

## Client-Side Streaming

The `useChat` hook automatically handles streaming:
4 changes: 4 additions & 0 deletions docs/config.json
@@ -96,6 +96,10 @@
"label": "Connection Adapters",
"to": "chat/connection-adapters"
},
{
"label": "React Native & Expo",
"to": "chat/non-streaming-runtimes"
},
{
"label": "Structured Outputs",
"to": "chat/structured-outputs"
11 changes: 9 additions & 2 deletions packages/typescript/ai-client/package.json
@@ -18,12 +18,19 @@
"streaming"
],
"type": "module",
"main": "./dist/cjs/index.cjs",
"module": "./dist/esm/index.js",
"types": "./dist/esm/index.d.ts",
"exports": {
".": {
"types": "./dist/esm/index.d.ts",
"import": "./dist/esm/index.js"
"import": {
"types": "./dist/esm/index.d.ts",
"default": "./dist/esm/index.js"
},
"require": {
"types": "./dist/cjs/index.d.cts",
"default": "./dist/cjs/index.cjs"
}
}
},
"files": [
75 changes: 75 additions & 0 deletions packages/typescript/ai-client/src/connection-adapters.ts
@@ -424,6 +424,81 @@ export function fetchHttpStream(
}
}

/**
* Create a JSON-array connection adapter for server runtimes that cannot
* stream `ReadableStream` responses (e.g. Expo's `@expo/server`, certain
* edge proxies). Pair with `toJSONResponse(stream)` on the server: the
* server drains the chat stream fully, JSON-serialises the collected
* chunks into an array, and this adapter fetches the array and replays
* each chunk one-by-one into the normal client pipeline.
*
* Trade-off: you lose incremental rendering — the UI sees every chunk
* only after the request resolves. Use SSE/HTTP-stream adapters when the
* runtime supports them.
*
* @param url - The API endpoint URL (or a function that returns the URL)
* @param options - Fetch options (headers, credentials, body, etc.) or a function that returns options (can be async)
* @returns A connection adapter for JSON-array responses
*
* @example
* ```typescript
* // Expo / RN client that hits an Expo API route returning toJSONResponse(stream)
* const connection = fetchJSON('/api/chat')
*
* const client = new ChatClient({ connection })
* ```
*/
export function fetchJSON(
  url: string | (() => string),
  options:
    | FetchConnectionOptions
    | (() => FetchConnectionOptions | Promise<FetchConnectionOptions>) = {},
): ConnectConnectionAdapter {
  return {
    async *connect(messages, data, abortSignal) {
      const resolvedUrl = typeof url === 'function' ? url() : url
      const resolvedOptions =
        typeof options === 'function' ? await options() : options

      const requestHeaders: Record<string, string> = {
        'Content-Type': 'application/json',
        ...mergeHeaders(resolvedOptions.headers),
      }

      const requestBody = {
        messages,
        data,
        ...resolvedOptions.body,
      }

      const fetchClient = resolvedOptions.fetchClient ?? fetch
      const response = await fetchClient(resolvedUrl, {
        method: 'POST',
        headers: requestHeaders,
        body: JSON.stringify(requestBody),
        credentials: resolvedOptions.credentials || 'same-origin',
        signal: abortSignal || resolvedOptions.signal,
      })

      if (!response.ok) {
        throw new Error(
          `HTTP error! status: ${response.status} ${response.statusText}`,
        )
      }

      const payload = (await response.json()) as unknown
      if (!Array.isArray(payload)) {
        throw new Error(
          'fetchJSON: expected response body to be a JSON array of StreamChunks. Did you forget to use `toJSONResponse(stream)` on the server?',
        )
      }
      for (const chunk of payload) {
        yield chunk as StreamChunk
      }
    },
  }
}

/**
* Create a direct stream connection adapter (for server functions or direct streams)
*
1 change: 1 addition & 0 deletions packages/typescript/ai-client/src/index.ts
@@ -55,6 +55,7 @@ export type {
export {
  fetchServerSentEvents,
  fetchHttpStream,
  fetchJSON,
  stream,
  rpcStream,
  type ConnectConnectionAdapter,