feat: dual ESM+CJS builds + toJSONResponse/fetchJSON for non-streaming runtimes #478

AlemTuzlak wants to merge 7 commits into main
Conversation
Fixes #308 and #309.

- `@tanstack/ai`, `@tanstack/ai-client`, and `@tanstack/ai-event-client` now emit both `dist/esm/*.js` and `dist/cjs/*.cjs` with matching `.d.cts` files. `package.json` exports gained nested `import`/`require` conditions plus a `main` field so Metro / Expo / other CJS-only resolvers can find the subpath exports (`./adapters`, `./middlewares`, etc.).
- New `toJSONResponse(stream, init?)` on `@tanstack/ai`: drains the stream and returns a JSON-array `Response`, for runtimes that can't stream `ReadableStream` bodies (Expo's `@expo/server`, edge proxies).
- New `fetchJSON(url, options?)` connection adapter on `@tanstack/ai-client`: the client-side counterpart. It fetches the JSON array and replays each chunk into the normal `ChatClient` pipeline (see the sketch below).
- Trade-off documented in both: you lose incremental rendering; use SSE / HTTP-stream responses when the runtime supports them.
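A minimal sketch of the client side, assuming only what the description above states (the adapter POSTs `{ messages, data }` and replays the buffered array); the `ChatClient` wiring shown in the comment is an assumption, not verified against the released API:

```ts
import { fetchJSON } from '@tanstack/ai-client'

// Buffered counterpart to fetchServerSentEvents / fetchHttpStream:
// the request resolves once the whole JSON array has arrived, then every
// chunk is replayed through the normal client pipeline in order.
const connection = fetchJSON('/api/chat')

// Hypothetical wiring, not verified against the released ChatClient API:
// const client = new ChatClient({ connection })
```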
🚀 Changeset Version Preview: 12 package(s) bumped directly, 21 bumped as dependents.

🟥 Major bumps
🟨 Minor bumps
🟩 Patch bumps
No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID:
📒 Files selected for processing (6)
✅ Files skipped from review due to trivial changes (1)
🚧 Files skipped from review as they are similar to previous changes (2)
📝 Walkthrough

Adds dual ESM/CJS package entrypoints for TanStack AI packages and introduces two APIs: server-side `toJSONResponse` on `@tanstack/ai` and the client-side `fetchJSON` connection adapter on `@tanstack/ai-client`.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client as Client
    participant ChatClient as ChatClient
    participant Server as Server
    participant StreamProducer as StreamProducer
    Client->>ChatClient: call fetchJSON(url, options)
    ChatClient->>Server: POST { messages, data }
    Server->>StreamProducer: start async chat stream
    StreamProducer-->>Server: yield StreamChunk...
    Server->>Server: toJSONResponse(stream) drains all chunks -> JSON array
    Server-->>ChatClient: HTTP 200 body: [StreamChunk, ...]
    ChatClient->>ChatClient: parse array, replay chunks into pipeline
    ChatClient->>Client: deliver reconstructed events (non-incremental)
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
🚥 Pre-merge checks: ✅ Passed checks (5 passed)
@tanstack/ai
@tanstack/ai-anthropic
@tanstack/ai-client
@tanstack/ai-code-mode
@tanstack/ai-code-mode-skills
@tanstack/ai-devtools-core
@tanstack/ai-elevenlabs
@tanstack/ai-event-client
@tanstack/ai-fal
@tanstack/ai-gemini
@tanstack/ai-grok
@tanstack/ai-groq
@tanstack/ai-isolate-cloudflare
@tanstack/ai-isolate-node
@tanstack/ai-isolate-quickjs
@tanstack/ai-ollama
@tanstack/ai-openai
@tanstack/ai-openrouter
@tanstack/ai-preact
@tanstack/ai-react
@tanstack/ai-react-ui
@tanstack/ai-solid
@tanstack/ai-solid-ui
@tanstack/ai-svelte
@tanstack/ai-vue
@tanstack/ai-vue-ui
@tanstack/preact-ai-devtools
@tanstack/react-ai-devtools
@tanstack/solid-ai-devtools
🧹 Nitpick comments (3)
packages/typescript/ai-client/src/connection-adapters.ts (1)
495-497: Optional: honor `abortSignal` while replaying chunks.

Since the whole payload is already in memory, the loop ignores `abortSignal` during replay. If the consumer aborts late (e.g., user navigates away before chunks are drained by the pipeline), chunks will continue to flow. Consider a cheap check to bail out early:

♻️ Suggested tweak

```diff
 for (const chunk of payload) {
+  if (abortSignal?.aborted) break
   yield chunk as StreamChunk
 }
```

Not a blocker given the buffered/non-streaming nature of this adapter.
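For context, a self-contained sketch of what the patched replay loop would look like; `StreamChunk`'s import location and the surrounding generator shape are assumed from the review excerpt, not copied from the actual adapter source:

```ts
import type { StreamChunk } from '@tanstack/ai' // assumed export location

// Hypothetical standalone version of the replay generator: the payload is
// already buffered in memory, so the only cooperative cancellation point
// is a per-chunk check of the caller's AbortSignal.
async function* replayChunks(
  payload: Array<unknown>,
  abortSignal?: AbortSignal,
): AsyncGenerator<StreamChunk> {
  for (const chunk of payload) {
    if (abortSignal?.aborted) return // stop yielding once aborted
    yield chunk as StreamChunk
  }
}
```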
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai-client/src/connection-adapters.ts` around lines 495-497: The replay loop that yields chunks from the in-memory payload currently ignores abortSignal and continues pushing chunks even after cancellation; inside the loop that iterates over payload and yields each item (the block yielding chunk as StreamChunk), check the provided abortSignal (e.g., abortSignal?.aborted) on each iteration and bail out immediately (return/stop iteration) when aborted so the generator stops producing further StreamChunk values.

packages/typescript/ai-client/package.json (1)
21-35: Dual exports look correct; consider exposing `./package.json`.

The nested `import`/`require` conditions with type-aware `types` keys are the recommended Node resolution shape, and `main` → `.cjs` pairs correctly with `"type": "module"` (Node uses the extension to disambiguate). publint strict passing is a good signal.

Optional: add `"./package.json": "./package.json"` to `exports` so tools that probe the manifest (some bundlers, version resolvers) don't get blocked by the closed export map. Same applies to `packages/typescript/ai/package.json` and `packages/typescript/ai-event-client/package.json`.

Proposed addition

```diff
 "exports": {
   ".": {
     "import": {
       "types": "./dist/esm/index.d.ts",
       "default": "./dist/esm/index.js"
     },
     "require": {
       "types": "./dist/cjs/index.d.cts",
       "default": "./dist/cjs/index.cjs"
     }
-  }
+  },
+  "./package.json": "./package.json"
 },
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai-client/package.json` around lines 21-35: Add an explicit export entry for the package manifest so consumers and tooling can read it: update the package.json "exports" object to include the key "./package.json" mapping to "./package.json" (mirror this change in packages/typescript/ai/package.json and packages/typescript/ai-event-client/package.json as well); locate the "exports" block that currently defines "." with "import"/"require" and add the "./package.json": "./package.json" mapping alongside those entries.

packages/typescript/ai/tests/stream-to-response.test.ts (1)
875-945: LGTM: good coverage for `toJSONResponse`.

Tests cover the four meaningful branches (defaults, custom init/headers, explicit Content-Type passthrough, and abort-on-upstream-error with rethrow). Nice use of `toHaveBeenCalledOnce()` to assert abort happens exactly once.

One optional addition worth considering: a test that asserts the controller is not aborted when the stream drains successfully, to lock in that behavior against regressions.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai/tests/stream-to-response.test.ts` around lines 875-945: Add a test to ensure the provided AbortController is NOT aborted when the stream drains successfully: create an AbortController, spy on its abort method (vi.spyOn(abortController, 'abort')), call toJSONResponse with createMockStream([...successful chunks...]) and the abortController in options, await the response.json() (or response completion), then assert abortSpy was not called (toHaveBeenCalledTimes(0) / not.toHaveBeenCalled()). Reference toJSONResponse, createMockStream, and AbortController/abort in the test so behavior is covered alongside the existing abort-on-error test.
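A minimal sketch of that suggested test. The `createMockStream` helper is stood in for locally, and the chunk shape plus the `abortController` option on `toJSONResponse` are assumptions taken from the review comments, not the real fixtures:

```ts
import { expect, it, vi } from 'vitest'
import { toJSONResponse } from '@tanstack/ai'

// Stand-in for the suite's createMockStream helper (assumed shape).
async function* createMockStream<T>(chunks: Array<T>): AsyncGenerator<T> {
  for (const chunk of chunks) yield chunk
}

it('does not abort the controller when the stream drains successfully', async () => {
  const abortController = new AbortController()
  const abortSpy = vi.spyOn(abortController, 'abort')

  // Chunk shape and the { abortController } option are assumptions.
  const response = await toJSONResponse(
    createMockStream([{ type: 'content', delta: 'hello' }]),
    { abortController },
  )
  await response.json() // drain the body to completion

  expect(abortSpy).not.toHaveBeenCalled()
})
```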
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@packages/typescript/ai-client/package.json`:
- Around line 21-35: Add an explicit export entry for the package manifest so
consumers and tooling can read it: update the package.json "exports" object to
include the key "./package.json" mapping to "./package.json" (mirror this change
in packages/typescript/ai/package.json and
packages/typescript/ai-event-client/package.json as well); locate the "exports"
block that currently defines "." with "import"/"require" and add the
"./package.json": "./package.json" mapping alongside those entries.
In `@packages/typescript/ai-client/src/connection-adapters.ts`:
- Around line 495-497: The replay loop that yields chunks from the in-memory
payload currently ignores abortSignal and continues pushing chunks even after
cancellation; inside the loop that iterates over payload and yields each item
(the block yielding chunk as StreamChunk), check the provided abortSignal (e.g.,
abortSignal?.aborted) on each iteration and bail out immediately (return/stop
iteration) when aborted so the generator stops producing further StreamChunk
values.
In `@packages/typescript/ai/tests/stream-to-response.test.ts`:
- Around line 875-945: Add a test to ensure the provided AbortController is NOT
aborted when the stream drains successfully: create an AbortController, spy on
its abort method (vi.spyOn(abortController, 'abort')), call toJSONResponse with
createMockStream([...successful chunks...]) and the abortController in options,
await the response.json() (or response completion), then assert abortSpy was not
called (toHaveBeenCalledTimes(0) / not.toHaveBeenCalled()). Reference
toJSONResponse, createMockStream, and AbortController/abort in the test so
behavior is covered alongside the existing abort-on-error test.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: d831b837-151e-456f-8d68-13b77d844f5a
📒 Files selected for processing (12)
- .changeset/cjs-output-and-json-response.md
- packages/typescript/ai-client/package.json
- packages/typescript/ai-client/src/connection-adapters.ts
- packages/typescript/ai-client/src/index.ts
- packages/typescript/ai-client/vite.config.ts
- packages/typescript/ai-event-client/package.json
- packages/typescript/ai-event-client/vite.config.ts
- packages/typescript/ai/package.json
- packages/typescript/ai/src/index.ts
- packages/typescript/ai/src/stream-to-response.ts
- packages/typescript/ai/tests/stream-to-response.test.ts
- packages/typescript/ai/vite.config.ts
…rences

Serves three personas: Expo/RN builders hitting streaming-response crashes, builders on other non-streaming runtimes (edge proxies, legacy serverless), and evaluators checking whether TanStack AI supports RN/Expo.

- New journey page at docs/chat/non-streaming-runtimes.md titled 'React Native & Expo'. A → B: Expo API route crashing on streaming response → working chat via toJSONResponse + fetchJSON.
- Cross-linked from chat/streaming.md (callout near toServerSentEventsResponse) and chat/connection-adapters.md (new 'JSON Array (non-streaming runtimes)' subsection).
- Added the new entries to the API references: toJSONResponse in docs/api/ai.md and fetchJSON in docs/api/ai-client.md, each pointing back to the walkthrough.
- Registered the new page in docs/config.json under 'Chat & Streaming', sequenced right after Connection Adapters.
🧹 Nitpick comments (1)
docs/api/ai-client.md (1)
145-184: Consider varying sentence structure to improve readability.

Three connection adapter sections in succession begin with "Creates," making the documentation slightly repetitive. Consider varying the opening phrase for better flow.
✍️ Suggested rewording
```diff
 ### `fetchServerSentEvents(url, options?)`

-Creates an SSE connection adapter.
+Establishes an SSE connection adapter for server-sent events streaming.
```

or

```diff
 ### `fetchJSON(url, options?)`

-Creates a connection adapter for non-streaming runtimes — pair with [`toJSONResponse`](./ai#tojsonresponsestream-init) on the server.
+Provides a connection adapter for non-streaming runtimes — pair with [`toJSONResponse`](./ai#tojsonresponsestream-init) on the server.
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/api/ai-client.md` around lines 145-184: The three adapter descriptions (fetchServerSentEvents, fetchHttpStream, fetchJSON) all start with the same verb "Creates," making the copy repetitive; update the lead sentence for one or two of these functions to vary phrasing (e.g., "Opens an SSE connection adapter for...", "Provides an HTTP stream adapter that...", or "Returns a JSON-based adapter for non-streaming runtimes...") while keeping the technical details intact (include options example and the note about POSTing { messages, data } for fetchJSON and the trade-off about no incremental rendering), and ensure the function names fetchServerSentEvents, fetchHttpStream, and fetchJSON remain present so readers can locate the API.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@docs/api/ai-client.md`:
- Around line 145-184: The three adapter descriptions (fetchServerSentEvents,
fetchHttpStream, fetchJSON) all start with the same verb "Creates," making the
copy repetitive; update the lead sentence for one or two of these functions to
vary phrasing (e.g., "Opens an SSE connection adapter for...", "Provides an HTTP
stream adapter that...", or "Returns a JSON-based adapter for non-streaming
runtimes...") while keeping the technical details intact (include options
example and the note about POSTing { messages, data } for fetchJSON and the
trade-off about no incremental rendering), and ensure the function names
fetchServerSentEvents, fetchHttpStream, and fetchJSON remain present so readers
can locate the API.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: c65a6675-dcd6-4072-9b42-d6a27f6233ca
📒 Files selected for processing (6)
- docs/api/ai-client.md
- docs/api/ai.md
- docs/chat/connection-adapters.md
- docs/chat/non-streaming-runtimes.md
- docs/chat/streaming.md
- docs/config.json
✅ Files skipped from review due to trivial changes (4)
- docs/config.json
- docs/chat/non-streaming-runtimes.md
- docs/chat/streaming.md
- docs/chat/connection-adapters.md
…on-response

# Conflicts:
#	packages/typescript/ai/package.json
…JSON

Address CR findings:

- toJSONResponse now checks `abortController.signal.aborted` on entry (throws the signal's reason without draining the upstream) and inside the drain loop (breaks early if aborted mid-stream), matching the semantics of toServerSentEventsStream and toHttpStream. Previously the signal was only consulted from the error-path catch handler, so a pre-aborted controller drained the full stream anyway and a mid-drain abort was silently ignored.
- Add two new tests covering pre-abort (infinite stream never pulled) and mid-drain abort (bounded pulls after abort fires).
- Add 8 fetchJSON tests covering happy path, non-2xx, non-array body with descriptive error, url-as-function, options-as-async-function, options.body merging, custom fetchClient override, and AbortSignal propagation; the adapter previously had zero direct test coverage.
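A rough sketch of the abort semantics this commit describes, reduced to a standalone helper; the real `toJSONResponse` internals are not shown here, only the two checkpoints the commit names:

```ts
// Illustrative only: consult the signal before pulling anything from the
// upstream, and again between pulls, mirroring the commit's description.
async function drainToArray<T>(
  stream: AsyncIterable<T>,
  signal?: AbortSignal,
): Promise<Array<T>> {
  if (signal?.aborted) {
    // Pre-aborted: surface the abort reason without touching the upstream.
    throw signal.reason
  }
  const chunks: Array<T> = []
  for await (const chunk of stream) {
    chunks.push(chunk)
    if (signal?.aborted) break // mid-drain abort: stop pulling early
  }
  return chunks
}
```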
Summary
Two related fixes for Expo / Metro / non-streaming runtimes.
#308 — dual ESM + CJS output
`@tanstack/ai`, `@tanstack/ai-client`, and `@tanstack/ai-event-client` were ESM-only (`import` condition only, no `require` / `default`). Metro can't resolve that configuration, even with `unstable_enablePackageExports: true`, so consumers saw `Cannot resolve @tanstack/ai/adapters` etc.
Changes:
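For reference, the per-package `exports` shape after this change, mirroring the map quoted in the review diff above; subpath entries such as `./adapters` follow the same pattern and are omitted here:

```json
{
  "type": "module",
  "main": "./dist/cjs/index.cjs",
  "exports": {
    ".": {
      "import": {
        "types": "./dist/esm/index.d.ts",
        "default": "./dist/esm/index.js"
      },
      "require": {
        "types": "./dist/cjs/index.d.cts",
        "default": "./dist/cjs/index.cjs"
      }
    }
  }
}
```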
#309 — `toJSONResponse` + `fetchJSON`
Expo's `@expo/server` can't emit `ReadableStream` responses, so `toServerSentEventsResponse` / `toHttpResponse` crash with `Cannot read properties of undefined (reading 'statusText')`.
Changes:
Trade-off: you lose incremental rendering — the UI sees everything at once when the request resolves. Docs in both JSDoc blocks call this out and tell users to prefer SSE / HTTP-stream when the runtime supports them.
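A hedged sketch of the server side on an Expo API route: the route file convention is Expo Router's, but `createChatStream` is a hypothetical placeholder for whatever produces the chunk stream in a real app, and the request body shape is taken from the PR description:

```ts
// app/api/chat+api.ts (Expo Router API route convention)
import { toJSONResponse } from '@tanstack/ai'

// Hypothetical placeholder for the app's own provider/adapter setup.
declare function createChatStream(
  messages: unknown,
  data: unknown,
): AsyncIterable<unknown>

export async function POST(request: Request): Promise<Response> {
  const { messages, data } = await request.json()
  const stream = createChatStream(messages, data)

  // Instead of toServerSentEventsResponse / toHttpResponse (which need
  // ReadableStream bodies), buffer everything into one JSON-array Response.
  return toJSONResponse(stream)
}
```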
Test plan
Summary by CodeRabbit
New Features
Documentation
Tests