
fix(provider): don't inject stream_options for user-configured providers #19000

Open
gypelayo wants to merge 3 commits into anomalyco:dev from gypelayo:fix/ollama-stream-options

Conversation


@gypelayo gypelayo commented Mar 24, 2026

Issue for this PR

Closes #10402

Type of change

  • Bug fix

What does this PR do?

Two related bugs affect user-configured @ai-sdk/openai-compatible providers (local Ollama, LM Studio, etc.):

1. stream_options injection causes connection failure

opencode unconditionally injects stream_options: { include_usage: true } into every streaming request for openai-compatible providers. Ollama doesn't support this field and rejects the request. The fix skips the injection when the provider source is "config" (i.e. user-defined in opencode.json).
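A minimal sketch of the gating logic (type names and shapes here are illustrative, not opencode's actual internals):

```typescript
// Provider sources: "env"/"api"/"custom" come from models.dev metadata,
// "config" means the user defined the provider in opencode.json.
type ProviderSource = "env" | "api" | "custom" | "config";

interface StreamBody {
  model: string;
  stream: boolean;
  stream_options?: { include_usage: boolean };
}

function buildStreamBody(
  model: string,
  source: ProviderSource,
  includeUsage?: boolean, // explicit user opt-in from provider options
): StreamBody {
  const body: StreamBody = { model, stream: true };
  // User-configured providers (e.g. local Ollama) may reject stream_options,
  // so only inject it for models.dev providers, or when the user opted in.
  if (source !== "config" || includeUsage === true) {
    body.stream_options = { include_usage: true };
  }
  return body;
}
```

With this shape, a local Ollama provider gets a plain streaming body, while cloud providers keep usage reporting.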

2. Raw JSON tool calls are not executed

Some models (e.g. qwen2.5-coder) return tool calls as plain JSON text content instead of structured tool_calls, so opencode emits a text response instead of executing the tool. The fix adds a wrapStream middleware for source: "config" providers that detects this pattern (a finish_reason: stop with no tool-call parts, where the content parses as {"name": "...", "arguments": {...}} and names a known tool) and converts it into a proper tool-call.
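The detection step of that middleware could look roughly like this (a hypothetical helper under assumed shapes, not the PR's actual code):

```typescript
// Result of recognizing a raw JSON tool call in text content.
interface RawToolCall {
  name: string;
  arguments: Record<string, unknown>;
}

// Returns the tool call if `text` is exactly a {"name", "arguments"} object
// naming a known tool; otherwise null, so ordinary text passes through.
function parseRawToolCall(
  text: string,
  knownTools: Set<string>,
): RawToolCall | null {
  let parsed: unknown;
  try {
    parsed = JSON.parse(text.trim());
  } catch {
    return null; // not JSON at all: treat as normal text content
  }
  if (typeof parsed !== "object" || parsed === null) return null;
  const obj = parsed as { name?: unknown; arguments?: unknown };
  if (typeof obj.name !== "string" || !knownTools.has(obj.name)) return null;
  if (typeof obj.arguments !== "object" || obj.arguments === null) return null;
  return { name: obj.name, arguments: obj.arguments as Record<string, unknown> };
}
```

Requiring the tool name to match a known tool is what keeps the middleware from misfiring on ordinary JSON the model happens to emit as prose.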

Both fixes are scoped to source: "config" providers only, so cloud providers are unaffected.

How did you verify your code works?

  • Added a test asserting stream_options is absent from the request body for a local provider. Confirmed failing before fix, passing after.
  • Added a test that mocks a local provider returning raw JSON tool call content and asserts the stream produces a tool-call part. Confirmed failing before fix, passing after.
  • Full provider and session test suites pass (250 + 118 tests).

Screenshots / recordings

No UI changes.

Checklist

  • I have tested my changes locally
  • I have not included unrelated changes in this PR

…ompatible providers

Local Ollama and similar local LLMs don't support stream_options: { include_usage: true }.
Previously this was unconditionally injected for all @ai-sdk/openai-compatible providers,
including user-configured ones (source: 'config'), causing requests to local Ollama to fail.

Only inject includeUsage for providers loaded from models.dev (source: env/api/custom),
where we know the cloud endpoint supports it. User-configured providers can still opt in
by explicitly setting options.includeUsage: true in their opencode.json.
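Per the commit message, the explicit opt-in might look like this in opencode.json (a sketch; the surrounding provider-block schema is assumed, only the options.includeUsage field comes from the commit message):

```json
{
  "provider": {
    "ollama": {
      "options": {
        "includeUsage": true
      }
    }
  }
}
```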
@github-actions github-actions bot added needs:compliance This means the issue will auto-close after 2 hours. and removed needs:compliance This means the issue will auto-close after 2 hours. labels Mar 24, 2026
@github-actions (Contributor) commented

Thanks for updating your PR! It now meets our contributing guidelines. 👍

Some local models (e.g. qwen2.5-coder) return tool calls as plain JSON
text content instead of structured tool_calls, causing opencode to emit
a text-delta instead of a tool-call. Add a wrapStream middleware for
source=config providers that detects this pattern and converts it into a
proper tool-call stream part.


Development

Successfully merging this pull request may close these issues.

not compatible to ollama/qwen2.5-coder:7b