## Summary

We integrated MiniMax through Sourcebot Ask and captured the real upstream traffic with a transparent proxy. We found two classes of failures under the real tool-calling workflow:

- Streaming requests sometimes return `200 OK` with `text/event-stream` but never terminate correctly, eventually ending in a client-side timeout.
- In OpenAI-compatible tool-calling mode, some requests fail because the model / compatibility layer emits invalid function argument JSON.

Both are reproducible under a real tool-calling workload, not only with trivial prompts.
## Client context

Client stack:

- Sourcebot Ask
- Node 24
- AI SDK-based provider integration
- tested with both:
  - OpenAI-compatible `/v1/chat/completions`
  - Anthropic-compatible `/anthropic/v1/messages`
## Failure 1: stream starts, then never ends

### Observed behavior

For some requests, the server returns:

```
200 OK
content-type: text/event-stream
```

Streaming begins, but the response never completes normally. The client later fails with a read timeout (`read ETIMEDOUT`); the surface symptom in the client is `terminated`.

This is not a generic network issue:

- the same host/container can complete independent long `fetch`/`undici` streams successfully
- the problem appears only under the real Ask/tool-calling workflow
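The hang can be made explicit on the client side with an idle watchdog on the stream. The sketch below is a minimal illustration of that symptom check, not the AI SDK's actual timeout handling; the stream source is abstracted as a `ReadableStream`, which in the real client would be `res.body` from a `fetch` of the chat-completions endpoint.

```typescript
// Sketch: consume a streaming body, but fail fast if no chunk arrives
// within `idleMs`, instead of hanging until a transport-level read timeout.
async function readWithIdleTimeout(
  stream: ReadableStream<Uint8Array>,
  idleMs: number,
): Promise<string> {
  const reader = stream.getReader();
  const decoder = new TextDecoder();
  let out = "";
  try {
    for (;;) {
      let timer!: ReturnType<typeof setTimeout>;
      const idle = new Promise<never>((_, reject) => {
        timer = setTimeout(
          () => reject(new Error(`stream idle for ${idleMs} ms`)),
          idleMs,
        );
      });
      // Whichever settles first wins; the timer is cleared either way.
      const result = await Promise.race([reader.read(), idle]).finally(() =>
        clearTimeout(timer),
      );
      if (result.done) return out;
      out += decoder.decode(result.value, { stream: true });
    }
  } catch (err) {
    await reader.cancel(); // release the underlying connection
    throw err;
  }
}
```

A healthy stream returns the accumulated text; a stream that goes silent (the behavior above) rejects with the idle error instead of `read ETIMEDOUT`.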
### Request characteristics when this happens

Typical requests include:

- `stream=true`
- tool calling enabled
- accumulated multi-turn messages
- large tool result payloads
- in Anthropic-compatible mode, `thinking` may also be enabled

Example sanitized evidence: https://gist.github.com/leozhengliu-pixel/6252d9b8415ab65dd106d11e2bf59da0
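For reference, a minimal request body combining these characteristics might look like the following. All concrete values (model id, tool schema, message contents) are illustrative placeholders, not captured payloads; see the gist above for sanitized real traffic.

```typescript
// Illustrative only: the shape of a request that combines the
// characteristics listed above. Every concrete value is a placeholder.
const reproRequest = {
  model: "placeholder-model-id", // not a confirmed MiniMax model name
  stream: true,                  // SSE response requested
  tools: [
    {
      type: "function",
      function: {
        name: "read_file",
        description: "Read a file from the repository",
        parameters: {
          type: "object",
          properties: { path: { type: "string" } },
          required: ["path"],
        },
      },
    },
  ],
  // Accumulated multi-turn history ending in a large tool result payload.
  messages: [
    { role: "user", content: "placeholder user turn" },
    {
      role: "assistant",
      tool_calls: [
        {
          id: "call_1",
          type: "function",
          function: { name: "read_file", arguments: '{"path":"README.md"}' },
        },
      ],
    },
    { role: "tool", tool_call_id: "call_1", content: "x".repeat(100_000) },
  ],
};
```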
## Failure 2: invalid function arguments in OpenAI-compatible mode

### Observed behavior

A real OpenAI-compatible request produced this tool call:

```json
{
  "id": "call_function_j32rtdm8f446_1",
  "type": "function",
  "function": {
    "name": "read_file",
    "arguments": "\"{\""
  }
}
```

This is not valid function argument JSON for the declared tool schema. The client then fails tool execution with:

```
Invalid input for tool read_file: JSON parsing failed: Text: {.
Error message: Expected property name or '}' in JSON at position 1 (line 1 column 2)
```

The upstream response for that request is `400 Bad Request`.
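A defensive parse makes the failure mode explicit. The helper below is hypothetical, not how the AI SDK actually validates arguments: the `arguments` field from the response above holds the JSON string `"{"`, whose content is the bare text `{`, which is not a parseable argument object.

```typescript
// Hypothetical helper: validate that tool_calls[].function.arguments holds
// a JSON object before dispatching the tool, instead of failing deep inside
// tool execution.
function parseToolArguments(raw: string): Record<string, unknown> {
  let parsed: unknown;
  try {
    parsed = JSON.parse(raw);
  } catch (err) {
    throw new Error(
      `Invalid tool arguments ${JSON.stringify(raw)}: ${(err as Error).message}`,
    );
  }
  if (parsed === null || typeof parsed !== "object" || Array.isArray(parsed)) {
    throw new Error(`Tool arguments must be a JSON object, got: ${raw}`);
  }
  return parsed as Record<string, unknown>;
}
```

With this check, a well-formed call like `{"path":"a.ts"}` dispatches normally, while the malformed payload above is rejected up front with a clear error.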
### Important detail

This is not only a small-model quality issue. We are seeing both:

- malformed tool arguments
- hanging SSE streams

So the compatibility problem appears broader than prompt quality.
## What we need clarified

- In your OpenAI-compatible API, what exact guarantees do you provide for:
  - `tool_calls[].function.arguments`
  - streaming termination semantics
  - chunk / finish events
- In your Anthropic-compatible API, are there known limitations with:
  - tool use
  - thinking
  - long-running streamed responses
- Are there specific unsupported combinations, such as:
  - thinking + tools
  - large multi-turn tool-result context
  - streaming + tool calling under compatibility mode
- Do you recommend disabling any of the following for compatibility:
  - streaming
  - thinking
  - automatic tool choice
  - title-generation style side requests
## Evidence available

We have captured:

- exact request payload characteristics
- exact response status/headers
- cases where the stream starts but never completes
- cases where tool arguments are malformed and rejected

I can provide more sanitized payloads if needed.