fix(azure): add azure-anthropic support to router, evaluator, copilot, and tokenization #3157
waleedlatif1 wants to merge 3 commits into staging from
Conversation
…, and tokenization
…ix/azure # Conflicts: # apps/sim/providers/azure-openai/index.ts
Greptile Summary
Confidence Score: 2/5
Important Files Changed
Sequence Diagram

```mermaid
sequenceDiagram
  autonumber
  participant UI as Block UI (Router/Evaluator)
  participant EX as Router/Evaluator Handler
  participant API as /api/providers
  participant AZ as azure-openai provider
  participant AOAI as Azure OpenAI SDK
  participant TOOLS as Tool Registry
  UI->>EX: Execute block with inputs (model, apiKey, azureEndpoint, azureApiVersion?)
  EX->>EX: providerId = getProviderFromModel(model)
  alt providerId == azure-openai
    EX->>API: POST providerRequest + azureEndpoint + azureApiVersion
  else providerId == azure-anthropic
    EX->>API: POST providerRequest + azureEndpoint
  else other provider
    EX->>API: POST providerRequest
  end
  API->>AZ: dispatch executeRequest(request)
  AZ->>AZ: azureEndpoint/apiVersion from request or env
  AZ->>AOAI: new AzureOpenAI({endpoint, apiVersion, apiKey})
  alt apiVersion startsWith 2025-
    AZ->>AOAI: responses.create(payload)
    loop tool iterations
      AOAI-->>AZ: response output (function_call items)
      AZ->>TOOLS: executeTool(name, args)
      TOOLS-->>AZ: tool result
      AZ->>AOAI: responses.create(nextPayload with function_call_output)
    end
    alt stream
      AZ->>AOAI: responses.create(stream:true)
      AOAI-->>AZ: streaming deltas
    end
  else 2024- or earlier
    AZ->>AOAI: chat.completions.create(payload)
    loop tool iterations
      AOAI-->>AZ: tool_calls
      AZ->>TOOLS: executeTool(name, args)
      TOOLS-->>AZ: tool result
      AZ->>AOAI: chat.completions.create(nextPayload)
    end
    alt stream
      AZ->>AOAI: chat.completions.create(stream:true)
      AOAI-->>AZ: streaming deltas
    end
  end
  AZ-->>API: ProviderResponse or StreamingExecution
  API-->>EX: result
  EX-->>UI: block output
```
```typescript
const checkForForcedToolUsage = (
  response: any,
  toolChoice: string | { type: string; function?: { name: string }; name?: string; any?: any }
) => {
```
Undeclared variable usage
checkForForcedToolUsage assigns to hasUsedForcedTool, but hasUsedForcedTool is declared later (let hasUsedForcedTool = false) inside the same try block. This will throw a ReferenceError: Cannot access 'hasUsedForcedTool' before initialization on the first call to checkForForcedToolUsage(...), breaking Azure OpenAI tool-call flows.
Fix by declaring hasUsedForcedTool (and any other captured vars) before defining/using checkForForcedToolUsage (or have the helper return the updated state instead of mutating an outer binding).
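A minimal sketch of the first suggested fix: hoist the declarations above the helper so the closure never touches a `let` binding inside its temporal dead zone. The helper's real signature is richer; the `response`/`toolChoice` shapes here are simplified assumptions for illustration.

```typescript
// Declare the captured state BEFORE defining the closure that mutates it;
// `let hasUsedForcedTool` declared after the helper would throw
// "ReferenceError: Cannot access 'hasUsedForcedTool' before initialization".
let hasUsedForcedTool = false
const usedForcedTools: string[] = []

// Simplified stand-in for the PR's checkForForcedToolUsage helper.
const checkForForcedToolUsage = (
  response: { toolName?: string },
  toolChoice: string | { function?: { name: string } }
) => {
  const forcedName = typeof toolChoice === 'object' ? toolChoice.function?.name : undefined
  if (forcedName && response.toolName === forcedName) {
    hasUsedForcedTool = true
    usedForcedTools.push(forcedName)
  }
}

checkForForcedToolUsage({ toolName: 'search' }, { function: { name: 'search' } })
```

The alternative mentioned in the comment, returning the updated state instead of mutating an outer binding, avoids the ordering hazard entirely.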
Cursor Bugbot has reviewed your changes and found 4 potential issues.
Bugbot Autofix is OFF. To automatically fix reported issues with Cloud Agents, enable Autofix in the Cursor dashboard.
```typescript
prompt: currentResponse.usage?.input_tokens || 0,
completion: currentResponse.usage?.output_tokens || 0,
total:
  (currentResponse.usage?.input_tokens || 0) + (currentResponse.usage?.output_tokens || 0),
```
Token fields use wrong property names causing cost calculation to fail
High Severity
The tokens object uses prompt and completion as property names, but the ProviderResponse type and its consumers expect input and output. The cost calculation in executeProviderRequest destructures { input: promptTokens, output: completionTokens } from response.tokens, so the new property names result in both values defaulting to 0, causing incorrect cost calculations.
Additional Locations (1)
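The fix is a rename at the mapping site. A minimal sketch, assuming a usage object shaped like the Responses API's `input_tokens`/`output_tokens` fields (`toProviderTokens` is a hypothetical helper, not code from the PR):

```typescript
// Usage shape as returned by the Azure Responses API (simplified).
interface Usage {
  input_tokens?: number
  output_tokens?: number
}

// Consumers of ProviderResponse destructure { input, output } from
// response.tokens, so the mapped object must use those keys,
// not `prompt`/`completion`.
const toProviderTokens = (usage?: Usage) => ({
  input: usage?.input_tokens ?? 0,
  output: usage?.output_tokens ?? 0,
  total: (usage?.input_tokens ?? 0) + (usage?.output_tokens ?? 0),
})
```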
```typescript
const toolCalls: any[] = []
const toolResults: any[] = []
let iterationCount = 0
const MAX_ITERATIONS = 10
```
MAX_ITERATIONS reduced from project standard of 20 to 10
Medium Severity
The local MAX_ITERATIONS = 10 constant is inconsistent with the project-wide MAX_TOOL_ITERATIONS = 20 used by other providers like Anthropic and Bedrock. This could prematurely cut off tool call iterations for Azure OpenAI requests, causing unexpected behavior when workflows require more than 10 iterations.
Additional Locations (1)
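A minimal sketch of the iteration guard using the project-wide value the review cites (the constant name `MAX_TOOL_ITERATIONS = 20` comes from the review; `runToolLoop` is a hypothetical stand-in for the provider's tool loop):

```typescript
// Shared cap, matching the value other providers (Anthropic, Bedrock) use.
const MAX_TOOL_ITERATIONS = 20

// Simplified tool loop: keep calling the model while it returns tool calls,
// stopping at the shared iteration cap.
const runToolLoop = (respond: (i: number) => { hasToolCalls: boolean }) => {
  let iterationCount = 0
  while (iterationCount < MAX_TOOL_ITERATIONS) {
    const res = respond(iterationCount)
    iterationCount++
    if (!res.hasToolCalls) break
  }
  return iterationCount
}
```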
```typescript
role: 'tool',
tool_call_id: toolCall.id,
content: JSON.stringify(resultContent),
})
```
Tool handling creates incorrect message structure for multi-tool responses
High Severity
When multiple tool calls are returned in a single response, the code creates a separate assistant message for each tool call inside the loop instead of one assistant message containing all tool calls. The OpenAI Chat Completions API expects a single assistant message with all tool_calls followed by individual tool result messages. This incorrect message structure will likely cause API errors or incorrect behavior when multiple tools are called simultaneously.
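The expected message shape can be sketched as follows: one assistant message carrying the full `tool_calls` array, then one `tool` message per result, linked by `tool_call_id`. Types are simplified and `buildToolMessages` is a hypothetical helper, not code from the PR:

```typescript
// Simplified Chat Completions tool-call shape.
interface ToolCall {
  id: string
  function: { name: string; arguments: string }
}

const buildToolMessages = (
  toolCalls: ToolCall[],
  resultsById: Record<string, unknown>
) => {
  const messages: any[] = []
  // ONE assistant message containing ALL tool calls from this response,
  // not one assistant message per call.
  messages.push({ role: 'assistant', content: null, tool_calls: toolCalls })
  // Then an individual tool-result message for each call.
  for (const call of toolCalls) {
    messages.push({
      role: 'tool',
      tool_call_id: call.id,
      content: JSON.stringify(resultsById[call.id] ?? null),
    })
  }
  return messages
}
```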
```typescript
const remainingTools = forcedTools.filter((tool) => !usedForcedTools.includes(tool))

if (remainingTools.length > 0) {
  nextPayload.tool_choice = {
```
Responses API path missing forced tool tracking for iterations
Medium Severity
The Responses API path (2025+ API versions) always sets tool_choice: 'auto' for subsequent requests, completely ignoring any remaining forced tools. Unlike the Chat Completions path which tracks usedForcedTools and hasUsedForcedTool and properly enforces remaining forced tools on subsequent iterations (lines 943-956), the Responses API path has no such tracking. Users configuring multiple tools with usageControl: 'force' will only have the initial tool_choice enforced; subsequent iterations will not force the remaining tools.
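One way to carry the same tracking into the Responses API path can be sketched as below. `nextToolChoice` is a hypothetical helper, and the forced `tool_choice` object shape is an assumption for illustration:

```typescript
// Pick the tool_choice for the next iteration: force the first tool that
// has not fired yet, and fall back to 'auto' once all forced tools are used.
const nextToolChoice = (
  forcedTools: string[],
  usedForcedTools: string[]
): 'auto' | { type: 'function'; name: string } => {
  const remaining = forcedTools.filter((t) => !usedForcedTools.includes(t))
  return remaining.length > 0 ? { type: 'function', name: remaining[0] } : 'auto'
}
```

This mirrors the `remainingTools` logic the Chat Completions path already has, instead of hard-coding `tool_choice: 'auto'` on every follow-up request.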


Summary
- getProviderCredentialSubBlocks() so router and evaluator blocks show the Azure endpoint field
- azureEndpoint in router and evaluator handlers for the azure-anthropic provider

Type of Change
Testing
Tested manually
Checklist