OpenCode supports 21+ LLM providers through the Vercel AI SDK. Providers are loaded from a combination of bundled SDK packages, custom initialization logic, and the models.dev model catalog API.
```
 opencode.json        models.dev API        Environment
 (user config)        (model catalog)        Variables
       |                     |                    |
       +---------------------+----------+---------+
                  |                     |
            Config.load()       ModelsDev.refresh()
                  |                     |
                  +----------+----------+
                             |
                  Provider.initProvider()
                             |
             +---------------+---------------+
             |               |               |
         BUNDLED_         CUSTOM_          Plugin
         PROVIDERS        LOADERS           Auth
             |               |               |
             +-------+-------+-------+-------+
                     |               |
               SDK Instance    Provider.Info
              (createOpenAI,      (models,
              createAnthropic,    options,
                   etc.)          variants)
                     |
              LanguageModelV2
                     |
              Session.prompt()
```
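To ground the bottom half of the diagram, here is a minimal sketch of how a bundled SDK instance becomes a `LanguageModelV2` that `Session.prompt()` can drive through the AI SDK. The model ID and key wiring are illustrative, not taken from the loader code:

```ts
// Illustrative only: the model ID and env-var wiring are examples.
import { createAnthropic } from "@ai-sdk/anthropic"
import { generateText } from "ai"

// BUNDLED_PROVIDERS supplies the SDK constructor; env/auth supplies the key.
const anthropic = createAnthropic({ apiKey: process.env.ANTHROPIC_API_KEY })

// Provider.Info supplies the model ID; the result is a LanguageModelV2.
const model = anthropic("claude-sonnet-4-20250514")

// Session.prompt() ultimately drives calls like this one through the AI SDK.
const { text } = await generateText({ model, prompt: "Summarize this diff." })
console.log(text)
```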
The following SDK packages are bundled; they are imported at build time and available without runtime installation (a sketch of how the environment-variable column can gate availability follows the table):
| Provider | Package | Environment Variable |
|---|---|---|
| Amazon Bedrock | `@ai-sdk/amazon-bedrock` | `AWS_ACCESS_KEY_ID` + `AWS_SECRET_ACCESS_KEY` or `AWS_BEARER_TOKEN` |
| Anthropic | `@ai-sdk/anthropic` | `ANTHROPIC_API_KEY` |
| Azure OpenAI | `@ai-sdk/azure` | `AZURE_OPENAI_API_KEY` + `AZURE_RESOURCE_NAME` |
| Cerebras | `@ai-sdk/cerebras` | `CEREBRAS_API_KEY` |
| Cohere | `@ai-sdk/cohere` | `COHERE_API_KEY` |
| DeepInfra | `@ai-sdk/deepinfra` | `DEEPINFRA_API_KEY` |
| Gateway | `@ai-sdk/gateway` | `GATEWAY_API_KEY` |
| GitHub Copilot | Custom (`./sdk/copilot`) | Copilot OAuth token |
| GitLab Duo | `@gitlab/gitlab-ai-provider` | `GITLAB_API_TOKEN` or OAuth |
| Google AI | `@ai-sdk/google` | `GOOGLE_GENERATIVE_AI_API_KEY` |
| Google Vertex | `@ai-sdk/google-vertex` | `GOOGLE_CLOUD_PROJECT` |
| Google Vertex (Anthropic) | `@ai-sdk/google-vertex/anthropic` | `GOOGLE_CLOUD_PROJECT` |
| Groq | `@ai-sdk/groq` | `GROQ_API_KEY` |
| Mistral | `@ai-sdk/mistral` | `MISTRAL_API_KEY` |
| OpenAI | `@ai-sdk/openai` | `OPENAI_API_KEY` |
| OpenAI Compatible | `@ai-sdk/openai-compatible` | Provider-specific |
| OpenRouter | `@openrouter/ai-sdk-provider` | `OPENROUTER_API_KEY` |
| Perplexity | `@ai-sdk/perplexity` | `PERPLEXITY_API_KEY` |
| Together AI | `@ai-sdk/togetherai` | `TOGETHER_AI_API_KEY` |
| Vercel | `@ai-sdk/vercel` | `VERCEL_API_KEY` |
| XAI (Grok) | `@ai-sdk/xai` | `XAI_API_KEY` |
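A minimal sketch of how the environment-variable column can gate whether a bundled provider is treated as available. The mapping and helper names below are illustrative, not the actual loader code:

```ts
// Illustrative: each inner array is one alternative credential set; every
// variable in the chosen alternative must be present.
const ENV_KEYS: Record<string, string[][]> = {
  anthropic: [["ANTHROPIC_API_KEY"]],
  "azure-openai": [["AZURE_OPENAI_API_KEY", "AZURE_RESOURCE_NAME"]],
  "amazon-bedrock": [
    ["AWS_ACCESS_KEY_ID", "AWS_SECRET_ACCESS_KEY"],
    ["AWS_BEARER_TOKEN"],
  ],
}

export function hasCredentials(providerID: string): boolean {
  const alternatives = ENV_KEYS[providerID] ?? []
  return alternatives.some((vars) => vars.every((v) => !!process.env[v]))
}
```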
The following providers have custom initialization logic beyond the standard SDK constructor, defined in `CUSTOM_LOADERS` in `src/provider/provider.ts` (a sketch of the loader shape follows this list):

- **Anthropic**
  - Custom headers: `claude-code-20250219`, `interleaved-thinking-2025-05-14`, `fine-grained-tool-streaming-2025-05-14`
  - Autoload: false (requires API key)
- **OpenAI**
  - Custom model routing: uses `sdk.responses(modelID)` for the OpenAI Responses API
  - Autoload: false
- **GitHub Copilot**
  - Custom model routing: routes to `responses()` or `chat()` based on model version (GPT-5+ uses the Responses API)
  - Custom SDK: built-in `./sdk/copilot` implementation
  - Auth: Copilot OAuth token flow
  - Autoload: false
- **Azure OpenAI**
  - Custom model routing: routes to `responses()` or `chat()`
  - Custom vars: resolves `AZURE_RESOURCE_NAME` from config or environment
  - Autoload: false
- **Amazon Bedrock**
  - Complex credential handling: bearer token > credential chain > profiles > IAM roles > web identity tokens
  - Region-specific model prefixes: `us.`, `eu.`, `jp.`, `au.`, `global.`, `apac.` based on region and model
  - Autoload: true (if AWS credentials are configured)
- **Google Vertex**
  - Custom authentication: Google Cloud Auth library for service account credentials
  - Custom vars: resolves `GOOGLE_CLOUD_PROJECT`, `GOOGLE_VERTEX_LOCATION`
  - Autoload: true (if a project is configured)
- **GitLab Duo**
  - Complex auth: OAuth or API token, instance URL resolution
  - Feature flags: `duo_agent_platform_agentic_chat`, `duo_agent_platform`
  - Custom headers: User-Agent, `anthropic-beta` for extended context
  - Autoload: true (if API key available)
- **Cloudflare**
  - Custom auth: from environment or Auth storage
  - Custom vars: resolves `CLOUDFLARE_ACCOUNT_ID`
  - Autoload: conditional
- **AI Gateway (`ai-gateway-provider`)**
  - Dynamic import: the `ai-gateway-provider` package is loaded at runtime, not bundled
  - Features: metadata, cache settings (TTL, key, skip), unified API format
  - Autoload: true
- **Service-key provider**
  - Service key auth: from environment or Auth storage
  - Autoload: conditional
- **OpenRouter**
  - Custom headers: `HTTP-Referer`, `X-Title`, or integration identifiers
  - Autoload: false
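A rough sketch of the shape a `CUSTOM_LOADERS` entry might take, inferred from the behaviors above. The type and field names are illustrative; `src/provider/provider.ts` remains the source of truth:

```ts
// Illustrative shape only; see src/provider/provider.ts for the real types.
import type { LanguageModelV2 } from "@ai-sdk/provider"

interface CustomLoader {
  // e.g. "true if AWS credentials are configured"
  autoload: boolean | (() => Promise<boolean>)
  // resolved vars, headers, and credentials merged into the SDK constructor options
  options?: () => Promise<Record<string, unknown>>
  // custom model routing, e.g. responses() vs chat()
  getModel?: (sdk: unknown, modelID: string) => LanguageModelV2
}

// Hypothetical Anthropic-style entry: inject beta headers, require an API key.
export const anthropicLoader: CustomLoader = {
  autoload: false,
  options: async () => ({
    headers: {
      "anthropic-beta":
        "claude-code-20250219,interleaved-thinking-2025-05-14,fine-grained-tool-streaming-2025-05-14",
    },
  }),
}
```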
Models are fetched from the models.dev API (a fetch-with-fallback sketch follows this list):

- Endpoint: `${OPENCODE_MODELS_URL}/api.json` (default: `https://models.dev/api.json`)
- Fallback: bundled snapshot at `./models-snapshot` if the fetch fails
- Cache: stored at `~/.opencode/cache/models.json`
- Refresh: on startup plus an hourly interval
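A minimal sketch of the fetch-with-fallback behavior, assuming the snapshot can be imported as JSON. The snapshot path and error handling are simplified placeholders:

```ts
// Simplified sketch; the snapshot path and cache handling are placeholders.
import snapshot from "./models-snapshot/api.json"

const MODELS_URL = process.env.OPENCODE_MODELS_URL ?? "https://models.dev"

export async function loadCatalog(): Promise<unknown> {
  try {
    const res = await fetch(`${MODELS_URL}/api.json`)
    if (!res.ok) throw new Error(`models.dev returned ${res.status}`)
    return await res.json()
  } catch {
    // Network failure or bad response: fall back to the bundled snapshot.
    return snapshot
  }
}
```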
Each model from models.dev includes (a TypeScript sketch of this shape follows the list):

- Identity: `id`, `name`, `family`, `release_date`
- Capabilities: `temperature`, `reasoning`, `tool_call`, `attachment`, `interleaved`
- Cost: `input`, `output`, `cache` (per million tokens)
- Limits: `context`, `input`, `output` token counts
- Modalities: input/output support for `text`, `audio`, `image`, `video`, `pdf`
- Status: `alpha`, `beta`, `deprecated`, or active
- Provider info: `npm` package, `api` endpoint
- Variants: named configuration presets (e.g., `"fast"`, `"extended-thinking"`)
Provider initialization performs the following steps (a merge/filter sketch follows the list):

- Fetch models from the models.dev API (or the fallback snapshot)
- Apply `CUSTOM_LOADERS` transformations (custom auth, headers, model routing)
- Merge with user config overrides (`opencode.json` provider settings)
- Filter by enabled/disabled provider lists
- Apply the per-provider model blacklist/whitelist
- Apply variant transformations per model
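An illustrative sketch of the merge and filter steps; the function and parameter names are placeholders rather than the actual `provider.ts` implementation:

```ts
// Placeholder names; not the actual provider.ts implementation.
type ProviderEntry = { models: Record<string, unknown> }

export function resolveProviders(
  catalog: Record<string, ProviderEntry>,
  userConfig: Record<string, { models?: Record<string, unknown>; disabled?: boolean }>,
  disabledProviders: string[],
): Record<string, ProviderEntry> {
  const result: Record<string, ProviderEntry> = {}
  for (const [id, info] of Object.entries(catalog)) {
    if (disabledProviders.includes(id)) continue // provider-level filter
    const overrides = userConfig[id]
    if (overrides?.disabled) continue
    result[id] = {
      ...info,
      // User config entries win over the catalog, model by model.
      models: { ...info.models, ...(overrides?.models ?? {}) },
    }
  }
  return result
}
```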
`src/provider/transform.ts` handles message and schema transformations (an options-nesting sketch follows the table):

| Transform | Purpose |
|---|---|
| `normalizeMessages()` | Converts messages to the provider-expected format (reasoning part extraction, content structure) |
| `schema()` | Converts Zod JSON Schema to a provider-compatible format (Gemini sanitization, strict mode) |
| `options()` | Builds provider-specific options (store, reasoning config, cache keys, prompt caching) |
| `variants()` | Maps variant names to provider option overrides (e.g., "extended-thinking" → reasoning budget) |
| `providerOptions()` | Restructures flat options into nested provider option namespaces |
| `smallOptions()` | Builds minimal options for small/fast model calls (evaluator, title generation) |
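A hedged sketch of what `variants()` plus `providerOptions()` conceptually produce for a hypothetical "extended-thinking" variant on an Anthropic model. The option keys mirror the AI SDK's `providerOptions` shape rather than `transform.ts` itself:

```ts
// Conceptual sketch: map a variant name to option overrides, then nest them
// under the provider's namespace as the AI SDK expects.
const variantOverrides: Record<string, Record<string, unknown>> = {
  "extended-thinking": { thinking: { type: "enabled", budgetTokens: 16_000 } },
}

export function providerOptionsFor(providerID: string, variant?: string) {
  const flat = variant ? (variantOverrides[variant] ?? {}) : {}
  // e.g. generateText({ ..., providerOptions: { anthropic: { thinking: {...} } } })
  return { [providerID]: flat }
}
```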
Users can add providers via `opencode.json`:

```json
{
  "provider": {
    "my-provider": {
      "npm": "@ai-sdk/openai-compatible",
      "api": {
        "url": "https://api.my-provider.com/v1"
      },
      "env": ["MY_PROVIDER_API_KEY"],
      "models": {
        "my-model": {
          "name": "My Model",
          "attachment": true,
          "tool_call": true
        }
      }
    }
  }
}
```

Alternatively, providers can register through the plugin system (`plugin.auth.loader`).
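As a rough illustration (not the actual loader code), the `opencode.json` example above resolves at runtime to something like the following, using the declared `npm` package, `api.url`, and `env` key:

```ts
// Rough illustration of what the configuration above maps onto at runtime.
import { createOpenAICompatible } from "@ai-sdk/openai-compatible"

const myProvider = createOpenAICompatible({
  name: "my-provider", // provider key from opencode.json
  baseURL: "https://api.my-provider.com/v1", // api.url from the config
  apiKey: process.env.MY_PROVIDER_API_KEY, // env entry from the config
})

// "my-model" from the config's models map becomes a LanguageModelV2 instance.
export const model = myProvider("my-model")
```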
The provider system is identical to upstream OpenCode — all 21+ providers, custom loaders, models.dev integration, and transform pipeline are upstream-compatible.
Frankencode adds features at the session/tool layer that work with all providers:
- Verify tool (`src/tool/verify.ts`) — runs test/lint/typecheck with a circuit breaker; uses the session's current provider
- Refine tool (`src/tool/refine.ts`) — evaluator-optimizer loop; spawns child sessions on the same provider
- Evaluator/optimizer agents — use the session's model for code-review scoring
See FRANKENCODE.md for the complete list.
Related documentation:

- EFFECTIFICATION.md — Effect architecture (ProviderAuthService, ConfigService are Effect services)
- agents.md — agents that use providers (evaluator, optimizer inherit session model)
- AGENT_CLIENT_PROTOCOL.md — model selection via ACP NewSessionRequest
- FRANKENCODE.md — all Frankencode vs OpenCode differences