feat: implement default model configuration system#1330
Conversation
- Add shared modelConfigDefaults with standardized settings
- Update configPresenter to use provider-driven model config
- Refactor LLM providers to use new model config API
- Update modelStore and ollamaStore for defaults support
- Add/update tests for new model config behavior
- Update ModelConfigDialog UI component

Co-Authored-By: DimCode <noreply@dimcode.dev>
📝 Walkthrough

This PR centralizes model configuration defaults by introducing a shared module with resolver functions and constants. It replaces hardcoded capability values (context length, max tokens, vision, function call) across presenter and renderer implementations with centralized defaults, and adds search-related fields to model configuration types.
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~40 minutes
🚥 Pre-merge checks: ✅ Passed checks (3 passed)
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (4)
src/main/presenter/llmProviderPresenter/providers/ollamaProvider.ts (2)
342-373: ⚠️ Potential issue | 🟡 Minor

Keep the error fallback aligned with the shared context default.

Line 347 now uses `DEFAULT_MODEL_CONTEXT_LENGTH`, but the catch path on Line 369 still hardcodes `4096`. The same model will report different context windows depending on whether `showModelInfo()` succeeds.

🩹 Proposed fix

```diff
 return {
   ...model,
   model_info: {
-    context_length: 4096,
+    context_length: DEFAULT_MODEL_CONTEXT_LENGTH,
     embedding_length: 512
   },
   capabilities: ['chat']
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/main/presenter/llmProviderPresenter/providers/ollamaProvider.ts` around lines 342 - 373, The catch block in attachModelInfo falls back to a hardcoded context_length of 4096 which is inconsistent with the success path that uses DEFAULT_MODEL_CONTEXT_LENGTH; update the fallback to use DEFAULT_MODEL_CONTEXT_LENGTH (and keep embedding_length and capabilities as-is) so showModelInfo success and error paths return consistent model_info values for the same model name.
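The intent of the fix above can be sketched in isolation. This is a minimal stand-in, not the real provider code: the constant's value and the `modelInfoOrDefault` helper are assumptions for illustration, and the actual `attachModelInfo()` in ollamaProvider.ts is async and richer.

```typescript
// Assumed value for illustration; the real constant lives in the shared
// modelConfigDefaults module.
const DEFAULT_MODEL_CONTEXT_LENGTH = 4096

interface ModelInfo {
  context_length: number
  embedding_length: number
}

// Simplified stand-in for the attachModelInfo() flow: success and error
// paths both derive context_length from the same shared default.
function modelInfoOrDefault(lookup: () => ModelInfo): ModelInfo {
  try {
    return lookup()
  } catch {
    // Fallback mirrors the success path's default instead of a literal 4096
    return { context_length: DEFAULT_MODEL_CONTEXT_LENGTH, embedding_length: 512 }
  }
}
```

With this shape, changing the shared constant updates both paths at once, which is the consistency the review is asking for.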
82-91: ⚠️ Potential issue | 🟠 Major

Use the metadata returned by `attachModelInfo()` here.

`listModels()` already enriches each entry with `model_info.context_length` and `capabilities`, but this map still emits a generic default context length and no explicit capability flags. Locally discovered Ollama models therefore lose provider-reported context, vision, and tool-calling info even though it was already fetched.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/main/presenter/llmProviderPresenter/providers/ollamaProvider.ts` around lines 82 - 91, The mapping in listModels() currently ignores metadata added by attachModelInfo(): update the map that builds each model object (the arrow mapping returning id/name/providerId/...) to use model.model_info.context_length (falling back to DEFAULT_MODEL_CONTEXT_LENGTH) for contextLength, pull maxTokens from model.model_info or keep DEFAULT_MODEL_MAX_TOKENS if absent, and propagate capability flags from model.capabilities (e.g., set isVision/supportsToolCalling or a capabilities field) so provider-reported vision/tool-calling/context info is preserved; reference the existing symbols model.model_info and model.capabilities in ollamaProvider.ts when applying these changes.

src/renderer/src/stores/modelStore.ts (2)
195-219: ⚠️ Potential issue | 🟠 Major

Preserve user-defined `enableSearch` when rehydrating renderer models.

This merge path re-applies `vision` and `functionCall`, but it never copies `config.enableSearch`. After saving a user-defined search setting, `refreshProviderModels()` will rebuild `allProviderModels`/`enabledModels` with the provider default instead of the saved one.

🩹 Proposed fix
```diff
 return {
   ...normalized,
   contextLength: resolveModelContextLength(
     config.contextLength ?? normalized.contextLength
   ),
   maxTokens: resolvedMaxTokens,
   vision: resolveModelVision(config.vision ?? normalized.vision),
   functionCall: resolveModelFunctionCall(config.functionCall ?? normalized.functionCall),
   reasoning: config.reasoning ?? normalized.reasoning ?? false,
+  enableSearch: config.enableSearch ?? normalized.enableSearch ?? false,
   type: config.type ?? normalized.type ?? ModelType.Chat
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/renderer/src/stores/modelStore.ts` around lines 195 - 219, When rehydrating a model from modelConfigStore.getModelConfig you need to preserve a user-defined enableSearch value; update the returned object built from normalized so it includes enableSearch: config.enableSearch ?? normalized.enableSearch (so refreshProviderModels()/allProviderModels/enabledModels use the saved setting), similar to how vision, functionCall, reasoning and type are resolved via resolveModelVision/resolveModelFunctionCall and the config fallbacks.
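The fallback chain in the proposed fix can be demonstrated on its own. The `SearchFields` shape and `mergeEnableSearch` helper below are hypothetical reductions of the store types; only the `??` chain itself comes from the suggested diff.

```typescript
// Hypothetical minimal shapes; the real store entries carry many more fields.
interface SearchFields {
  enableSearch?: boolean
}

// config is the user's saved model config, normalized is the freshly built
// provider entry. A saved user value (even an explicit false) must win over
// the provider default, with false as the final fallback.
function mergeEnableSearch(config: SearchFields, normalized: SearchFields): boolean {
  return config.enableSearch ?? normalized.enableSearch ?? false
}
```

Because `??` only falls through on `null`/`undefined`, an explicit user `false` is preserved rather than being overwritten by a provider default of `true`.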
336-352: ⚠️ Potential issue | 🟠 Major

Normalize the `llmP.getModelList()` fallback through the same resolver path.

Lines 347-352 switch the main merge flow to the shared resolvers, but the later `llmP.getModelList()` branch still hardcodes `4096`/`2048` and `functionCall: false`. Any provider that falls back to that branch bypasses the new default-model system and ends up with different capabilities than the rest of the store.

Also applies to: 395-413
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/renderer/src/stores/modelStore.ts` around lines 336 - 352, The fallback branch that builds a RENDERER_MODEL_META for llmP.getModelList() currently hardcodes context/maxTokens/functionCall (4096/2048/false) and thus bypasses the shared resolvers; update that fallback construction to call the same resolver helpers used in normalizeStoredModel — e.g., use resolveModelContextLength(...), resolveModelMaxTokens(...), resolveModelVision(...), and resolveModelFunctionCall(...) when deriving contextLength, maxTokens, vision and functionCall for models returned from llmP.getModelList(), and do the same fix in the other similar block (the second fallback that mirrors normalizeStoredModel) so all providers use the unified resolver path rather than hardcoded defaults.
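The unified resolver path the prompt describes can be sketched as follows. The resolver names match the review; the default values, the `RawModel` shape, and the `normalizeFallbackModel` helper are assumptions for illustration, not the actual store code.

```typescript
// Assumed default values for illustration; the real constants and resolvers
// live in src/shared/modelConfigDefaults.ts.
const DEFAULT_MODEL_CONTEXT_LENGTH = 4096
const DEFAULT_MODEL_MAX_TOKENS = 2048
const DEFAULT_MODEL_FUNCTION_CALL = false

const resolveModelContextLength = (value?: number | null): number =>
  value ?? DEFAULT_MODEL_CONTEXT_LENGTH
const resolveModelMaxTokens = (value?: number | null): number =>
  value ?? DEFAULT_MODEL_MAX_TOKENS
const resolveModelFunctionCall = (value?: boolean | null): boolean =>
  value ?? DEFAULT_MODEL_FUNCTION_CALL

// Hypothetical fallback entry shape from llmP.getModelList().
interface RawModel {
  id: string
  contextLength?: number
  maxTokens?: number
  functionCall?: boolean
}

// Running the fallback branch through the same resolvers keeps it
// consistent with the main merge flow instead of hardcoding 4096/2048/false.
function normalizeFallbackModel(raw: RawModel) {
  return {
    id: raw.id,
    contextLength: resolveModelContextLength(raw.contextLength),
    maxTokens: resolveModelMaxTokens(raw.maxTokens),
    functionCall: resolveModelFunctionCall(raw.functionCall)
  }
}
```

The point of the pattern is that the literals appear in exactly one module, so a provider that reaches the fallback branch can no longer diverge from the rest of the store.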
🧹 Nitpick comments (1)
src/shared/modelConfigDefaults.ts (1)
15-25: Consider adding `resolveModelReasoning` for completeness.

The module defines the `DEFAULT_MODEL_REASONING` constant but doesn't export a corresponding resolver function like the other capabilities. For consistency and potential future use, consider adding:

```ts
export const resolveModelReasoning = (value: boolean | undefined | null): boolean =>
  value ?? DEFAULT_MODEL_REASONING
```

This isn't blocking since providers currently use `Boolean(model.reasoning?.supported)` directly, but adding it would complete the resolver pattern.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/shared/modelConfigDefaults.ts` around lines 15 - 25, Add a resolver for the reasoning capability to mirror the existing pattern: implement and export resolveModelReasoning that accepts (value: boolean | undefined | null) and returns value ?? DEFAULT_MODEL_REASONING, similar to resolveModelVision and resolveModelFunctionCall; ensure the function is exported alongside the other resolvers and references the DEFAULT_MODEL_REASONING constant.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 89ec64e8-f464-4b4f-b032-ce4a25c862b5
📒 Files selected for processing (24)
- src/main/presenter/configPresenter/index.ts
- src/main/presenter/configPresenter/modelConfig.ts
- src/main/presenter/llmProviderPresenter/providers/dashscopeProvider.ts
- src/main/presenter/llmProviderPresenter/providers/doubaoProvider.ts
- src/main/presenter/llmProviderPresenter/providers/githubProvider.ts
- src/main/presenter/llmProviderPresenter/providers/minimaxProvider.ts
- src/main/presenter/llmProviderPresenter/providers/o3fanProvider.ts
- src/main/presenter/llmProviderPresenter/providers/ollamaProvider.ts
- src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
- src/main/presenter/llmProviderPresenter/providers/openAIResponsesProvider.ts
- src/main/presenter/llmProviderPresenter/providers/siliconcloudProvider.ts
- src/main/presenter/llmProviderPresenter/providers/togetherProvider.ts
- src/main/presenter/llmProviderPresenter/providers/voiceAIProvider.ts
- src/main/presenter/llmProviderPresenter/providers/zhipuProvider.ts
- src/renderer/src/components/settings/ModelConfigDialog.vue
- src/renderer/src/stores/modelStore.ts
- src/renderer/src/stores/ollamaStore.ts
- src/shared/modelConfigDefaults.ts
- src/shared/types/presenters/legacy.presenters.d.ts
- test/main/presenter/llmProviderPresenter/openAIResponsesProvider.test.ts
- test/main/presenter/modelConfig.test.ts
- test/main/presenter/providerDbModelConfig.test.ts
- test/renderer/components/ChatStatusBar.test.ts
- test/renderer/stores/modelStore.test.ts
```ts
// Search-related parameters
enableSearch?: boolean
forcedSearch?: boolean
searchStrategy?: 'turbo' | 'balanced' | 'precise'
```
Unify the searchStrategy union across the presenter types.
`ModelConfig` now allows `'balanced' | 'precise'`, but the same file still exposes `'turbo' | 'max'` in `IConfigPresenter.getSearchDefaults()` (Line 501) and `DefaultModelSetting.searchStrategy` (Line 1788). That makes the new values impossible to represent consistently through the defaults/config APIs and risks lossy round-trips.
Proposed fix

```diff
+export type SearchStrategy = 'turbo' | 'balanced' | 'precise'
+
 export interface ModelConfig {
   maxTokens: number
   contextLength: number
@@
   // Search-related parameters
   enableSearch?: boolean
   forcedSearch?: boolean
-  searchStrategy?: 'turbo' | 'balanced' | 'precise'
+  searchStrategy?: SearchStrategy
 }
@@
   getSearchDefaults?(
     providerId: string,
     modelId: string
-  ): { default?: boolean; forced?: boolean; strategy?: 'turbo' | 'max' }
+  ): { default?: boolean; forced?: boolean; strategy?: SearchStrategy }
@@
 export interface DefaultModelSetting {
@@
   enableSearch?: boolean
   forcedSearch?: boolean
-  searchStrategy?: 'turbo' | 'max'
+  searchStrategy?: SearchStrategy
```

If the storage layer still only supports legacy values, it should be updated in lockstep or explicitly mapped at the boundary instead of widening only `ModelConfig`.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/shared/types/presenters/legacy.presenters.d.ts` around lines 171 - 174,
The searchStrategy union is inconsistent between ModelConfig and
presenter/defaults; update IConfigPresenter.getSearchDefaults(),
DefaultModelSetting.searchStrategy and any other presenter/type declarations
(reference symbols: searchStrategy, IConfigPresenter.getSearchDefaults,
DefaultModelSetting, ModelConfig) to use a single unified union ('balanced' |
'precise') or, if the storage layer still requires legacy values, implement an
explicit mapping at the presenter boundary that converts between legacy
('turbo'|'max') and new ('balanced'|'precise') values so defaults, config APIs
and storage remain consistent and round-trip lossless.
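If the storage layer keeps the legacy values, the explicit boundary mapping mentioned above could be sketched like this. The `turbo`↔`balanced` and `max`↔`precise` pairing is an assumption for illustration (the PR does not specify one); any bijective pairing keeps round-trips lossless.

```typescript
// Assumed pairing for illustration: turbo↔balanced, max↔precise.
type LegacySearchStrategy = 'turbo' | 'max'
type SearchStrategy = 'balanced' | 'precise'

const toNew: Record<LegacySearchStrategy, SearchStrategy> = {
  turbo: 'balanced',
  max: 'precise'
}
const toLegacy: Record<SearchStrategy, LegacySearchStrategy> = {
  balanced: 'turbo',
  precise: 'max'
}

// Presenter boundary: storage keeps legacy values, config APIs see new ones.
function fromStorage(value: LegacySearchStrategy): SearchStrategy {
  return toNew[value]
}
function toStorage(value: SearchStrategy): LegacySearchStrategy {
  return toLegacy[value]
}
```

Because both maps are total and inverse to each other, defaults read from storage and settings written back through the config API survive unchanged, which is the round-trip guarantee the review asks for.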