
feat: implement default model configuration system#1330

Merged
zerob13 merged 1 commit into dev from refactor/default-model-config
Mar 6, 2026

Conversation


@zerob13 zerob13 commented Mar 6, 2026

  • Add shared modelConfigDefaults with standardized settings
  • Update configPresenter to use provider-driven model config
  • Refactor LLM providers to use new model config API
  • Update modelStore and ollamaStore for defaults support
  • Add/update tests for new model config behavior
  • Update ModelConfigDialog UI component

Summary by CodeRabbit

  • New Features

    • Added search configuration options (strategy: turbo, balanced, or precise) to model settings.
  • Improvements

    • Unified model capability defaults across all providers for consistent behavior.
    • Updated default model context length and token limits to improve consistency.
    • Enhanced model configuration to enable function call capabilities by default.


Co-Authored-By: DimCode <noreply@dimcode.dev>

coderabbitai bot commented Mar 6, 2026

📝 Walkthrough


This PR centralizes model configuration defaults by introducing a shared module with resolver functions and constants. It replaces hardcoded capability values (context length, max tokens, vision, function call) across presenter and renderer implementations with centralized defaults, and adds search-related fields to model configuration types.

Changes

  • Centralized Defaults Module (src/shared/modelConfigDefaults.ts): New module exporting DEFAULT_MODEL_* constants, the DEFAULT_MODEL_CAPABILITY_FALLBACKS object, and resolver functions (resolveModelContextLength, resolveModelMaxTokens, resolveModelVision, resolveModelFunctionCall) for computing model capabilities with fallbacks.
  • Config Presenter Layer (src/main/presenter/configPresenter/index.ts, src/main/presenter/configPresenter/modelConfig.ts): Replaced hardcoded capability literals with resolver-based computations; added enableSearch, forcedSearch, and searchStrategy fields to config objects; updated default config construction to spread DEFAULT_MODEL_CAPABILITY_FALLBACKS.
  • Provider-Specific Implementations (src/main/presenter/llmProviderPresenter/providers/: dashscopeProvider.ts, doubaoProvider.ts, githubProvider.ts, minimaxProvider.ts, o3fanProvider.ts, ollamaProvider.ts, openAICompatibleProvider.ts, openAIResponsesProvider.ts, siliconcloudProvider.ts, togetherProvider.ts, voiceAIProvider.ts, zhipuProvider.ts): Replaced hardcoded contextLength, maxTokens, vision, and functionCall values with resolver functions or centralized constants; maintains existing model metadata construction logic without behavioral changes beyond value substitutions.
  • Type Definitions (src/shared/types/presenters/legacy.presenters.d.ts): Added three optional search-related properties to the ModelConfig interface: enableSearch, forcedSearch, and searchStrategy ('turbo' | 'balanced' | 'precise').
  • Renderer Layer (src/renderer/src/components/settings/ModelConfigDialog.vue, src/renderer/src/stores/modelStore.ts, src/renderer/src/stores/ollamaStore.ts): Updated default model metadata construction and normalization to use resolver functions and centralized constants; applied consistent capability defaults across all model normalization paths, user-defined config application, and merging operations.
  • Tests (test/main/presenter/llmProviderPresenter/openAIResponsesProvider.test.ts, test/main/presenter/modelConfig.test.ts, test/main/presenter/providerDbModelConfig.test.ts, test/renderer/components/ChatStatusBar.test.ts, test/renderer/stores/modelStore.test.ts): Updated test expectations to reflect new unified default values (contextLength: 16000, maxTokens: 4096); added test cases verifying fallback behavior for sparse model metadata and resolver-based capability defaults.
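Based on the walkthrough above, the shared defaults module likely looks roughly like this. This is a hypothetical reconstruction, not the actual source: the numeric defaults are inferred from the updated test expectations (16000 / 4096), and the vision/function-call defaults from the summary bullets.

```typescript
// Sketch of src/shared/modelConfigDefaults.ts (assumed shape, not the real file).
export const DEFAULT_MODEL_CONTEXT_LENGTH = 16000
export const DEFAULT_MODEL_MAX_TOKENS = 4096
export const DEFAULT_MODEL_VISION = false
// Function-call capability is enabled by default per the PR summary.
export const DEFAULT_MODEL_FUNCTION_CALL = true

// Spread into freshly constructed default configs.
export const DEFAULT_MODEL_CAPABILITY_FALLBACKS = {
  contextLength: DEFAULT_MODEL_CONTEXT_LENGTH,
  maxTokens: DEFAULT_MODEL_MAX_TOKENS,
  vision: DEFAULT_MODEL_VISION,
  functionCall: DEFAULT_MODEL_FUNCTION_CALL
} as const

// Resolvers fall back to the shared default when a value is missing or invalid.
export const resolveModelContextLength = (value?: number | null): number =>
  value && value > 0 ? value : DEFAULT_MODEL_CONTEXT_LENGTH

export const resolveModelMaxTokens = (value?: number | null): number =>
  value && value > 0 ? value : DEFAULT_MODEL_MAX_TOKENS

export const resolveModelVision = (value?: boolean | null): boolean =>
  value ?? DEFAULT_MODEL_VISION

export const resolveModelFunctionCall = (value?: boolean | null): boolean =>
  value ?? DEFAULT_MODEL_FUNCTION_CALL
```

Callers then write `resolveModelContextLength(model.contextLength)` instead of `model.contextLength ?? 4096`, so every provider and store shares one source of truth for the fallback values.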

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~40 minutes

Possibly related PRs

  • #967: Adds reasoning and search defaults propagation to provider database, aligning with the search-related fields being added to ModelConfig.
  • #851: Implements the same web-search configuration fields (enableSearch, forcedSearch, searchStrategy) across model configuration and provider layers.
  • #973: Refactors model configuration to be capability-driven with centralized resolver logic, mirroring the same centralization approach in this PR.

Suggested labels

codex

Poem

🐰 With paws we hop and gather round,
The scattered defaults homeward bound,
Resolvers bright and constants clear,
Default havens far and near,
One shared place for all to know,
Where capability values grow! 🌱

🚥 Pre-merge checks | ✅ 3 passed
  • Description Check — ✅ Passed: Check skipped; CodeRabbit's high-level summary is enabled.
  • Title Check — ✅ Passed: The title 'feat: implement default model configuration system' clearly and concisely describes the main objective of the changeset: implementing a centralized default model configuration system.
  • Docstring Coverage — ✅ Passed: No functions found in the changed files to evaluate docstring coverage; check skipped.



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (4)
src/main/presenter/llmProviderPresenter/providers/ollamaProvider.ts (2)

342-373: ⚠️ Potential issue | 🟡 Minor

Keep the error fallback aligned with the shared context default.

Line 347 now uses DEFAULT_MODEL_CONTEXT_LENGTH, but the catch path on Line 369 still hardcodes 4096. The same model will report different context windows depending on whether showModelInfo() succeeds.

🩹 Proposed fix
       return {
         ...model,
         model_info: {
-          context_length: 4096,
+          context_length: DEFAULT_MODEL_CONTEXT_LENGTH,
           embedding_length: 512
         },
         capabilities: ['chat']
       }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/main/presenter/llmProviderPresenter/providers/ollamaProvider.ts` around
lines 342 - 373, The catch block in attachModelInfo falls back to a hardcoded
context_length of 4096 which is inconsistent with the success path that uses
DEFAULT_MODEL_CONTEXT_LENGTH; update the fallback to use
DEFAULT_MODEL_CONTEXT_LENGTH (and keep embedding_length and capabilities as-is)
so showModelInfo success and error paths return consistent model_info values for
the same model name.

82-91: ⚠️ Potential issue | 🟠 Major

Use the metadata returned by attachModelInfo() here.

listModels() already enriches each entry with model_info.context_length and capabilities, but this map still emits a generic default context length and no explicit capability flags. Locally discovered Ollama models therefore lose provider-reported context, vision, and tool-calling info even though it was already fetched.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/main/presenter/llmProviderPresenter/providers/ollamaProvider.ts` around
lines 82 - 91, The mapping in listModels() currently ignores metadata added by
attachModelInfo(): update the map that builds each model object (the arrow
mapping returning id/name/providerId/...) to use model.model_info.context_length
(falling back to DEFAULT_MODEL_CONTEXT_LENGTH) for contextLength, pull maxTokens
from model.model_info or keep DEFAULT_MODEL_MAX_TOKENS if absent, and propagate
capability flags from model.capabilities (e.g., set isVision/supportsToolCalling
or a capabilities field) so provider-reported vision/tool-calling/context info
is preserved; reference the existing symbols model.model_info and
model.capabilities in ollamaProvider.ts when applying these changes.
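The fix described above amounts to mapping through the already-fetched metadata. The sketch below illustrates the idea with hypothetical type shapes — the real field names come from ollamaProvider.ts and the Ollama client library and may differ.

```typescript
// Assumed shape of an entry after attachModelInfo() has enriched it.
interface OllamaListedModel {
  name: string
  model_info?: { context_length?: number }
  capabilities?: string[]
}

// Shared defaults per this PR (values inferred from the test expectations).
const DEFAULT_MODEL_CONTEXT_LENGTH = 16000
const DEFAULT_MODEL_MAX_TOKENS = 4096

// Preserve the provider-reported context and capability info instead of
// emitting generic defaults for every locally discovered model.
function toModelMeta(model: OllamaListedModel) {
  return {
    id: model.name,
    name: model.name,
    contextLength: model.model_info?.context_length ?? DEFAULT_MODEL_CONTEXT_LENGTH,
    maxTokens: DEFAULT_MODEL_MAX_TOKENS,
    vision: model.capabilities?.includes('vision') ?? false,
    functionCall: model.capabilities?.includes('tools') ?? false
  }
}
```

The capability-string names ('vision', 'tools') are assumptions for illustration; the point is that the mapping reads from model.model_info and model.capabilities rather than discarding them.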
src/renderer/src/stores/modelStore.ts (2)

195-219: ⚠️ Potential issue | 🟠 Major

Preserve user-defined enableSearch when rehydrating renderer models.

This merge path re-applies vision and functionCall, but it never copies config.enableSearch. After saving a user-defined search setting, refreshProviderModels() will rebuild allProviderModels / enabledModels with the provider default instead of the saved one.

🩹 Proposed fix
         return {
           ...normalized,
           contextLength: resolveModelContextLength(
             config.contextLength ?? normalized.contextLength
           ),
           maxTokens: resolvedMaxTokens,
           vision: resolveModelVision(config.vision ?? normalized.vision),
           functionCall: resolveModelFunctionCall(config.functionCall ?? normalized.functionCall),
           reasoning: config.reasoning ?? normalized.reasoning ?? false,
+          enableSearch: config.enableSearch ?? normalized.enableSearch ?? false,
           type: config.type ?? normalized.type ?? ModelType.Chat
         }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/renderer/src/stores/modelStore.ts` around lines 195 - 219, When
rehydrating a model from modelConfigStore.getModelConfig you need to preserve a
user-defined enableSearch value; update the returned object built from
normalized so it includes enableSearch: config.enableSearch ??
normalized.enableSearch (so
refreshProviderModels()/allProviderModels/enabledModels use the saved setting),
similar to how vision, functionCall, reasoning and type are resolved via
resolveModelVision/resolveModelFunctionCall and the config fallbacks.

336-352: ⚠️ Potential issue | 🟠 Major

Normalize the llmP.getModelList() fallback through the same resolver path.

Lines 347-352 switch the main merge flow to the shared resolvers, but the later llmP.getModelList() branch still hardcodes 4096/2048 and functionCall: false. Any provider that falls back to that branch bypasses the new default-model system and ends up with different capabilities than the rest of the store.

Also applies to: 395-413

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/renderer/src/stores/modelStore.ts` around lines 336 - 352, The fallback
branch that builds a RENDERER_MODEL_META for llmP.getModelList() currently
hardcodes context/maxTokens/functionCall (4096/2048/false) and thus bypasses the
shared resolvers; update that fallback construction to call the same resolver
helpers used in normalizeStoredModel — e.g., use resolveModelContextLength(...),
resolveModelMaxTokens(...), resolveModelVision(...), and
resolveModelFunctionCall(...) when deriving contextLength, maxTokens, vision and
functionCall for models returned from llmP.getModelList(), and do the same fix
in the other similar block (the second fallback that mirrors
normalizeStoredModel) so all providers use the unified resolver path rather than
hardcoded defaults.
🧹 Nitpick comments (1)
src/shared/modelConfigDefaults.ts (1)

15-25: Consider adding resolveModelReasoning for completeness.

The module defines DEFAULT_MODEL_REASONING constant but doesn't export a corresponding resolver function like the other capabilities. For consistency and potential future use, consider adding:

export const resolveModelReasoning = (value: boolean | undefined | null): boolean =>
  value ?? DEFAULT_MODEL_REASONING

This isn't blocking since providers currently use Boolean(model.reasoning?.supported) directly, but adding it would complete the resolver pattern.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/shared/modelConfigDefaults.ts` around lines 15 - 25, Add a resolver for
the reasoning capability to mirror the existing pattern: implement and export
resolveModelReasoning that accepts (value: boolean | undefined | null) and
returns value ?? DEFAULT_MODEL_REASONING, similar to resolveModelVision and
resolveModelFunctionCall; ensure the function is exported alongside the other
resolvers and references the DEFAULT_MODEL_REASONING constant.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 89ec64e8-f464-4b4f-b032-ce4a25c862b5

📥 Commits

Reviewing files that changed from the base of the PR and between 9586719 and 1d37d27.

📒 Files selected for processing (24)
  • src/main/presenter/configPresenter/index.ts
  • src/main/presenter/configPresenter/modelConfig.ts
  • src/main/presenter/llmProviderPresenter/providers/dashscopeProvider.ts
  • src/main/presenter/llmProviderPresenter/providers/doubaoProvider.ts
  • src/main/presenter/llmProviderPresenter/providers/githubProvider.ts
  • src/main/presenter/llmProviderPresenter/providers/minimaxProvider.ts
  • src/main/presenter/llmProviderPresenter/providers/o3fanProvider.ts
  • src/main/presenter/llmProviderPresenter/providers/ollamaProvider.ts
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
  • src/main/presenter/llmProviderPresenter/providers/openAIResponsesProvider.ts
  • src/main/presenter/llmProviderPresenter/providers/siliconcloudProvider.ts
  • src/main/presenter/llmProviderPresenter/providers/togetherProvider.ts
  • src/main/presenter/llmProviderPresenter/providers/voiceAIProvider.ts
  • src/main/presenter/llmProviderPresenter/providers/zhipuProvider.ts
  • src/renderer/src/components/settings/ModelConfigDialog.vue
  • src/renderer/src/stores/modelStore.ts
  • src/renderer/src/stores/ollamaStore.ts
  • src/shared/modelConfigDefaults.ts
  • src/shared/types/presenters/legacy.presenters.d.ts
  • test/main/presenter/llmProviderPresenter/openAIResponsesProvider.test.ts
  • test/main/presenter/modelConfig.test.ts
  • test/main/presenter/providerDbModelConfig.test.ts
  • test/renderer/components/ChatStatusBar.test.ts
  • test/renderer/stores/modelStore.test.ts

Comment on lines +171 to +174
// Search-related parameters
enableSearch?: boolean
forcedSearch?: boolean
searchStrategy?: 'turbo' | 'balanced' | 'precise'

⚠️ Potential issue | 🟠 Major

Unify the searchStrategy union across the presenter types.

ModelConfig now allows 'balanced' | 'precise', but the same file still exposes 'turbo' | 'max' in IConfigPresenter.getSearchDefaults() (Line 501) and DefaultModelSetting.searchStrategy (Line 1788). That makes the new values impossible to represent consistently through the defaults/config APIs and risks lossy round-trips.

Proposed fix
+export type SearchStrategy = 'turbo' | 'balanced' | 'precise'
+
 export interface ModelConfig {
   maxTokens: number
   contextLength: number
@@
   // Search-related parameters
   enableSearch?: boolean
   forcedSearch?: boolean
-  searchStrategy?: 'turbo' | 'balanced' | 'precise'
+  searchStrategy?: SearchStrategy
 }
@@
   getSearchDefaults?(
     providerId: string,
     modelId: string
-  ): { default?: boolean; forced?: boolean; strategy?: 'turbo' | 'max' }
+  ): { default?: boolean; forced?: boolean; strategy?: SearchStrategy }
@@
 export interface DefaultModelSetting {
@@
   enableSearch?: boolean
   forcedSearch?: boolean
-  searchStrategy?: 'turbo' | 'max'
+  searchStrategy?: SearchStrategy

If the storage layer still only supports legacy values, it should be updated in lockstep or explicitly mapped at the boundary instead of widening only ModelConfig.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/shared/types/presenters/legacy.presenters.d.ts` around lines 171 - 174,
The searchStrategy union is inconsistent between ModelConfig and
presenter/defaults; update IConfigPresenter.getSearchDefaults(),
DefaultModelSetting.searchStrategy and any other presenter/type declarations
(reference symbols: searchStrategy, IConfigPresenter.getSearchDefaults,
DefaultModelSetting, ModelConfig) to use a single unified union ('balanced' |
'precise') or, if the storage layer still requires legacy values, implement an
explicit mapping at the presenter boundary that converts between legacy
('turbo'|'max') and new ('balanced'|'precise') values so defaults, config APIs
and storage remain consistent and round-trip lossless.
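If the storage layer does keep the legacy union, the boundary mapping suggested above could be sketched as follows. The 'max' ↔ 'precise' equivalence is a guess for illustration, not something the PR confirms.

```typescript
// Assumed unions; the real declarations live in legacy.presenters.d.ts.
type LegacySearchStrategy = 'turbo' | 'max'
type SearchStrategy = 'turbo' | 'balanced' | 'precise'

// Widening direction: every stored legacy value maps to a new value.
const fromLegacy = (s: LegacySearchStrategy): SearchStrategy =>
  s === 'max' ? 'precise' : 'turbo'

// Narrowing direction: 'balanced' has no legacy equivalent, so this is lossy.
// A lossless round-trip would require widening the stored union instead.
const toLegacy = (s: SearchStrategy): LegacySearchStrategy =>
  s === 'turbo' ? 'turbo' : 'max'
```

The lossy narrowing is exactly why the review recommends unifying the union rather than mapping, unless storage compatibility forces the mapping.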

@zerob13 zerob13 merged commit 366f3d4 into dev Mar 6, 2026
2 checks passed