
chore: Add Ollama model name mappings for Granite 4.1 to intrinsics adapter resolution#1085

Open
kndtran wants to merge 1 commit into generative-computing:main from kndtran:add-ollama-granite41-mappings

Conversation


@kndtran kndtran commented May 15, 2026

cc: @frreiss


Misc PR

Type of PR

  • Bug Fix
  • New Feature
  • Documentation
  • Other

Description

  • Link to Issue: Fixes #1084 (Add Ollama model name mappings for Granite 4.1 to intrinsics adapter resolution)

  • Add granite4.1:3b, granite4.1:8b, and granite4.1:30b to BASE_MODEL_TO_CANONICAL_NAME in mellea/formatters/granite/intrinsics/constants.py

  • These map Ollama model identifiers to their corresponding GGUF adapter directory names (e.g. granite4.1:3b → granite4.1_3b)

  • Follows the existing pattern for granite4:micro → granite4_micro
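The additions can be sketched as follows. The exact contents of `BASE_MODEL_TO_CANONICAL_NAME` in `mellea/formatters/granite/intrinsics/constants.py` are inferred from the PR description, not verified against the repository:

```python
# Sketch of the mapping table after this PR (entries assumed from the
# PR description; the real constant may contain additional models).
BASE_MODEL_TO_CANONICAL_NAME = {
    # Existing pattern already in the file:
    "granite4:micro": "granite4_micro",
    # New Granite 4.1 entries added by this PR — each Ollama model
    # identifier maps to its GGUF adapter directory name:
    "granite4.1:3b": "granite4.1_3b",
    "granite4.1:8b": "granite4.1_8b",
    "granite4.1:30b": "granite4.1_30b",
}
```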

Context

download_intrinsic_adapter() uses BASE_MODEL_TO_CANONICAL_NAME to normalize base model names before resolving LoRA/aLoRA adapter paths in the Granite Intrinsics Library.

When serving intrinsics adapters via Ollama, the base model name is the Ollama identifier (e.g. granite4.1:3b), not the HF model ID (ibm-granite/granite-4.1-3b). Without the mapping, the resolver searches for the literal path answerability/granite4.1:3b/lora/io.yaml instead of answerability/granite4.1_3b/lora/io.yaml, resulting in:

ValueError: Intrinsic 'answerability' as LoRA adapter on base model 'granite4.1:3b' not found...
Searched for path answerability/granite4.1:3b/lora/io.yaml
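The failure mode can be illustrated with a minimal sketch of the path lookup. This is a hypothetical simplification of what `download_intrinsic_adapter()` does, not the actual mellea implementation:

```python
# Hypothetical sketch of the adapter-path resolution step: the base
# model name is normalized through the mapping before the LoRA path
# is constructed. Function name and signature are illustrative only.
def resolve_adapter_path(intrinsic: str, base_model: str,
                         mapping: dict[str, str]) -> str:
    # Fall back to the literal name when no mapping entry exists —
    # this is what produced the ValueError for Granite 4.1 models.
    canonical = mapping.get(base_model, base_model)
    return f"{intrinsic}/{canonical}/lora/io.yaml"

mapping = {"granite4.1:3b": "granite4.1_3b"}

# With the mapping, the Ollama identifier resolves to the GGUF directory:
resolve_adapter_path("answerability", "granite4.1:3b", mapping)
# Without it, the literal Ollama name (colon and all) leaks into the path:
resolve_adapter_path("answerability", "granite4.1:3b", {})
```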

Testing

  • Tests added to the respective file if code was changed
  • New code has 100% coverage if code was added
  • Ensure existing tests and GitHub automation pass (a maintainer will kick off the GitHub automation when the rest of the PR is populated)

Attribution

  • AI coding assistants used

…resolution

The intrinsics adapter resolver uses BASE_MODEL_TO_CANONICAL_NAME to
normalize base model names before looking up LoRA/aLoRA adapter paths.
When serving via Ollama, the base model name is the Ollama identifier
(e.g. granite4.1:3b) which needs to map to the GGUF adapter directory
(e.g. granite4.1_3b). The mapping for granite4:micro already exists but
the Granite 4.1 equivalents were missing, causing ValueError on adapter
resolution.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@kndtran kndtran requested a review from a team as a code owner May 15, 2026 20:44
@github-actions
Contributor

A PR checklist has been appended to your description. Please complete it — your original text above the --- divider has been preserved.

Next time, you can pick a template directly from the PR description box to skip this step.

@psschwei psschwei changed the title Add Ollama model name mappings for Granite 4.1 to intrinsics adapter resolution chore: Add Ollama model name mappings for Granite 4.1 to intrinsics adapter resolution May 15, 2026
