Remove old diffusers distribution #713

Merged

ilopezluna merged 2 commits into main from remove-old-diffusers-distribution on Feb 25, 2026

Conversation

@ilopezluna
Contributor

Diffusers is now installed on demand (once the model has been pulled first):

curl -s -X POST http://localhost:13434/engines/diffusers/v1/images/generations \
  -H "Content-Type: application/json" \
  -d '{
    "model": "ai/stable-diffusion",
    "prompt": "A cat sitting on a couch",
    "size": "512x512"
  }' | jq -r '.data[0].b64_json' | base64 -d > cat.png

Or explicitly:

MODEL_RUNNER_HOST=http://localhost:13434 docker model install-runner --backend diffusers

@ilopezluna ilopezluna requested a review from a team February 25, 2026 17:07
@gemini-code-assist
Contributor

Summary of Changes

Hello @ilopezluna, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request refactors the installation process for the diffusers backend, moving from a pre-distributed setup to an on-demand approach. This change streamlines the initial environment setup by deferring the installation of diffusers until it is actually needed, either through an explicit command or when a diffusers model is first requested. The new process leverages the existing model runner API for installation, aligning it with how other backends like vllm-metal are handled.

Highlights

  • Diffusers Backend Installation: Implemented a new on-demand installation mechanism for the diffusers backend, triggered via the model runner's API, ensuring it's only installed when explicitly requested or a model is pulled.


Changelog
  • cmd/cli/commands/install-runner.go
    • Added logic to handle on-demand installation of the diffusers backend, including checks for standalone runner availability and API calls to trigger installation.
Ignored Files
  • Ignored by pattern: .github/workflows/** (1)
    • .github/workflows/release.yml

Contributor

@sourcery-ai sourcery-ai bot left a comment


Hey - I've left some high level feedback:

  • In the diffusers-specific branch, you're unconditionally calling desktopClient.InstallBackend(diffusers.Name) even after handling Moby/Cloud via ensureStandaloneRunnerAvailable. Double-check whether desktopClient is the correct client abstraction for non-desktop engine kinds, or whether this should route through the same runner client used elsewhere (as vllm-metal is handled) to avoid environment-specific breakage.


Contributor

@gemini-code-assist gemini-code-assist bot left a comment

The reason will be displayed to describe this comment to others. Learn more.

Code Review

This pull request updates the install-runner command to implement deferred installation for the diffusers backend. This change aligns the diffusers installation process with the existing vllm-metal backend, where the backend is installed on demand via the running model runner's API. The new logic ensures that for standalone contexts (Moby/Cloud), a base runner is available before attempting to install the diffusers backend.

engineKind := modelRunner.EngineKind()
if engineKind == types.ModelRunnerEngineKindMoby || engineKind == types.ModelRunnerEngineKindCloud {
	if _, err := ensureStandaloneRunnerAvailable(cmd.Context(), asPrinter(cmd), debug); err != nil {
		return fmt.Errorf("unable to initialize standalone model runner: %w", err)

The reason will be displayed to describe this comment to others. Learn more.

Severity: medium

The error message unable to initialize standalone model runner is a bit generic. To provide more specific context for debugging, consider including the backend name in the error message, for example, unable to initialize standalone model runner for diffusers backend.

Suggested change:
- return fmt.Errorf("unable to initialize standalone model runner: %w", err)
+ return fmt.Errorf("unable to initialize standalone model runner for diffusers backend: %w", err)

@ilopezluna ilopezluna merged commit 7270142 into main on Feb 25, 2026
14 checks passed
@ilopezluna ilopezluna deleted the remove-old-diffusers-distribution branch February 25, 2026 17:26