
feat: add MiniMax as third LLM provider in Chapter 10 #33

Open
octo-patch wants to merge 1 commit into VersusControl:main from octo-patch:feature/add-minimax-provider

Conversation

@octo-patch

Summary

Adds MiniMax as a third LLM provider alongside Google Gemini and GitHub Models in the AI Agent for DevOps tutorial (Chapter 10).

MiniMax exposes an OpenAI-compatible API, so it reuses ChatOpenAI from langchain-openai with a custom base_url — the same pattern already established for GitHub Models. This gives readers another provider option they can switch to with a single LLM_PROVIDER=minimax environment variable.
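A minimal sketch of that shared pattern, without importing langchain itself: both OpenAI-compatible providers differ only in endpoint and API-key variable, so one helper can build the keyword arguments that would be handed to `ChatOpenAI`. The endpoint URLs and the GitHub Models entry below are assumptions for illustration, not values taken from the diff.

```python
import os

# Illustrative registry of OpenAI-compatible providers. Only the
# "minimax" key and the MINIMAX_* variable names come from the PR;
# the URLs are assumed placeholders.
OPENAI_COMPATIBLE_PROVIDERS = {
    "github": {
        "base_url": "https://models.inference.ai.azure.com",  # assumed
        "api_key_var": "GITHUB_TOKEN",
    },
    "minimax": {
        "base_url": os.getenv("MINIMAX_ENDPOINT", "https://api.minimax.io/v1"),  # assumed default
        "api_key_var": "MINIMAX_API_KEY",
    },
}

def chat_openai_kwargs(provider: str, model: str) -> dict:
    """Build the kwargs that would be passed to langchain's ChatOpenAI."""
    cfg = OPENAI_COMPATIBLE_PROVIDERS[provider]
    return {
        "model": model,
        "base_url": cfg["base_url"],
        "api_key": os.getenv(cfg["api_key_var"], ""),
    }
```

Because everything except `base_url` and the key variable is identical, switching providers really does reduce to one `LLM_PROVIDER` setting, as the PR claims.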

Changes (10 files, 543 additions)

  • New src/models/minimax.py — MiniMaxModel wrapper with temperature clamping (MiniMax requires > 0)
  • Updated src/models/factory.py — added minimax to provider dispatch
  • Updated src/models/__init__.py — exports MiniMaxModel
  • Updated src/config.py — added MINIMAX_API_KEY, MINIMAX_MODEL, MINIMAX_ENDPOINT config vars with validation
  • Updated .env.example — documented MiniMax env vars
  • Updated app.py — sidebar now shows MiniMax provider label
  • Updated README.md — added MiniMax to provider table and project structure
  • Updated 10-building-complex-agent-with-actions.md — added MiniMax code example, config, and validation in tutorial text
  • New tests/test_minimax.py — 21 unit tests (model init, temperature clamping, factory, config validation, exports)
  • New tests/test_minimax_integration.py — 3 integration tests (chat completion, tool binding, factory)

Models supported

| Model | Context | Notes |
| --- | --- | --- |
| MiniMax-M2.7 (default) | 1M tokens | Latest flagship model |
| MiniMax-M2.7-highspeed | 1M tokens | Faster variant |
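A sketch of what the documented `.env.example` entries might look like. Only the three variable names and the `LLM_PROVIDER=minimax` switch come from the change list; the placeholder values and endpoint URL are assumptions.

```shell
# MiniMax provider (select with LLM_PROVIDER=minimax)
MINIMAX_API_KEY=your-minimax-api-key
MINIMAX_MODEL=MiniMax-M2.7
MINIMAX_ENDPOINT=https://api.minimax.io/v1  # assumed endpoint
```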

Test plan

  • All 21 unit tests pass (python -m pytest tests/test_minimax.py)
  • All 3 integration tests pass with live API key
  • Verify LLM_PROVIDER=minimax works with streamlit run app.py
  • Verify existing Gemini/GitHub providers still work unchanged
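The last two checks exercise the factory dispatch. A minimal sketch of how the factory might register the third provider, with the existing classes reduced to stubs (only `MiniMaxModel` and the `minimax` key are named in the diff; the rest is illustrative):

```python
# Stub wrapper classes standing in for the real model wrappers.
class GeminiModel: ...
class GitHubModel: ...
class MiniMaxModel: ...

# Dispatch table keyed by the LLM_PROVIDER value.
_PROVIDERS = {
    "gemini": GeminiModel,
    "github": GitHubModel,
    "minimax": MiniMaxModel,  # new in this PR
}

def create_model(provider: str):
    """Instantiate the wrapper registered for the given provider name."""
    try:
        return _PROVIDERS[provider]()
    except KeyError:
        raise ValueError(f"Unknown LLM_PROVIDER: {provider!r}") from None
```

A table-driven factory like this keeps the existing Gemini/GitHub paths untouched, which is why the final check above is expected to pass unchanged.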

Add MiniMax (https://www.minimax.io) as a third LLM provider alongside
Google Gemini and GitHub Models in the AI Agent tutorial (Chapter 10).

MiniMax exposes an OpenAI-compatible API, so it reuses ChatOpenAI from
langchain-openai with a custom base_url -- the same pattern as GitHub
Models. Temperature is clamped to 0.01 minimum (MiniMax requires > 0).

Changes:
- New MiniMaxModel wrapper (src/models/minimax.py)
- Updated model factory to support LLM_PROVIDER=minimax
- Added MINIMAX_API_KEY/MODEL/ENDPOINT config vars with validation
- Updated .env.example, README, sidebar UI, and chapter tutorial text
- 21 unit tests + 3 integration tests

Co-Authored-By: Octopus <liyuan851277048@icloud.com>