
feat: add MiniMax provider support#375

Open
ximiximi423 wants to merge 1 commit into NevaMind-AI:main from ximiximi423:feature/add-minimax-provider

Conversation

@ximiximi423

Summary

  • Add MiniMax as a new LLM provider with OpenAI-compatible API
  • Support two models: MiniMax-M2.5 (default) and MiniMax-M2.5-highspeed
  • Both models support a 204,800-token context window

Supported Models

| Model ID | Type | Description |
| --- | --- | --- |
| MiniMax-M2.5 | Default | Peak Performance. Ultimate Value. Master the Complex |
| MiniMax-M2.5-highspeed | Optional | Same performance, faster and more agile |

Changes

  • Add MiniMaxLLMBackend class (inherits OpenAILLMBackend for OpenAI-compatible API)
  • Register MiniMax in both LLM and embedding backend registries in http_client.py
  • Add MINIMAX_API_KEY environment variable support with provider defaults in settings.py
  • Add __init__.py exports for the new backend
  • Add unit tests (tests/llm/test_minimax_provider.py)
  • Add integration test (tests/test_minimax.py)
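The backend subclass and registry entry described above can be sketched roughly as follows. This is a minimal, self-contained illustration, not the actual memu code: the base class here is a stub stand-in for memu's `OpenAILLMBackend`, and only the provider name, default base URL, and default model come from this PR's description.

```python
# Hypothetical sketch of the registration pattern described in this PR.
# The real OpenAILLMBackend lives in memu; a stub stands in for it here.

class OpenAILLMBackend:
    """Stub stand-in for memu's OpenAI-compatible backend."""

    def __init__(self, base_url: str, api_key: str, chat_model: str):
        self.base_url = base_url
        self.api_key = api_key
        self.chat_model = chat_model


class MiniMaxLLMBackend(OpenAILLMBackend):
    """MiniMax speaks the OpenAI chat API, so only the defaults differ."""

    DEFAULT_BASE_URL = "https://api.minimax.io/v1"
    DEFAULT_CHAT_MODEL = "MiniMax-M2.5"

    def __init__(self, base_url=None, api_key=None, chat_model=None):
        super().__init__(
            base_url=base_url or self.DEFAULT_BASE_URL,
            api_key=api_key or "",
            chat_model=chat_model or self.DEFAULT_CHAT_MODEL,
        )


# Provider-name lookup, in the spirit of the LLM_BACKENDS registry
# that the PR adds to http_client.py.
LLM_BACKENDS = {"minimax": MiniMaxLLMBackend}
```

With this shape, `LLM_BACKENDS["minimax"]()` yields a backend preloaded with the MiniMax defaults, and callers can still override any of them.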

Configuration Example

import os

from memu.app import MemoryService

service = MemoryService(
    llm_profiles={
        "default": {
            "provider": "minimax",
            "base_url": "https://api.minimax.io/v1",
            "api_key": os.getenv("MINIMAX_API_KEY"),
            "chat_model": "MiniMax-M2.5",
        },
    },
)
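Since the example reads the key from the environment, the variable must be exported before the service starts. The value below is a placeholder, not a real key format:

```shell
# Placeholder value; real keys come from the MiniMax platform console.
export MINIMAX_API_KEY="your-minimax-api-key"
echo "MINIMAX_API_KEY is set: ${MINIMAX_API_KEY:+yes}"
```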

API Documentation

  • https://platform.minimax.io/docs/api-reference/text-openai-api
  • https://platform.minimax.io/docs/api-reference/text-anthropic-api

Test Plan

  • Unit tests for backend, settings defaults, payload building, response parsing
  • Integration test following the same pattern as OpenRouter tests
  • Verified MiniMax backend registration in LLM_BACKENDS and embedding backends
  • Verified HTTPLLMClient works with minimax provider
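A payload-building unit test along the lines above might look like this. Everything here is a hedged sketch: the request body follows the standard OpenAI chat-completions format (which MiniMax's OpenAI-compatible endpoint accepts), and the helper `build_chat_payload` is an illustrative name, not memu's real function.

```python
# Hypothetical payload-building test; build_chat_payload is an
# illustrative helper, not memu's actual API.

def build_chat_payload(model: str, messages: list) -> dict:
    # Standard OpenAI-compatible chat-completions request body.
    return {"model": model, "messages": messages}


def test_minimax_payload_uses_default_model():
    payload = build_chat_payload(
        "MiniMax-M2.5", [{"role": "user", "content": "hello"}]
    )
    assert payload["model"] == "MiniMax-M2.5"
    assert payload["messages"][0]["role"] == "user"


test_minimax_payload_uses_default_model()
```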

Add MiniMax as a new LLM provider with OpenAI-compatible API.

Supported models:
- MiniMax-M2.5 (default) - Peak Performance, Ultimate Value
- MiniMax-M2.5-highspeed - Same performance, faster and more agile

Changes:
- Add MiniMaxLLMBackend (inherits OpenAILLMBackend)
- Register MiniMax in LLM and embedding backend registries
- Add MINIMAX_API_KEY environment variable support with provider defaults
- Add unit tests and integration test for MiniMax provider

API Documentation:
- https://platform.minimax.io/docs/api-reference/text-openai-api
- https://platform.minimax.io/docs/api-reference/text-anthropic-api
