feat: Add Apertis LLM provider support #668
Add 12 model TOML files:

- claude-opus-4-5-20251101 (Opus 4.5)
- claude-opus-4-5-20251101-thinking (Opus 4.5 Thinking)
- claude-sonnet-4.5 (Sonnet 4.5)
- claude-sonnet-4.5-20250929-thinking (Sonnet 4.5 Thinking)
- gpt-5.2-pro (GPT-5.2 Pro)
- gpt-5.2 (GPT-5.2)
- gpt-5.2-codex (GPT-5.2 Codex)
- gemini-3-pro-preview (Gemini 3 Pro Preview)
- grok-code-fast-1 (Grok Code Fast 1)
- glm-4.7 (GLM 4.7)
- glm-4.7-thinking (GLM 4.7 Thinking)
- minimax-m2.1 (MiniMax M2.1)
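For reference, a single file from the list above might look like the following sketch. The `tool_call`, `open_weights`, `knowledge`, and `[interleaved]` fields are quoted from elsewhere in this PR; the `name` field and file path are assumptions based on common models.dev conventions, not confirmed by this thread:

```toml
# providers/apertis/models/claude-opus-4-5-20251101-thinking.toml (path assumed)
name = "Opus 4.5 Thinking"   # display-name field, assumed

tool_call = true
open_weights = false
knowledge = "2025-06"

# Present only on thinking variants (see the review discussion below)
[interleaved]
field = "reasoning_content"
```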
Can you make this consistent with the others?
Done — replaced the embedded PNG with a proper 24x24 SVG using currentColor, matching the standard format used by other providers (e.g. Anthropic, OpenRouter). Also switched npm to our own SDK @apertis/ai-sdk-provider and expanded the model catalog to 36 representative models covering Claude, GPT, Gemini, DeepSeek, Grok, GLM, Qwen, Llama, Mistral, and MiniMax families. Will push the updated branch shortly.
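An icon in the format described above would be a minimal SVG along these lines; the actual path data is a placeholder, not the real Apertis mark:

```xml
<!-- 24x24 viewBox, filled with currentColor so it inherits the page's text color -->
<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="currentColor">
  <path d="M12 2 2 22h20L12 2z"/><!-- placeholder path -->
</svg>
```

Using `currentColor` instead of a hard-coded fill lets the same asset render correctly in both light and dark themes, which is why the raster PNG was replaced.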
```toml
tool_call = true
open_weights = false
knowledge = "2025-06"
```
Can you document the interleaved reasoning settings for the models that use interleaved reasoning?
Done — all thinking models now include the [interleaved] section with field = "reasoning_content". This applies to: Claude Opus 4.6/4.5/4 thinking variants, Claude Sonnet 4.5 thinking, Claude Haiku 4.5 thinking, GPT-5.1 thinking, Grok 4.1 thinking, DeepSeek R1, DeepSeek V3.2 thinking, GLM-4.7/GLM-4.7 thinking, MiniMax M2.1, and Qwen3 Max thinking.
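Concretely, each of those model files gains the section described above:

```toml
[interleaved]
field = "reasoning_content"
```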
Superseded by #1303 — clean rewrite with dedicated …
Fixes #667
Summary
Adds Apertis AI as a new LLM provider to models.dev.
Changes
- `providers/apertis/provider.toml` with provider configuration

Provider Details
- npm package: `@ai-sdk/openai-compatible`
- Base URL: https://api.apertis.ai/v1
- API key environment variable: `APERTIS_API_KEY`

Notes
This PR adds the initial provider configuration. Model TOML files will be added in a follow-up commit.
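Based on the provider details above, `provider.toml` would be roughly the following. The `name`, `npm`, `api`, and `env` field names are assumptions drawn from typical models.dev provider files; the values come from this PR:

```toml
# providers/apertis/provider.toml (field names assumed from models.dev conventions)
name = "Apertis"
npm = "@ai-sdk/openai-compatible"
api = "https://api.apertis.ai/v1"
env = ["APERTIS_API_KEY"]
```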