
Gemini 2.5 Pro or DeepSeek V3 0324 not showing in /models / #3641

Open
MatthewZMD opened this issue Mar 25, 2025 · 10 comments

@MatthewZMD (Contributor) commented Mar 25, 2025

I have been using /models / to get a list of available models, and built Aidermacs to select from that list. I'm very happy that Gemini 2.5 Pro and the latest DeepSeek are supported in Aider now, but I don't see them in the list. Am I understanding/using /models / correctly? Does anyone know why?

On Latest Aider 0.79.
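For context, a front end like Aidermacs presumably builds its selection list by parsing the plain-text /models output. A minimal sketch, assuming the "- <model-name>" bullet format shown in the transcripts in this thread; `parse_models_output` is a hypothetical helper, not part of aider or Aidermacs:

```python
def parse_models_output(output: str) -> list[str]:
    """Collect every '- <model-name>' bullet line from /models output."""
    models = []
    for line in output.splitlines():
        stripped = line.strip()
        if stripped.startswith("- "):
            models.append(stripped[2:].strip())
    return models


# Sample taken from the /models output format quoted in this thread.
sample = '''Models which match "gemini-2.5":
- gemini/gemini-2.5-pro-exp-03-25
- openrouter/google/gemini-2.5-pro-exp-03-25:free
'''
print(parse_models_output(sample))
```

If a model is missing from this parsed list, the symptom described here follows: the model may still be usable by name, but a list-driven front end can't offer it.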

@necaris commented Mar 25, 2025

In the same boat -- confused because this commit seems to have added support for it but I'm also not seeing it in the /models output.

@MatthewZMD (Contributor, Author) commented Mar 25, 2025

The models available on OpenRouter seem quite limited too:

──────────────────
> /models openrouter
Models which match "openrouter":
- openrouter/anthropic/claude-2
- openrouter/anthropic/claude-3-5-haiku
- openrouter/anthropic/claude-3-5-haiku-20241022
- openrouter/anthropic/claude-3-haiku
- openrouter/anthropic/claude-3-haiku-20240307
- openrouter/anthropic/claude-3-opus
- openrouter/anthropic/claude-3-sonnet
- openrouter/anthropic/claude-3.5-sonnet
- openrouter/anthropic/claude-3.5-sonnet:beta
- openrouter/anthropic/claude-3.7-sonnet
- openrouter/anthropic/claude-3.7-sonnet:beta
- openrouter/anthropic/claude-instant-v1
- openrouter/cognitivecomputations/dolphin-mixtral-8x7b
- openrouter/cohere/command-r-plus
- openrouter/databricks/dbrx-instruct
- openrouter/deepseek/deepseek-chat
- openrouter/deepseek/deepseek-coder
- openrouter/deepseek/deepseek-r1
- openrouter/fireworks/firellava-13b
- openrouter/google/gemini-2.0-flash-001
- openrouter/google/gemini-pro-1.5
- openrouter/google/gemini-pro-vision
- openrouter/google/palm-2-chat-bison
- openrouter/google/palm-2-codechat-bison
- openrouter/gryphe/mythomax-l2-13b
- openrouter/jondurbin/airoboros-l2-70b-2.1
- openrouter/mancer/weaver
- openrouter/meta-llama/codellama-34b-instruct
- openrouter/meta-llama/llama-2-13b-chat
- openrouter/meta-llama/llama-2-70b-chat
- openrouter/meta-llama/llama-3-70b-instruct
- openrouter/meta-llama/llama-3-70b-instruct:nitro
- openrouter/meta-llama/llama-3-8b-instruct:extended
- openrouter/meta-llama/llama-3-8b-instruct:free
- openrouter/microsoft/wizardlm-2-8x22b:nitro
- openrouter/mistralai/mistral-7b-instruct
- openrouter/mistralai/mistral-7b-instruct:free
- openrouter/mistralai/mistral-large
- openrouter/mistralai/mixtral-8x22b-instruct
- openrouter/nousresearch/nous-hermes-llama2-13b
- openrouter/openai/gpt-3.5-turbo
- openrouter/openai/gpt-3.5-turbo-16k
- openrouter/openai/gpt-4
- openrouter/openai/gpt-4-vision-preview
- openrouter/openai/gpt-4o
- openrouter/openai/gpt-4o-2024-05-13
- openrouter/openai/o1
- openrouter/openai/o1-mini
- openrouter/openai/o1-mini-2024-09-12
- openrouter/openai/o1-preview
- openrouter/openai/o1-preview-2024-09-12
- openrouter/pygmalionai/mythalion-13b
- openrouter/qwen/qwen-2.5-coder-32b-instruct
- openrouter/undi95/remm-slerp-l2-13b
──────────────────────────────

@paul-gauthier (Collaborator) commented

Thanks for trying aider and filing this issue.

You can run it with:

aider --model gemini-2.5-pro

I will make sure it shows in the model list.

@aakkino commented Mar 27, 2025

DeepSeek V3 0324 still can't be used. Here are my YAML config and terminal output:

[screenshot: YAML config and terminal output]

@paul-gauthier (Collaborator) commented

The latest version of aider, v0.79.1, should show all the models it knows about with /models.

There is no new model name for DeepSeek V3 0324, it's just being served via the same deepseek/deepseek-chat name as always.

Here is /models gemini-2.5 working as expected:

> /models gemini-2.5

Models which match "gemini-2.5":
- gemini/gemini-2.5-pro-exp-03-25
- openrouter/google/gemini-2.5-pro-exp-03-25:free
- vertex_ai-language-models/vertex_ai/gemini-2.5-pro-exp-03-25
- vertex_ai/gemini-2.5-pro-exp-03-25
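For anyone configuring this via YAML, a minimal sketch, assuming aider's standard `model:` config key and a model name taken from the /models listings quoted in this thread:

```yaml
# .aider.conf.yml -- minimal sketch; "model" is aider's standard config key.
# The model name below is copied from the /models output shown earlier.
model: openrouter/deepseek/deepseek-chat
```

OpenRouter models also require OPENROUTER_API_KEY to be set in the environment.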

@MatthewZMD (Contributor, Author) commented

Seems like OpenRouter draws a distinction between the DeepSeek models:

@larock22 commented

> Seems like OpenRouter draws a distinction between the DeepSeek models:

Yeah, I think that is the issue. I get this from OpenRouter:

[screenshot from OpenRouter]

@paul-gauthier (Collaborator) commented

I've added the new OpenRouter endpoint.

The change is available in the main branch. You can get it by installing the latest version from github:

aider --install-main-branch

# or...

python -m pip install --upgrade --upgrade-strategy only-if-needed git+https://github.com/Aider-AI/aider.git

If you have a chance to try it, let me know if it works better for you.

@larock22 commented

Thank you, Paul! I really appreciate this.

@freckletonj commented Mar 28, 2025

As of this moment, it's still not working:

Setting the model in .env makes it appear to work:

$ aider
─────────
Aider v0.79.3.dev6+gfb23b6c2
Main model: gemini/gemini-2.5-pro-exp-03-25 with diff-fenced edit format
Weak model: gemini/gemini-2.0-flash
Git repo: .git with 38 files
Repo-map: using 4096 tokens, auto refresh

But then the rate limit error shows it's actually trying to use "model": "gemini-2.0-pro-exp"

litellm.RateLimitError: litellm.RateLimitError: VertexAIException -
{
  "error": {
    "code": 429,
    "message": "You exceeded your current quota, please check your plan and billing details. For more information on this error, head to: https://ai.google.dev/gemini-api/docs/rate-limits.",
    "status": "RESOURCE_EXHAUSTED",
    "details": [
      {
        "@type": "type.googleapis.com/google.rpc.QuotaFailure",
        "violations": [
          {
            "quotaMetric": "generativelanguage.googleapis.com/generate_content_free_tier_requests",
            "quotaId": "GenerateRequestsPerDayPerProjectPerModel-FreeTier",
            "quotaDimensions": {
              "model": "gemini-2.0-pro-exp",
              "location": "global"
            },
            "quotaValue": "50"
          }
        ]
      },
      {
        "@type": "type.googleapis.com/google.rpc.Help",
        "links": [
          {
            "description": "Learn more about Gemini API quotas",
            "url": "https://ai.google.dev/gemini-api/docs/rate-limits"
          }
        ]
      },
      {
        "@type": "type.googleapis.com/google.rpc.RetryInfo",
        "retryDelay": "4s"
      }
    ]
  }
}
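The model actually being billed can be read out of that error payload rather than eyeballed. A minimal sketch, assuming the field names in the 429 response quoted above; the abbreviated `error_body` and the `quota_models` helper are illustrative, not part of aider or litellm:

```python
import json

# Abbreviated error body in the shape of the VertexAIException payload
# quoted above; field names match that 429 response.
error_body = """
{
  "error": {
    "code": 429,
    "status": "RESOURCE_EXHAUSTED",
    "details": [
      {
        "@type": "type.googleapis.com/google.rpc.QuotaFailure",
        "violations": [
          {
            "quotaMetric": "generativelanguage.googleapis.com/generate_content_free_tier_requests",
            "quotaDimensions": {"model": "gemini-2.0-pro-exp", "location": "global"},
            "quotaValue": "50"
          }
        ]
      },
      {
        "@type": "type.googleapis.com/google.rpc.RetryInfo",
        "retryDelay": "4s"
      }
    ]
  }
}
"""


def quota_models(body: str) -> list[str]:
    """Return the model names the quota failure actually refers to."""
    details = json.loads(body)["error"]["details"]
    models = []
    for detail in details:
        if detail.get("@type", "").endswith("QuotaFailure"):
            for violation in detail.get("violations", []):
                model = violation.get("quotaDimensions", {}).get("model")
                if model:
                    models.append(model)
    return models


print(quota_models(error_body))  # the model being billed, not the one aider reports
```

This makes the mismatch concrete: aider reports gemini-2.5-pro-exp-03-25, while the quota violation names gemini-2.0-pro-exp.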

aider --install-main-branch doesn't make it available in /model

But it does appear in /models:

> /models gemini-2.5                                                                                                                                                                         

Models which match "gemini-2.5":
- gemini/gemini-2.5-pro-exp-03-25
- openrouter/google/gemini-2.5-pro-exp-03-25:free
- vertex_ai-language-models/vertex_ai/gemini-2.5-pro-exp-03-25
- vertex_ai/gemini-2.5-pro-exp-03-25 

aider --model gemini-2.5-pro also doesn't work. All of these methods make aider report:

$ aider
─────────
Aider v0.79.3.dev6+gfb23b6c2
Main model: gemini/gemini-2.5-pro-exp-03-25 with diff-fenced edit format

But the rate-limit error always says it was actually using "model": "gemini-2.0-pro-exp".

I know it was working yesterday; my Google API console reported separate usage for both 2.0 and 2.5.

For the life of me I can't see yesterday's quota data in the cloud console, but today, when aider reports 2.5 usage, the cloud console shows:

[screenshot: cloud console quota usage]
