Add support for Gemma 3? #237
Comments
Okay, I see now that you already support it! Nice! The 1B model doesn't seem convertible through this mlx_vlm library, but right now it can't be converted with the mlx_lm library either.
I already made a PR to MLX-LM to support 1B.
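For reference, here is a minimal sketch of what the conversion could look like once that PR lands, assuming mlx_lm's standard `convert()` helper; the repo id and output path are illustrative:

```python
# Sketch: convert a Gemma 3 1B checkpoint to MLX format with quantization.
# Assumes mlx_lm's convert() API; repo id and output path are illustrative.
from mlx_lm import convert

convert(
    hf_path="google/gemma-3-1b-it",  # Hugging Face checkpoint to fetch
    mlx_path="gemma-3-1b-it-4bit",   # local output directory
    quantize=True,                   # quantize weights (4-bit by default)
)
```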
You're very quick! Great work! Would that PR also work for text-only use of larger models, like 27B?
Yes, it will :)
Thanks!
I just tried the converted model in --chat mode, but in response to a text-only query all I get as output is "<pad>".
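A quick way to reproduce this outside of chat mode, as a hedged sketch assuming mlx_vlm's `load()`/`generate()` helpers (the exact argument order may differ between mlx_vlm versions, and the model path is illustrative):

```python
# Sketch: text-only generation with mlx_vlm to reproduce the <pad> issue.
# Assumes mlx_vlm exposes load() and generate(); path is illustrative.
from mlx_vlm import load, generate

model, processor = load("gemma-3-12b-it-4bit")  # converted model directory
prompt = "What is the capital of France?"

output = generate(model, processor, prompt, verbose=True)
print(output)  # a broken conversion prints only "<pad>" here
```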
@Blaizzy The 12B model does not seem to work for me; it just outputs a lot of nonsense... But when you do add the compatibility, would it be possible to add fine-tuning as well?
More people are having the same issue as above: lmstudio-ai/lmstudio-bug-tracker#513
This should be fixed!
As far as I can tell, all Gemma 3 models are multimodal except maybe the 1B ones? I'm not sure, but their HF page says all of them are.
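One way to check, as a sketch assuming the checkpoints follow the usual Hugging Face convention of exposing a `vision_config` block in `config.json` for multimodal models (repo ids are illustrative):

```python
# Sketch: inspect config.json to see whether a checkpoint has a vision tower.
# Assumes the usual HF convention of a "vision_config" section; repo ids illustrative.
import json
from huggingface_hub import hf_hub_download

for repo_id in ["google/gemma-3-1b-it", "google/gemma-3-4b-it"]:
    path = hf_hub_download(repo_id, "config.json")
    with open(path) as f:
        config = json.load(f)
    # Multimodal checkpoints carry a "vision_config" section; text-only ones don't.
    print(repo_id, "multimodal:", "vision_config" in config)
```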