Issues: Blaizzy/mlx-vlm
Issue with Finetuning Mistral (Mistral-Small-3.1-24B-Instruct-2503-4bit) · #284, opened Mar 30, 2025 by keshavpeswani
Checking whether Awni's video branch improvements are merged into main? · #280, opened Mar 27, 2025 by southkorea2013
NAMO-R1 Model is Amazing! How to Convert it to MLX? · #277, opened Mar 26, 2025 by yourappleintelligence
It seems that version v0.1.19 does not follow instructions and only describes images · #259, opened Mar 19, 2025 by swlee60
Gemma 3 models do not see the image when the prompt is too long [bug] · #242, opened Mar 12, 2025 by asmeurer
Add FastAPI server [enhancement, good first issue] · #241, opened Mar 12, 2025 by Blaizzy
Any speed reference compared with candle or llama.cpp for Qwen2.5 VL 4B? · #240, opened Mar 12, 2025 by MonolithFoundation
Ensure backwards compatibility with transformers [good first issue] · #230, opened Mar 6, 2025 by Blaizzy
Add support for Ovis 2? [enhancement] · #212, opened Feb 22, 2025 by alexgusevski
Models should not need to be re-loaded between back-to-back prompts [bug] · #210, opened Feb 21, 2025 by neilmehta24
Unrecognized image processor in mlx-community/Qwen2.5-VL-7B-Instruct-4bit · #209, opened Feb 21, 2025 by leoho0722