[Bug]: vLLM crashes when running Qwen/Qwen2.5-Coder-32B-Instruct on two H100 GPUs #10296
Labels: bug (Something isn't working)
Your current environment
The output of `python collect_env.py`
Model Input Dumps
No response
🐛 Describe the bug
vLLM crashes when running Qwen/Qwen2.5-Coder-32B-Instruct on two H100 GPUs.
Command to reproduce:
Error message:
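The reporter's exact command and error log were not captured above. As a point of reference only (not the reporter's actual invocation), a typical way to serve a 32B model across two GPUs with vLLM is to launch the OpenAI-compatible server with tensor parallelism enabled:

```shell
# Hypothetical reproduction sketch -- assumes vLLM is installed and
# both H100s are visible; this is NOT the reporter's original command.
vllm serve Qwen/Qwen2.5-Coder-32B-Instruct \
    --tensor-parallel-size 2
```

Setting `--tensor-parallel-size 2` shards the model's weights across the two GPUs, which is generally required for a 32B model that does not fit in a single H100's memory.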
Before submitting a new issue...