Help serving vllm with kuberay #11957
torsteinelv announced in Q&A
Replies: 1 comment · 1 reply
-
To fix this TypeError, you can just add `|`
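The bare `|` here most likely refers to the YAML block-scalar indicator: in a KubeRay RayService manifest, `serveConfigV2` is a single multi-line string, so it is normally written as `serveConfigV2: |` followed by the indented Ray Serve config. That reading is an interpretation rather than something stated in the thread; a minimal sketch of the pattern, with placeholder application values:

```yaml
# Sketch only: serveConfigV2 written as a YAML block scalar (note the "|"),
# so the whole Ray Serve config is handed to the operator as one string.
# The application values below are placeholders, not taken from this thread.
spec:
  serveConfigV2: |
    applications:
      - name: llm
        route_prefix: /
        import_path: serve_vllm:app   # hypothetical module:variable
```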
-
Hello,
I have been trying to get this setup working for a long, long time, and I am now about to give up. I have been googling and reading through other people's issues and deployments, and I see that when people ask for help on KubeRay's GitHub page they are recommended to ask on vLLM's page instead, so I hope someone here will be able to assist.
Currently I have 3 nodes with 1 GPU in each of them, and I am able to get the deployment started on all 3 nodes, but I get an Internal Server Error when I try to make a request to the OpenAI API. I was hoping someone could help me here :)
Current setup:
[collapsed code block]
Error:
[collapsed error output]
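For orientation (the actual manifest and error are not shown above), a minimal RayService sketch for the kind of setup described, 3 worker nodes with 1 GPU each serving vLLM through Ray Serve, could look roughly like this; the image tag, module path (`serve_vllm:app`), and deployment name are placeholder assumptions, not the poster's actual values:

```yaml
apiVersion: ray.io/v1
kind: RayService
metadata:
  name: vllm-service
spec:
  # Ray Serve config, passed to the operator as a single string (hence the "|").
  serveConfigV2: |
    applications:
      - name: llm
        route_prefix: /
        import_path: serve_vllm:app          # hypothetical module:variable
        deployments:
          - name: VLLMDeployment             # placeholder deployment name
            num_replicas: 3                  # one replica per GPU worker
            ray_actor_options:
              num_gpus: 1
  rayClusterConfig:
    headGroupSpec:
      rayStartParams:
        dashboard-host: "0.0.0.0"
      template:
        spec:
          containers:
            - name: ray-head
              image: rayproject/ray:2.9.0    # example image tag
              ports:
                - containerPort: 8000        # Ray Serve HTTP port
                  name: serve
    workerGroupSpecs:
      - groupName: gpu-group
        replicas: 3
        minReplicas: 3
        maxReplicas: 3
        rayStartParams: {}
        template:
          spec:
            containers:
              - name: ray-worker
                image: rayproject/ray:2.9.0  # example image tag
                resources:
                  limits:
                    nvidia.com/gpu: "1"      # one GPU per worker node
```

With a manifest along these lines, requests to the OpenAI-compatible route go through the Serve HTTP port on the head service, and a 500 Internal Server Error usually means the Serve application itself raised an exception, so the Serve and replica logs on the head node are the first place to look.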