This repository was archived by the owner on Nov 20, 2024. It is now read-only.
I have the following setup: a server with 8 GPUs, each partitioned into 5 vGPUs, so an empty node has 40 vGPUs in total. I create a pod that requests 5 vGPUs, and it lands on the first GPU of that node. I assumed Kubernetes would not schedule any further pods onto that GPU. However, if I create another pod with a vGPU request, it MAY still be placed on the first GPU, even though the first pod already consumed the whole physical card. Is it possible to fix this so the scheduler respects physical GPU boundaries?
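For reference, a minimal manifest of the kind described above might look like the sketch below. The resource name `nvidia.com/vgpu` is an assumption here; the actual name depends on how the device plugin on the node advertises its vGPU resources, so check `kubectl describe node` for the exact key.

```yaml
# Hypothetical pod requesting 5 vGPUs, i.e. one full physical GPU
# when each card is split into 5 vGPUs. The resource name
# "nvidia.com/vgpu" is an assumption and may differ in your setup.
apiVersion: v1
kind: Pod
metadata:
  name: vgpu-test
spec:
  containers:
    - name: cuda-workload
      image: nvidia/cuda:12.2.0-base-ubuntu22.04
      command: ["sleep", "infinity"]
      resources:
        limits:
          nvidia.com/vgpu: 5
```

Note that the default scheduler only counts the extended resource totals per node; whether the 5 vGPUs are packed onto one physical card or spread across several is decided by the device plugin's allocation logic, not by the scheduler.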