KEP-5322: DRA: Handle permanent driver failures #5549
Conversation
nojnhuh commented on Sep 19, 2025
- One-line PR description: Add KEP to define how DRA drivers can report permanent failures and how the kubelet should handle those.
- Issue link: DRA: Handle permanent driver failures #5322
- Other comments:
KEP-5322: DRA: Handle permanent driver allocation failures
I've filled out the Summary, Motivation, and User Stories sections in case there's any feedback there. I'm continuing to fill out the rest of the KEP. /cc @klueska @SergeyKanzhelev @lauralorenz @pohly @jackfrancis
/wg device-management
Fill out the rest of the doc
keps/sig-node/5322-dra-driver-permanent-allocation-failure/README.md
### Kubelet Handling
How do we "recommend" to protect from re-scheduling to that same node by scheduler? Should we recommend that the DRA driver MUST taint the resource (or delete it) before returning the permanent failure?
Or should we restart conversation on marking a node as "not suitable" for a Pod for a while?
Yes, I think a taint is the recommended approach when the issue is with the device that got allocated. There may be cases though where the issue is in the opaque parameters associated with a request. In those instances where a driver could successfully allocate the same device with a different set of parameters, a taint might get in the way more than it would help.
I'll mention this in the doc.
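To make that trade-off concrete, here is a minimal driver-side sketch of the decision described above. All helper names (`taintDevice`, `permanentError`) and the failure categories are hypothetical illustrations under the assumptions of this KEP and KEP-5055, not part of any real DRA driver API.

```go
package driver

import (
	"context"
	"fmt"
)

// failureKind is a hypothetical classification used only for this sketch.
type failureKind int

const (
	failureTransient       failureKind = iota // retryable; kubelet keeps retrying
	failureBadDevice                          // the allocated device itself is broken
	failureBadOpaqueConfig                    // the request's opaque parameters are invalid
)

// taintDevice stands in for applying a device taint (KEP-5055); hypothetical helper.
func taintDevice(ctx context.Context, device string) error { return nil }

// permanentError stands in for returning the error with this KEP's
// permanent/error-type signal set; hypothetical helper.
func permanentError(err error) error { return fmt.Errorf("permanent: %w", err) }

// reportPrepareFailure sketches the decision above: taint the device only when
// the device itself is at fault, but report the error as permanent in both
// permanent cases so the kubelet stops retrying.
func reportPrepareFailure(ctx context.Context, device string, kind failureKind, err error) error {
	switch kind {
	case failureBadDevice:
		if terr := taintDevice(ctx, device); terr != nil {
			return fmt.Errorf("tainting %s: %v (original error: %v)", device, terr, err)
		}
		return permanentError(err)
	case failureBadOpaqueConfig:
		// The same device could serve other requests with different
		// parameters, so do not taint it.
		return permanentError(err)
	default:
		return err // transient: let the kubelet retry
	}
}
```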
keps/sig-node/5322-dra-driver-permanent-allocation-failure/README.md
for miscategorized errors to cause a Pod to terminally fail even when a
subsequent attempt might allow the Pod's startup to progress.

## Design Details
What do we put into the pod status fields introduced in #4680 in this case?
Since #4680 describes only the health of devices and this KEP also takes into account the other parameters associated with a request, I think we should let DRA drivers decide how the Pod status is utilized independently of this KEP. "This device is unhealthy" I think is different enough from "this request cannot be fulfilled" that coupling the two features in some way might be confusing.
If the issue is that these permanent errors from the DRA driver will be hard to diagnose, would generating an Event when one of a Pod's requests permanently fails be enough?
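If an Event turns out to be the right surface, the kubelet could emit one via client-go's event recorder roughly as sketched below. This only illustrates the suggestion; the reason string and message wording are made up, not an agreed-upon API.

```go
package kubelet

import (
	v1 "k8s.io/api/core/v1"
	"k8s.io/client-go/tools/record"
)

// recordPermanentClaimFailure sketches how a permanent failure reported by a
// DRA driver could be surfaced as a Warning Event on the Pod. The reason
// string and message format here are hypothetical.
func recordPermanentClaimFailure(recorder record.EventRecorder, pod *v1.Pod, claimName, driverName string, err error) {
	recorder.Eventf(pod, v1.EventTypeWarning, "PermanentResourceClaimFailure",
		"DRA driver %s reported a permanent error for ResourceClaim %s: %v",
		driverName, claimName, err)
}
```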
I want to make sure we surface as much information as possible to explain why the Pod didn't fit. The Pod admission failure message may have some details, but it may be too shallow. I wonder if we would be better off adding details into the Pod status for each device. That way we can surface information when ALL devices cannot fit vs. when a single device failed. Trying to squeeze all this information into a single Event or a single admission failure message may be very messy.
In any case, writing down exactly where this information goes, so the user can diagnose the admission problem, is needed for this KEP.
I think an Event is actually already generated in this case, but I can try forcing this situation to make sure: https://github.com/kubernetes/kubernetes/blob/8ebc216c595158389fa20c4fff75a8c84cbe3fff/pkg/kubelet/kuberuntime/kuberuntime_manager.go#L1328
Overall though, I agree that having a clear path to identify these issues is necessary. I'll document here that the kubelet logs and Events for the Pod will show the error but may need to be updated to notate whether the error is permanent or not. Happy to add whatever else we need to make these more visible though.
So will we be able to tell the customer about all DRA devices that are misconfigured (various plugins and different devices)? Or will the error be generic and only for the first plugin? What if Events are throttled after continuous rescheduling of a Pod to the same Node over and over? Imagine that after an hour of retries a user tries to understand what happened, but there are no more Events. Will the Pod admission failure indicate clearly what happened? Please specify.
Making it possible to map the allocation error to the plugin and device (as in the fields in #4680) would make it very easy to understand.
Also, please add test cases ensuring that the errors exposed have some minimal level of detail.
Addressing Sergey's feedback
Retitle to "KEP-5322: DRA: Handle permanent driver failures" to match the feature gate name? Please also rename the issue.
"allocation failure" is misleading because allocation happens during scheduling.
keps/sig-node/5322-dra-driver-permanent-allocation-failure/README.md
- [X] Feature gate (also fill in values in `kep.yaml`)
  - Feature gate name: DRAHandlePermanentDriverFailures
I was about to suggest shortening to `DRAPermanentDriverFailures`, but `--features DRAPermanentDriverFailures=true` looks like it asks for failures, so better not 😁
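For reference, wiring up the gate discussed above would look roughly like the following. This is only a sketch of the standard Kubernetes feature-gate pattern, not the actual change; in real code the entry would be added to the existing default feature gate map in `pkg/features` rather than a standalone map.

```go
package features

import "k8s.io/component-base/featuregate"

const (
	// DRAHandlePermanentDriverFailures guards the kubelet behavior described
	// in this KEP; the name matches the feature gate from the discussion above.
	DRAHandlePermanentDriverFailures featuregate.Feature = "DRAHandlePermanentDriverFailures"
)

// illustrativeFeatureSpecs shows only the shape of the registration; the real
// entry belongs in Kubernetes' default feature gate map.
var illustrativeFeatureSpecs = map[featuregate.Feature]featuregate.FeatureSpec{
	DRAHandlePermanentDriverFailures: {Default: false, PreRelease: featuregate.Alpha},
}
```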
Rename, scrub usage of "allocate"
Address Patrick's other comments
Please add the PRR approvers by creating a file at keps/prod-readiness/sig-node/5322.yaml so this PR can be properly tracked for PRR review. You can refer to the examples in keps/prod-readiness/sig-node for guidance on how to populate the file.
Thank you! I've started a thread on Slack to see if any PRR reviewers are available to take this one.
I think I made this change everywhere except the title of this PR: /retitle KEP-5322: DRA: Handle permanent driver failures
@nojnhuh Would you also add a corresponding file under https://github.com/kubernetes/enhancements/tree/master/keps/prod-readiness/sig-node for this KEP and list me as the PRR reviewer?
Add PRR YAML
Done, thanks!
Remove beta PRR approver
/approve
For Alpha PRR
This appears to handle a new opt-in feature controlled by a field in the recommended way.
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull-request has been approved by: jpbetz, nojnhuh. The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing `/approve` in a comment.
Update kep.yaml
// When true and a non-empty error is returned, indicates that the
// error is permanent. Permanent errors are expected to fail
// consistently for a given request and should not be retried.
bool permanent_error = 3;
Instead of bool, can this be an enum of error type?
I was wondering if this permanent/non-permanent distinction is binary enough that a bool makes the most sense, but @johnbelamaric's point during the WG Device Management meeting today about potentially handling user errors (e.g. bad opaque config) differently from things like hardware problems is definitely harder to do without some more context.
I've updated to make this an enum to at least give us that extra flexibility.
Add @mrunalp as an approver
`bool permanent_error` -> `error_type` enum
Keywords which can automatically close issues and at(@) or hashtag(#) mentions are not allowed in commit messages. The list of commits with invalid commit messages:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
I'll squash these commits once the PR is ready to merge and remove all the intermediate commit messages. I've been adding those mostly for myself to track changes here.
Unknown = 0;
// Permanent errors are expected to fail consistently
// for a given request and should not be retried.
Permanent = 1;
should we distinguish "bad config" and "failed device"?
I'm trying to think of how the kubelet would behave differently if it knew about both.
Marking a Pod as `Failed` when a device has failed and triggering a higher-level controller to delete that Pod and create a new one (which lands on a different node after the DRA driver taints that device) seems like a reasonable course of action.
How should bad config be handled? If rescheduling an equivalent Pod/ResourceClaim won't help, is marking the Pod `Failed` not the right thing to do? Since ResourceClaims, ResourceClaimTemplates, and where those are referred to in Pods are all immutable, I don't know if there's a more appropriate status. The Pod and its ResourceClaims need to be deleted anyway.
If a Pod transitioning to the `Failed` phase is common to both scenarios, then I think the only difference would be whether or not a taint is applied to the device. Since drivers can manage taints themselves, maybe the kubelet doesn't need to know the difference?
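A minimal sketch of the kubelet-side handling being discussed, assuming the driver's response carries the proposed permanent/error-type signal; the function name, plumbing, and reason string are hypothetical and only illustrate "stop retrying and mark the Pod Failed":

```go
package kubelet

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// handlePrepareResult sketches the branch discussed above: transient (or
// unclassified) errors keep today's behavior of retrying preparation on the
// next sync, while a permanent error moves the Pod to Failed.
func handlePrepareResult(pod *v1.Pod, err error, permanent bool) {
	if err == nil {
		return
	}
	if !permanent {
		// Leave the Pod pending; the kubelet will retry NodePrepareResources.
		return
	}
	pod.Status.Phase = v1.PodFailed
	pod.Status.Reason = "ResourcePrepareFailed" // hypothetical reason string
	pod.Status.Message = fmt.Sprintf("permanent error preparing DRA resources: %v", err)
}
```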
My suggestion here (along the lines of #5549 (comment)) is to introduce the distinction between those early. At a minimum, the message to the user will be different. At maximum, we may need a different reaction to it: retrying slowly on a device failure (waiting for the controller to decide what to do based on taints) vs. failing the admission for bad config and marking the node-device combination as not suitable.
The message from the driver does bubble up to an Event on the Pod at least. This is a synthetic example from the dra-example-driver showing invalid config, which could probably be made more explicit that it's indicating a user error. The message might instead indicate a problem with the device or some other kind of error. Is there a better place to mirror this message from the driver?
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 8s default-scheduler Successfully assigned gpu-test5/dep0-798858f588-nnpsj to dra-example-driver-cluster-worker
Warning FailedPrepareDynamicResources 8s kubelet Failed to prepare dynamic resources: prepare dynamic resources: NodePrepareResources failed for ResourceClaim dep0-798858f588-nnpsj-shared-gpus-jblsv: error preparing devices for claim 3cf849a7-5421-49e1-bb03-92fab77ad38f: prepare failed: error applying GPU config: too many partitions
I'm wondering though: if a Pod+ResourceClaim can never start because of how it's defined by a user, then `Failed` isn't the right terminal outcome if that means Pod controllers will continually delete and recreate the same thing. But it's also wasteful for the kubelet to continuously check in on a Pod that we know won't ever succeed. Is there an existing well-known way we can express that a Pod will not be able to start on this Node or any other Node?
@klueska Can you think of anything more that drivers can do to prevent misconfiguration in the first place? Are there cases that webhook validation can't catch?
prevent the replacement Pod from being allocated the same device which is
expected to fail again the same way.

When [device taints and tolerations](https://kep.k8s.io/5055) are available, DRA
So once tainted, the Pod will be deleted, correct? Why will we need the kubelet to handle this case in a special way? Would it be best to let the controller delete the Pod in this case universally?
This way the only error the DRA plugin will report as permanent is "bad config" - an unfit device.
And all permanent-failure race conditions will be handled the same way by the controller, outside of this KEP.
I don't think the kubelet needs to do anything special here other than mark the Pod as `Failed` when a DRA driver returns a permanent error. This is the rough sequence of events I'd expect when a driver encounters a problem with the allocated device:
- Pod is scheduled onto a Node
- kubelet invokes DRA driver to prepare resources
- DRA driver encounters a permanent error while preparing resources
- DRA driver taints the device
- DRA driver returns the permanent error to the kubelet
- kubelet marks the Pod as `Failed`
- A higher-level Pod controller sees one of its Pods is `Failed` and tries to replace it
- Scheduler allocates an untainted device for the replacement Pod
When the user-provided config is invalid and rescheduling won't help though, I can see where marking the Pod `Failed` could cause Pod controllers to spin on the same error forever. @klueska How was the NVIDIA driver planning to handle permanent errors caused by user misconfiguration?
My point here is that tainting of a device WILL result in Pod deletion by the taint controller:
> A new controller watches tainted devices and deletes pods using them unless they tolerate the device taint
Why will we have two different ways to delete a Pod when the device is tainted as permanently failed? It would be best to keep those unified. Otherwise, a millisecond difference in device health detection results in different error messages on the Pod.
This KEP may be useful for misconfiguration-based issues. And this is where the discussion on handling the crash loop backoff is very important.
The KEP mentions using `NoSchedule` taints as a lowest common denominator, which will prevent the same error from occurring later without affecting other Pods that happen to be running successfully already with that device. I see though how using a `NoExecute` taint accomplishes the same overall goal.
@klueska Can you think of any reason we wouldn't instead encourage drivers to do what Sergey is suggesting and simply put a `NoExecute` taint on the device and let the eviction controller delete the Pod? Such as if it's likely that a "permanent error" related to the device itself only affects new Pods vs. Pods already using it?
Yeah, the only reason I can see for doing this rather than going the taint direction is if a driver can allocate devices differently to different pods - like some pods' requests would fail but others would not. I would be kind of surprised if that happened, and maybe we need a more concrete use case to catch that situation if so.
// Permanent errors are expected to fail consistently
// for a given request and should not be retried.
Permanent = 1;
}
Shouldn't there be a `Transient` type or something? It seems odd to classify all non-permanent failures as unknown.
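For illustration, a Go mirror of what the enum might look like if a transient value were added as suggested; the names and numbering are hypothetical, not the final proto definition:

```go
package drapb

// ErrorType is an illustrative Go mirror of the proposed error_type enum.
type ErrorType int32

const (
	// ErrorTypeUnknown keeps today's behavior: the kubelet may retry.
	ErrorTypeUnknown ErrorType = 0
	// ErrorTypePermanent errors are expected to fail consistently for a
	// given request and should not be retried.
	ErrorTypePermanent ErrorType = 1
	// ErrorTypeTransient would explicitly mark failures that are expected
	// to succeed on retry, per the suggestion above.
	ErrorTypeTransient ErrorType = 2
)
```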
Overall based on the feedback from @SergeyKanzhelev and @haircommander in #5549 (comment) about using