Dynamic provisioning ignores securityContext #1202
Hi @benethon, thank you for bringing this to our attention, and apologies for the delayed response. I have reproduced this behavior on v1.7.1 of the driver with k8s v1.28. We will have to push out a PR to address this so that the securityContext is not ignored.
@seanzatzdev-amazon Do we have any progress on this? I am facing this issue in k8s 1.29 too. @benethon Did you face the issue in k8s versions < 1.27? If yes, how did you work around it (if you managed to, by any chance)?
@snowmanstark No, we didn't try anything below 1.27. We worked around it temporarily by using an EBS volume rather than EFS.
@nishant221 Does your PR #1152 fix this issue? If yes, did the 1.7.6 release ship #1152?
@seanzatzdev-amazon Is the fix for this issue in the 1.7.6 release?
@seanzatzdev-amazon Is there any update on this fix? My StatefulSet is completely useless without this being fixed.
Same problem here. The volume is mounted as GID 1002 while the securityContext specifies 1000.
I had to use a workaround and create a new storage class on which I set the gid and uid to what I need, and then use that storage class in my StatefulSet. Hope that helps. If you would like to see the manifest, let me know and I can share it when I get home.
That's what I had to do as well: create a specific StorageClass just for this app. Which is fine, but it's definitely not ideal.
Creating a storage class with a fixed gid of 1000, and then referencing it from the workload, works for me.
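As an illustration of that workaround only (the original manifests are not reproduced here; the file system ID, names, and size are placeholders), a storage class using the driver's `uid`/`gid` access-point parameters, plus a PVC that references it, might look like:

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: efs-sc-fixed-gid
provisioner: efs.csi.aws.com
parameters:
  provisioningMode: efs-ap
  fileSystemId: fs-12345678   # placeholder: your EFS file system ID
  directoryPerms: "700"
  uid: "1000"                 # POSIX user applied at access point creation
  gid: "1000"                 # POSIX group applied at access point creation
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data
spec:
  accessModes: ["ReadWriteMany"]
  storageClassName: efs-sc-fixed-gid
  resources:
    requests:
      storage: 1Gi
```

With a fixed `gid`, every access point provisioned through this class is owned by the same group, so the workload no longer depends on `fsGroup` being honored.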
The Kubernetes project currently lacks enough contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
Hi all, the EFS CSI Driver uses access points for dynamic provisioning. When you create a volume, we dynamically select a GID from the range for this PV, create an access point with this GID, and later mount the volume using this access point.

When you use access points, access control is done on the EFS side, and that identity is enforced for all access through the access point. Even with a mismatch between directory ownership and pod identity, the pod should still be able to access the volume. Changing the ownership of the access point root dir would mean that we can no longer access the volume through the access point. This means that for dynamic provisioning with access points, the pod securityContext (fsGroup) cannot be applied to the volume.

If you require the volume to be owned by a specific GID, we do support setting a static gid in the storage class, which will be applied at access point creation time. Please let me know if you have any follow-up questions or concerns.
/kind bug
What happened?
I have two clusters, one on v1.27 named `eks-poc` and another new one created recently on v1.28 called `eks-dev`. On `eks-dev`, I noticed a problem: despite setting a securityContext to force the fsGroup of the mounted volume to be 1035 [1], it doesn't respect this and instead sets it to 1999 (one off the upper limit set in the storage class [2]).

We didn't have this problem on `eks-poc`, but we updated it this morning to v1.28 and the problem appeared, so to me it seems like the issue is related to Kubernetes 1.28.

I installed the EFS driver manually on `eks-poc` and via the EKS Add-on on `eks-dev`. The image of the efs-plugin container in the efs-csi-controller pod on `eks-poc` is `602401143452.dkr.ecr.eu-central-1.amazonaws.com/eks/aws-efs-csi-driver:v1.5.7` and the image on `eks-dev` is `602401143452.dkr.ecr.eu-central-1.amazonaws.com/eks/aws-efs-csi-driver:v1.7.1` - so different versions, but the common factor that makes it stop working is Kubernetes 1.28.

Some other things we've tried: rolling back the `eks-dev` EFS driver version to `1.5.7` - the problem still happens, but the POSIX user is now 1000 rather than 199x. I haven't checked the changelog, but I assume the allocator was switched around to count down from the maximum gid, as evidenced by the log line "Allocator found GID which is already in use: -1 - trying next one."

What you expected to happen?
The mounted volume to be owned by user 1035 from the securityContext, not the GID set by the provisioner.
How to reproduce it (as minimally and precisely as possible)?
[1] StatefulSet YAML (also happening on other Deployments; note the command to override the entrypoint):
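As an illustration only (names, image, and storage size are placeholders, not the reporter's actual manifest), a StatefulSet with a securityContext of this shape, and an entrypoint override that prints the volume's ownership, might look like:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: example
spec:
  serviceName: example
  replicas: 1
  selector:
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example
    spec:
      securityContext:
        fsGroup: 1035              # expected group of the mounted volume
      containers:
        - name: app
          image: busybox           # placeholder image
          # Override the entrypoint to inspect who owns the mount
          command: ["sh", "-c", "id && ls -ln /data && sleep 3600"]
          volumeMounts:
            - name: data
              mountPath: /data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteMany"]
        storageClassName: efs-sc   # placeholder storage class name
        resources:
          requests:
            storage: 1Gi
```

The `ls -ln /data` output is what reveals the mismatch: with the bug, the numeric GID shown is the one the provisioner allocated, not 1035.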
[2] StorageClass:
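Again as an illustration only (the file system ID and the exact range are placeholders, not the reporter's values), a dynamic-provisioning StorageClass with a GID range, using the driver's `gidRangeStart`/`gidRangeEnd` parameters, might look like:

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com
parameters:
  provisioningMode: efs-ap
  fileSystemId: fs-12345678   # placeholder: your EFS file system ID
  directoryPerms: "700"
  gidRangeStart: "1000"       # placeholder lower bound of the allocated GIDs
  gidRangeEnd: "2000"         # placeholder upper bound; the driver picks from this range
```

The "one off the upper limit" symptom described above corresponds to the driver allocating a GID from this range rather than honoring `fsGroup`.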
This outputs (with 1.5.7) - note the 1000 user, not 1035
Environment
- Kubernetes version (use `kubectl version`): see above
Please also attach debug logs to help us better diagnose
Attached
csi-provisioner.txt
efs-plugin.txt