patches cause an error with $patch:delete in files with multiple patches #5552
Comments
This issue is currently awaiting triage. SIG CLI takes a lead on issue triage for this repo, but any Kubernetes member can accept issues by applying the triage/accepted label.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Experiencing the same issue.
Hello @ginokok1996, this issue is still in waiting status. The only way to resolve it is to split the blocks containing `$patch: delete` into separate files, but if you have a lot of such files this is not convenient. As I see it, we had the same issue with multiple blocks for kustomize 5.3.0, and it was resolved. So I hope this same issue, but with `$patch: delete` in blocks, can be resolved too. @koba1t, @natasha41575, could you help with this issue please?
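For reference, here is what the split-into-separate-files workaround looks like concretely. This is a minimal sketch with hypothetical file and resource names; each `$patch: delete` block gets its own single-document file, and the overlay's kustomization lists them individually:

```yaml
# overlay/test/kustomization.yaml -- file names are illustrative
resources:
  - ../../base
patches:
  - path: delete-cronjob-a.yaml   # one $patch: delete document per file
  - path: delete-cronjob-b.yaml
```

```yaml
# overlay/test/delete-cronjob-a.yaml -- a single-document patch file
$patch: delete
apiVersion: batch/v1
kind: CronJob
metadata:
  name: cronjob-a
```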
If someone wants an ugly hack for splitting their files in bash: my file looked like this:

```yaml
$patch: delete
apiVersion: batch/v1
kind: CronJob
metadata:
  name: <name of service>
---
$patch: delete
apiVersion: batch/v1
kind: CronJob
metadata:
  name: <name of service>
```

This for loop iterates over each `name:` field, grabs the 4 rows above it (`grep -B4`), and puts the output in a file named `<name>-delete-patch.yaml`:

```shell
for i in $(grep name: delete-patches.yaml | cut -d: -f2 | tr -d ' '); do
  sed "s/---//" delete-patches.yaml | grep -B4 "$i" > "$i-delete-patch.yaml"
done
```
Still reproduces with Kustomize v5.5.0.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
What happened?
patchesStrategicMerge was used to include code declaring `$patch: delete` in patches, to rewrite the output from bases and granularly include some structures per environment (after the initial declaration in bases).
After upgrading Kustomize from 4.5.x to 5.3.0 and migrating from patchesStrategicMerge to patches, this causes an error in files with multiple patches containing `$patch: delete`.
What did you expect to happen?
Running the command "kustomize build ./overlay/test" shows resources that have not been disabled for overlays.
How can we reproduce it (as minimally and precisely as possible)?
Run this in a terminal or with a bash file, and compare the output using patches versus patchesStrategicMerge.
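The original reproduction script is not included above; the following is a minimal sketch of the layout that triggers the crash, reconstructed from the report. Directory names follow the `./overlay/test` path mentioned in the expected behavior, but the resource names (other than `workflow-suffix3`) are illustrative.

```shell
#!/bin/sh
# Base with three deployments; the overlay deletes two of them via a single
# multi-document patch file -- the shape that panics under `patches:`.
mkdir -p base overlay/test

cat > base/deployments.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: workflow-suffix1
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: workflow-suffix2
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: workflow-suffix3
EOF

cat > base/kustomization.yaml <<'EOF'
resources:
  - deployments.yaml
EOF

# One patch file containing multiple $patch: delete documents.
cat > overlay/test/delete-patches.yaml <<'EOF'
$patch: delete
apiVersion: apps/v1
kind: Deployment
metadata:
  name: workflow-suffix1
---
$patch: delete
apiVersion: apps/v1
kind: Deployment
metadata:
  name: workflow-suffix2
EOF

cat > overlay/test/kustomization.yaml <<'EOF'
resources:
  - ../../base
patches:
  - path: delete-patches.yaml
EOF

# With kustomize >= 5.3.0 this panics; with patchesStrategicMerge under
# 4.5.x it printed only workflow-suffix3:
#   kustomize build ./overlay/test
```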
Expected output
a proper output with workflow-suffix3 deployment declaration
Actual output
```
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x40 pc=0x9902cb]
goroutine 1 [running]:
sigs.k8s.io/kustomize/kyaml/yaml.(*RNode).Content(...)
	sigs.k8s.io/kustomize/kyaml/yaml/rnode.go:707
sigs.k8s.io/kustomize/kyaml/yaml.(*RNode).getMapFieldValue(0x14002260b08?, {0x10476bfb1?, 0x7?})
	sigs.k8s.io/kustomize/kyaml/yaml/rnode.go:420 +0x54
sigs.k8s.io/kustomize/kyaml/yaml.(*RNode).GetApiVersion(...)
	sigs.k8s.io/kustomize/kyaml/yaml/rnode.go:402
sigs.k8s.io/kustomize/kyaml/resid.GvkFromNode(0x140017648b8?)
	sigs.k8s.io/kustomize/kyaml/resid/gvk.go:32 +0x40
sigs.k8s.io/kustomize/api/resource.(*Resource).GetGvk(...)
	sigs.k8s.io/kustomize/api/resource/resource.go:57
sigs.k8s.io/kustomize/api/resource.(*Resource).CurId(0x1400044e960)
```
Kustomize version
kustomize 5.3.0
Operating system
Linux