@@ -5,7 +5,7 @@ createdAt: "2023-06-12T20:47:47Z"
description: |-
Disallow the following scenarios when deploying PodDisruptionBudgets or resources that implement the replica subresource (e.g. Deployment, ReplicationController, ReplicaSet, StatefulSet): 1. Deployment of PodDisruptionBudgets with .spec.maxUnavailable == 0 2. Deployment of PodDisruptionBudgets with .spec.minAvailable == .spec.replicas of the resource with replica subresource This will prevent PodDisruptionBudgets from blocking voluntary disruptions such as node draining.
https://kubernetes.io/docs/concepts/workloads/pods/disruptions/
digest: 13e278751ca1c53c905cc7d1126e182e8212a0724d1b05ca363870fb92daf926
digest: ee0e47576c883fcc92cfd191a418a8b748bc1db7f6d9b76482b015837debe2a9
license: Apache-2.0
homeURL: https://open-policy-agent.github.io/gatekeeper-library/website/poddisruptionbudget
keywords:
@@ -11,3 +11,5 @@ spec:
kinds: ["PodDisruptionBudget"]
- apiGroups: [""]
kinds: ["ReplicationController"]
- apiGroups: ["autoscaling/v2"]
Copilot AI Nov 3, 2025
Invalid apiGroups value: should be autoscaling not autoscaling/v2. The version belongs in the versions field, not the apiGroups field. This should be apiGroups: ["autoscaling"].

Suggested change
- apiGroups: ["autoscaling/v2"]
- apiGroups: ["autoscaling"]

kinds: ["HorizontalPodAutoscaler"]
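The review comment above turns on the distinction between a Kubernetes API group ("autoscaling") and a group/version string ("autoscaling/v2"). As a minimal illustration (this helper is hypothetical, not part of the PR, and merely mirrors how apimachinery's ParseGroupVersion splits such strings), the `apiGroups` field in a Gatekeeper match should carry only the group half:

```python
def parse_group_version(api_version: str) -> tuple[str, str]:
    """Split a "group/version" string into (group, version).

    Core-group strings such as "v1" have an empty group, which is why
    Gatekeeper match entries for core kinds use apiGroups: [""].
    """
    if "/" in api_version:
        group, version = api_version.split("/", 1)
        return group, version
    return "", api_version  # core group

print(parse_group_version("autoscaling/v2"))  # ('autoscaling', 'v2')
print(parse_group_version("v1"))              # ('', 'v1')
```

Only the first element of that pair belongs in `apiGroups`; the second would go in a `versions` field if one were specified.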
@@ -0,0 +1,23 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment-allowed-1
namespace: default
labels:
app: nginx
spec:
selector:
matchLabels:
app: nginx
example: allowed-deployment-1
template:
metadata:
labels:
app: nginx
example: allowed-deployment-1
spec:
containers:
- name: nginx
image: nginx:1.14.2
ports:
- containerPort: 80
@@ -0,0 +1,19 @@
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
name: nginx-hpa-allowed
namespace: default
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: nginx-deployment
minReplicas: 3
maxReplicas: 5
metrics:
- type: Resource
resource:
name: cpu
target:
type: Utilization
averageUtilization: 50
@@ -0,0 +1,19 @@
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
name: nginx-hpa-allowed
namespace: default
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: nginx-deployment
minReplicas: 3
maxReplicas: 5
metrics:
- type: Resource
resource:
name: cpu
target:
type: Utilization
averageUtilization: 50
@@ -0,0 +1,19 @@
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
name: nginx-hpa-disallowed
namespace: default
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: nginx-deployment-disallowed
minReplicas: 3
maxReplicas: 5
metrics:
- type: Resource
resource:
name: cpu
target:
type: Utilization
averageUtilization: 50
@@ -45,3 +45,45 @@ tests:
- samples/poddisruptionbudget/example_inventory_disallowed.yaml
assertions:
- violations: yes
- name: hpa-allowed-min-available
object: samples/poddisruptionbudget/example_hpa_allowed1.yaml
inventory:
- samples/poddisruptionbudget/example_allowed_deployment_hpa.yaml
- samples/poddisruptionbudget/example_inventory_allowed1.yaml
assertions:
- violations: no
- name: hpa-allowed-min-available-inventory
object: samples/poddisruptionbudget/example_allowed_deployment_hpa.yaml
inventory:
- samples/poddisruptionbudget/example_hpa_allowed1.yaml
- samples/poddisruptionbudget/example_inventory_allowed1.yaml
assertions:
- violations: no
- name: hpa-allowed-max-unavailable
object: samples/poddisruptionbudget/example_hpa_allowed2.yaml
inventory:
- samples/poddisruptionbudget/example_allowed_deployment_hpa.yaml
- samples/poddisruptionbudget/example_inventory_allowed2.yaml
assertions:
- violations: no
- name: hpa-allowed-max-unavailable-inventory
object: samples/poddisruptionbudget/example_allowed_deployment_hpa.yaml
inventory:
- samples/poddisruptionbudget/example_hpa_allowed2.yaml
- samples/poddisruptionbudget/example_inventory_allowed2.yaml
assertions:
- violations: no
- name: hpa-disallowed-min-available
object: samples/poddisruptionbudget/example_hpa_disallowed.yaml
inventory:
- samples/poddisruptionbudget/example_disallowed_deployment.yaml
- samples/poddisruptionbudget/example_inventory_disallowed.yaml
assertions:
- violations: yes
- name: hpa-disallowed-min-available-inventory
object: samples/poddisruptionbudget/example_disallowed_deployment.yaml
inventory:
- samples/poddisruptionbudget/example_hpa_disallowed.yaml
- samples/poddisruptionbudget/example_inventory_disallowed.yaml
assertions:
- violations: yes
150 changes: 141 additions & 9 deletions artifacthub/library/general/poddisruptionbudget/1.0.4/template.yaml
@@ -32,11 +32,13 @@ spec:
rego: |
package k8spoddisruptionbudget

import future.keywords

violation[{"msg": msg}] {
input.review.kind.kind == "PodDisruptionBudget"
pdb := input.review.object

not valid_pdb_max_unavailable(pdb)
not valid_pdb_max_unavailable(input.review, pdb)
msg := sprintf(
"PodDisruptionBudget <%v> has maxUnavailable of 0, only positive integers are allowed for maxUnavailable",
[pdb.metadata.name],
@@ -51,7 +53,7 @@
labels := { [label, value] | some label; value := obj.spec.selector.matchLabels[label] }
count(matchLabels - labels) == 0

not valid_pdb_max_unavailable(pdb)
not valid_pdb_max_unavailable(obj, pdb)
msg := sprintf(
"%v <%v> has been selected by PodDisruptionBudget <%v> but has maxUnavailable of 0, only positive integers are allowed for maxUnavailable",
[obj.kind, obj.metadata.name, pdb.metadata.name],
@@ -61,7 +63,7 @@
violation[{"msg": msg}] {
obj := input.review.object
pdb := data.inventory.namespace[obj.metadata.namespace]["policy/v1"].PodDisruptionBudget[_]

matchLabels := { [label, value] | some label; value := pdb.spec.selector.matchLabels[label] }
labels := { [label, value] | some label; value := obj.spec.selector.matchLabels[label] }
count(matchLabels - labels) == 0
@@ -73,18 +75,148 @@
)
}

valid_pdb_min_available(obj, pdb) {
violation[{"msg": msg}] {
input.review.kind.kind == "HorizontalPodAutoscaler"
obj := input.review.object
pdb := data.inventory.namespace[obj.metadata.namespace]["policy/v1"].PodDisruptionBudget[_]
workload := data.inventory.namespace[obj.metadata.namespace][obj.spec.scaleTargetRef.apiVersion][obj.spec.scaleTargetRef.kind][_]
workload.metadata.name == obj.spec.scaleTargetRef.name

matchLabels := { [label, value] | some label; value := pdb.spec.selector.matchLabels[label] }
labels := { [label, value] | some label; value := workload.spec.selector.matchLabels[label] }
count(matchLabels - labels) == 0

not valid_pdb_min_available(obj, pdb)
msg := sprintf(
"%v <%v> is managed by HPA <%v> and selected by PDB <%v>, which would prevent any pods from being drained. HPA minReplicas is %v and PDB minAvailable is %v",
[workload.kind, workload.metadata.name, obj.metadata.name, pdb.metadata.name, obj.spec.minReplicas, pdb.spec.minAvailable],
)
}

violation[{"msg": msg}] {
input.review.kind.kind == "HorizontalPodAutoscaler"
obj := input.review.object
pdb := data.inventory.namespace[obj.metadata.namespace]["policy/v1"].PodDisruptionBudget[_]
workload := data.inventory.namespace[obj.metadata.namespace][obj.spec.scaleTargetRef.apiVersion][obj.spec.scaleTargetRef.kind][_]
workload.metadata.name == obj.spec.scaleTargetRef.name

matchLabels := { [label, value] | some label; value := pdb.spec.selector.matchLabels[label] }
labels := { [label, value] | some label; value := workload.spec.selector.matchLabels[label] }
count(matchLabels - labels) == 0

not valid_pdb_max_unavailable(obj, pdb)
msg := sprintf(
"%v <%v> is managed by HPA <%v> and selected by PDB <%v>, which would prevent any pods from being drained. HPA minReplicas is %v and PDB maxUnavailable is %v",
[workload.kind, workload.metadata.name, obj.metadata.name, pdb.metadata.name, obj.spec.minReplicas, pdb.spec.maxUnavailable],
)
}

# This rule triggers when a Deployment is created/updated and checks against existing HPAs and PDBs.
violation[{"msg": msg}] {
obj := input.review.object
pdb := data.inventory.namespace[obj.metadata.namespace]["policy/v1"].PodDisruptionBudget[_]
hpa := data.inventory.namespace[obj.metadata.namespace]["autoscaling/v2"].HorizontalPodAutoscaler[_]

# Match PDB to Deployment
matchLabelsPdb := { [label, value] | some label; value := pdb.spec.selector.matchLabels[label] }
labelsDep := { [label, value] | some label; value := obj.spec.selector.matchLabels[label] }
count(matchLabelsPdb - labelsDep) == 0

# Match HPA to Deployment
hpa.spec.scaleTargetRef.kind == obj.kind
hpa.spec.scaleTargetRef.name == obj.metadata.name

not valid_pdb_min_available(hpa, pdb)
msg := sprintf(
"%v <%v> is managed by HPA <%v> and selected by PDB <%v>, which would prevent any pods from being drained. HPA minReplicas is %v and PDB minAvailable is %v",
[obj.kind, obj.metadata.name, hpa.metadata.name, pdb.metadata.name, hpa.spec.minReplicas, pdb.spec.minAvailable],
)
}

# This rule triggers when a Deployment is created/updated and checks against existing HPAs and PDBs.
violation[{"msg": msg}] {
obj := input.review.object
pdb := data.inventory.namespace[obj.metadata.namespace]["policy/v1"].PodDisruptionBudget[_]
hpa := data.inventory.namespace[obj.metadata.namespace]["autoscaling/v2"].HorizontalPodAutoscaler[_]

# Match PDB to Deployment
matchLabelsPdb := { [label, value] | some label; value := pdb.spec.selector.matchLabels[label] }
labelsDep := { [label, value] | some label; value := obj.spec.selector.matchLabels[label] }
count(matchLabelsPdb - labelsDep) == 0

# Match HPA to Deployment
hpa.spec.scaleTargetRef.kind == obj.kind
hpa.spec.scaleTargetRef.name == obj.metadata.name

not valid_pdb_max_unavailable(hpa, pdb)
msg := sprintf(
"%v <%v> is managed by HPA <%v> and selected by PDB <%v>, which would prevent any pods from being drained. HPA minReplicas is %v and PDB maxUnavailable is %v",
[obj.kind, obj.metadata.name, hpa.metadata.name, pdb.metadata.name, hpa.spec.minReplicas, pdb.spec.maxUnavailable],
)
}

get_replicas(obj) = obj.spec.minReplicas if {
obj.kind == "HorizontalPodAutoscaler"
} else = obj.spec.replicas

min_available(obj, pdb) = new if {
# if it's a percentage, return the number of pods that must remain
# available, rounded up (that's how Kubernetes calculates it).
# if it's a plain number, return that number
endswith(pdb.spec.minAvailable, "%")

# convert % to a number, if this is 50%, then 50/100 = 0.5
per := to_number(replace(pdb.spec.minAvailable, "%", "")) / 100

# round up to the nearest integer based on replicas
# if replicas is 3, then 3 * 0.5 = 1.5, ceil(1.5) = 2
new := ceil(get_replicas(obj) * per)
}

min_available(_, pdb) = new if {
is_number(pdb.spec.minAvailable)
new := pdb.spec.minAvailable
}

min_available(_, pdb) = new if {
# default to -1 when minAvailable is not set so valid_pdb_min_available is
# always true for objects with >= 0 replicas. If the default were >= 0, an
# object whose replica count equaled it would wrongly violate this constraint
not pdb.spec.minAvailable
new := -1
}

valid_pdb_min_available(obj, pdb) if {
get_replicas(obj) > min_available(obj, pdb)
}

valid_pdb_max_unavailable(pdb) {
max_unavailable(obj, pdb) = new if {
# if it's a percentage, return the number of pods that may be
# unavailable, rounded down (that's how Kubernetes calculates it).
# if it's a plain number, return that number; if unset, default to 1
endswith(pdb.spec.maxUnavailable, "%")

# convert % to a number, if this is 50%, then 50/100 = 0.5
per := to_number(replace(pdb.spec.maxUnavailable, "%", "")) / 100

# round down to the nearest integer based on replicas
# if replicas is 3, then 3 * 0.5 = 1.5, floor(1.5) = 1
new := floor(get_replicas(obj) * per)
}

max_unavailable(_, pdb) = new if {
is_number(pdb.spec.maxUnavailable)
new := pdb.spec.maxUnavailable
}

max_unavailable(_, pdb) = new if {
# default to 1 when maxUnavailable is not set so valid_pdb_max_unavailable
# returns true. If the default were 0, a PDB with neither field set would
# block voluntary eviction, since all pods would need to stay available
not pdb.spec.maxUnavailable
new := 1
}

valid_pdb_max_unavailable(obj, pdb) if {
max_unavailable(obj, pdb) > 0
}
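The helpers above encode the Kubernetes rounding rules for percentage budgets: a percentage minAvailable rounds up, a percentage maxUnavailable rounds down, and the PDB blocks all voluntary disruption when minAvailable reaches the replica count or maxUnavailable is 0. A hedged Python sketch of that logic (illustration only, not part of the template; function names mirror the Rego rules):

```python
import math

def min_available(replicas: int, spec_min_available) -> int:
    """Pods that must stay available; percentages round UP like Kubernetes."""
    if isinstance(spec_min_available, str) and spec_min_available.endswith("%"):
        pct = float(spec_min_available.rstrip("%")) / 100
        return math.ceil(replicas * pct)
    if isinstance(spec_min_available, int):
        return spec_min_available
    return -1  # unset: any replica count is valid

def max_unavailable(replicas: int, spec_max_unavailable) -> int:
    """Pods that may be unavailable; percentages round DOWN like Kubernetes."""
    if isinstance(spec_max_unavailable, str) and spec_max_unavailable.endswith("%"):
        pct = float(spec_max_unavailable.rstrip("%")) / 100
        return math.floor(replicas * pct)
    if isinstance(spec_max_unavailable, int):
        return spec_max_unavailable
    return 1  # unset: default that still permits eviction

def valid_pdb_min_available(replicas, spec_min_available) -> bool:
    return replicas > min_available(replicas, spec_min_available)

def valid_pdb_max_unavailable(replicas, spec_max_unavailable) -> bool:
    return max_unavailable(replicas, spec_max_unavailable) > 0

print(min_available(3, "50%"))          # ceil(1.5) = 2
print(max_unavailable(3, "50%"))        # floor(1.5) = 1
print(valid_pdb_min_available(3, 3))    # False: PDB blocks all eviction
print(valid_pdb_max_unavailable(3, 0))  # False: maxUnavailable 0 blocks draining
```

For an HPA-managed workload, `replicas` here corresponds to `spec.minReplicas` (as `get_replicas` selects in the Rego), since that is the floor the autoscaler may shrink to while the PDB is enforced.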
@@ -0,0 +1,25 @@
version: 1.1.0
name: k8spoddisruptionbudget
displayName: Pod Disruption Budget
createdAt: "2025-08-05T15:07:39Z"
description: |-
Disallow the following scenarios when deploying PodDisruptionBudgets or resources that implement the replica subresource (e.g. Deployment, ReplicationController, ReplicaSet, StatefulSet): 1. Deployment of PodDisruptionBudgets with .spec.maxUnavailable == 0 2. Deployment of PodDisruptionBudgets with .spec.minAvailable == .spec.replicas of the resource with replica subresource This will prevent PodDisruptionBudgets from blocking voluntary disruptions such as node draining.
https://kubernetes.io/docs/concepts/workloads/pods/disruptions/
digest: f46013e1ef184c26627e520fdc10c3b0ca837d00689f5f8afe9f85ede519cb8e
license: Apache-2.0
homeURL: https://open-policy-agent.github.io/gatekeeper-library/website/poddisruptionbudget
keywords:
- gatekeeper
- open-policy-agent
- policies
readme: |-
# Pod Disruption Budget
Disallow the following scenarios when deploying PodDisruptionBudgets or resources that implement the replica subresource (e.g. Deployment, ReplicationController, ReplicaSet, StatefulSet): 1. Deployment of PodDisruptionBudgets with .spec.maxUnavailable == 0 2. Deployment of PodDisruptionBudgets with .spec.minAvailable == .spec.replicas of the resource with replica subresource This will prevent PodDisruptionBudgets from blocking voluntary disruptions such as node draining.
https://kubernetes.io/docs/concepts/workloads/pods/disruptions/
install: |-
### Usage
```shell
kubectl apply -f https://raw.githubusercontent.com/open-policy-agent/gatekeeper-library/master/artifacthub/library/general/poddisruptionbudget/1.1.0/template.yaml
```
provider:
name: Gatekeeper Library
@@ -0,0 +1,2 @@
resources:
- template.yaml
@@ -0,0 +1,15 @@
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sPodDisruptionBudget
metadata:
name: pod-disruption-budget
spec:
match:
kinds:
- apiGroups: ["apps"]
kinds: ["Deployment", "StatefulSet"]
- apiGroups: ["policy"]
kinds: ["PodDisruptionBudget"]
- apiGroups: [""]
kinds: ["ReplicationController"]
- apiGroups: ["autoscaling"]
kinds: ["HorizontalPodAutoscaler"]