Commit 6f09730

fix: Use metrics server addon in HPA lab (#1258)
1 parent 58a9228 commit 6f09730

File tree

4 files changed (+2 -68 lines)


manifests/modules/autoscaling/workloads/hpa/.workshop/terraform/main.tf (-16)

This file was deleted.

manifests/modules/autoscaling/workloads/hpa/.workshop/terraform/vars.tf (-35)

This file was deleted.

website/docs/autoscaling/workloads/horizontal-pod-autoscaler/index.md (-15)

@@ -7,21 +7,6 @@ description: "Automatically scale workloads on Amazon Elastic Kubernetes Service
 
 ::required-time
 
-:::tip Before you start
-Prepare your environment for this section:
-
-```bash timeout=300 wait=30
-$ prepare-environment autoscaling/workloads/hpa
-```
-
-This will make the following changes to your lab environment:
-
-- Install the Kubernetes Metrics Server in the Amazon EKS cluster
-
-You can view the Terraform that applies these changes [here](https://github.com/VAR::MANIFESTS_OWNER/VAR::MANIFESTS_REPOSITORY/tree/VAR::MANIFESTS_REF/manifests/modules/autoscaling/workloads/hpa/.workshop/terraform).
-
-:::
-
 In this lab, we'll look at the Horizontal Pod Autoscaler (HPA) to scale pods in a deployment or replica set. It's implemented as a K8s API resource and a controller. The resource determines the behavior of the controller. The Controller Manager queries the resource utilization against the metrics specified in each HorizontalPodAutoscaler definition. The controller periodically adjusts the number of replicas in a replication controller or deployment to the target specified by the user by observing metrics such as average CPU utilization, average memory utilization or any other custom metric. It obtains the metrics from either the resource metrics API (for per-pod resource metrics), or the custom metrics API (for all other metrics).
 
 The Kubernetes Metrics Server is a scalable and efficient aggregator of resource usage data in your cluster. It provides container metrics that are required by the Horizontal Pod Autoscaler. The metrics server is not deployed by default in Amazon EKS clusters.
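The HPA behavior described in the retained paragraphs above is easier to picture with a concrete resource. A minimal sketch of a HorizontalPodAutoscaler manifest follows; it is illustrative only and not part of this commit or the lab module, and the deployment name, replica bounds, and CPU target are assumptions:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: example-hpa              # hypothetical name, not from this commit
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: example-deployment     # hypothetical workload to scale
  minReplicas: 1
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80 # scale out when average CPU across pods exceeds 80%
```

With a definition like this, the controller reads per-pod CPU usage from the resource metrics API (served by the Metrics Server) and adjusts the Deployment's replica count between 1 and 10 to hold average utilization near the 80% target.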

website/docs/autoscaling/workloads/horizontal-pod-autoscaler/metric-server.md (+2 -2)

@@ -3,9 +3,9 @@ title: "Metric server"
 sidebar_position: 5
 ---
 
-The Kubernetes Metrics Server is an aggregator of resource usage data in your cluster, and it is not deployed by default in Amazon EKS clusters. For more information, see [Kubernetes Metrics Server](https://github.com/kubernetes-sigs/metrics-server) on GitHub. The Metrics Server is commonly used by other Kubernetes add-ons, such as the Horizontal Pod Autoscaler or the Kubernetes Dashboard. For more information, see [Resource metrics pipeline](https://kubernetes.io/docs/tasks/debug/debug-cluster/resource-metrics-pipeline/) in the Kubernetes documentation. In this lab exercise, we'll deploy the Kubernetes Metrics Server on our Amazon EKS cluster.
+The Kubernetes Metrics Server is an aggregator of resource usage data in your cluster, and it is not deployed by default in Amazon EKS clusters. For more information, see [Kubernetes Metrics Server](https://github.com/kubernetes-sigs/metrics-server) on GitHub. The Metrics Server is commonly used by other Kubernetes add-ons, such as the Horizontal Pod Autoscaler or the Kubernetes Dashboard. For more information, see [Resource metrics pipeline](https://kubernetes.io/docs/tasks/debug/debug-cluster/resource-metrics-pipeline/) in the Kubernetes documentation.
 
-The Metric Server has been set up in advance in our cluster for this workshop:
+The Metrics Server was deployed to our cluster as an EKS community addon when the cluster was created:
 
 ```bash
 $ kubectl -n kube-system get pod -l app.kubernetes.io/name=metrics-server
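Since the new wording above describes the Metrics Server as an EKS community addon, a hedged way to confirm the addon and its metrics pipeline is sketched below; these commands are not part of this commit, and the cluster name is a placeholder:

```bash
# Check the addon status via the EKS API (addon name as described in the diff above)
$ aws eks describe-addon --cluster-name <cluster-name> --addon-name metrics-server --query 'addon.status'

# Once the metrics-server pod is Running, the resource metrics API should answer kubectl top
$ kubectl top nodes
$ kubectl top pods -n kube-system
```

If `kubectl top` returns usage figures rather than an error, the Horizontal Pod Autoscaler used later in this lab will be able to read CPU and memory metrics.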
