- [Labels and Selectors](#labels-and-selectors)
- [Node Selector](#node-selector)
- [Taints](#taints)
+- [Resources Limits](#resources-limits)

## Setup

@@ -255,6 +256,12 @@ Note: create an alias (`alias k=kubectl`) and get used to `k get no`
`k get nodes -o json > some_nodes.json`
</b></details>

+<details>
+<summary>Check what labels one of your nodes in the cluster has</summary><br><b>
+
+`k get no minikube --show-labels`
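+
+If structured output is easier to scan than the flattened label string, jsonpath works as well (same `minikube` node as above):
+
+`k get no minikube -o jsonpath='{.metadata.labels}'`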
+</b></details>
+
## Services

<details>
@@ -450,6 +457,42 @@ The selector doesn't match the label (cache vs cachy). To solve it, fix cachy so

</b></details>

+<details>
+<summary>Create a deployment called "pluck" using the image "redis" and make sure it runs 5 replicas</summary><br><b>
+
+`kubectl create deployment pluck --image=redis`
+
+`kubectl scale deployment pluck --replicas=5`
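+
+On recent kubectl versions (the `--replicas` flag was added to `kubectl create deployment` around v1.19), this can also be done in one step:
+
+`kubectl create deployment pluck --image=redis --replicas=5`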
+
+</b></details>
+
+<details>
+<summary>Create a deployment with the following properties:
+
+* called "blufer"
+* using the image "python"
+* runs 3 replicas
+* all pods will be placed on a node that has the label "blufer"
+</summary><br><b>
+
+`kubectl create deployment blufer --image=python --replicas=3 -o yaml --dry-run=client > deployment.yaml`
+
+Edit the file (`vi deployment.yaml`) and add the following under the pod template spec (`spec.template.spec`), since `affinity` is a pod-level field:
+
+```yaml
+spec:
+  affinity:
+    nodeAffinity:
+      requiredDuringSchedulingIgnoredDuringExecution:
+        nodeSelectorTerms:
+        - matchExpressions:
+          - key: blufer
+            operator: Exists
+```
+
+The `Exists` operator matches any node carrying the `blufer` label key, whatever its value.
+
+`kubectl apply -f deployment.yaml`
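+
+To sanity-check the placement, list the pods together with the nodes they landed on (`kubectl create deployment` labels the pods `app=blufer` by default):
+
+`kubectl get pods -l app=blufer -o wide`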
+</b></details>
+
### Troubleshooting Deployments

<details>
@@ -671,4 +714,59 @@ Exit and save. The pod should be in Running state now.
<summary>Remove an existing taint from one of the nodes in your cluster</summary><br><b>

`k taint node minikube app=web:NoSchedule-`
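+
+Note the trailing `-` on the taint specification: that is what removes it. One way to verify the taint is gone:
+
+`k describe node minikube | grep -i taint`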
+</b></details>
+
+## Resources Limits
+
+<details>
+<summary>Check if there are any limits on one of the pods in your cluster</summary><br><b>
+
+`kubectl describe po <POD_NAME> | grep -i limits`
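+
+Alternatively, jsonpath can print a container's limits directly (empty output means none are set):
+
+`kubectl get po <POD_NAME> -o jsonpath='{.spec.containers[0].resources.limits}'`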
+</b></details>
+
+<details>
+<summary>Run a pod called "yay" with the image "python" and resource requests of 64Mi memory and 250m CPU</summary><br><b>
+
+`kubectl run yay --image=python --dry-run=client -o yaml > pod.yaml`
+
+Edit the file (`vi pod.yaml`) and add a `resources` section under the container:
+
+```yaml
+spec:
+  containers:
+  - image: python
+    imagePullPolicy: Always
+    name: yay
+    resources:
+      requests:
+        cpu: 250m
+        memory: 64Mi
+```
+
+`kubectl apply -f pod.yaml`
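+
+A quick way to confirm the requests were applied:
+
+`kubectl get pod yay -o jsonpath='{.spec.containers[0].resources.requests}'`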
+</b></details>
+
+<details>
+<summary>Run a pod called "yay2" with the image "python". Make sure it has resource requests of 64Mi memory and 250m CPU, and limits of 128Mi memory and 500m CPU</summary><br><b>
+
+`kubectl run yay2 --image=python --dry-run=client -o yaml > pod.yaml`
+
+Edit the file (`vi pod.yaml`) and add a `resources` section with both `requests` and `limits`:
+
+```yaml
+spec:
+  containers:
+  - image: python
+    imagePullPolicy: Always
+    name: yay2
+    resources:
+      limits:
+        cpu: 500m
+        memory: 128Mi
+      requests:
+        cpu: 250m
+        memory: 64Mi
+```
+
+`kubectl apply -f pod.yaml`
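+
+Kubernetes validates that a container's request does not exceed its limit for each resource; here 250m ≤ 500m and 64Mi ≤ 128Mi, so the pod is accepted. To inspect both at once:
+
+`kubectl get pod yay2 -o jsonpath='{.spec.containers[0].resources}'`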
</b></details>