
Commit d2f681f

Author: abregman

Add a couple of Kubernetes questions

Also updated CKA page.

1 parent 422a48a commit d2f681f

File tree

3 files changed: +185 −6 lines changed

README.md

+1-1
@@ -2,7 +2,7 @@
 :information_source: This repo contains questions and exercises on various technical topics, sometimes related to DevOps and SRE

-:bar_chart: There are currently **2406** exercises and questions
+:bar_chart: There are currently **2415** exercises and questions

 :books: To learn more about DevOps and SRE, check the resources in the [devops-resources](https://github.com/bregman-arie/devops-resources) repository

topics/kubernetes/CKA.md

+89-1
@@ -17,6 +17,8 @@
 - [Node Selector](#node-selector)
 - [Taints](#taints)
 - [Resources Limits](#resources-limits)
+- [Monitoring](#monitoring)
+- [Scheduler](#scheduler-1)

 ## Setup

@@ -150,6 +152,24 @@ You can also run `k describe po POD_NAME`
 To count them: `k get po -l env=prod --no-headers | wc -l`
 </b></details>

+<details>
+<summary>Create a static Pod with the image <code>python</code> that runs the command <code>sleep 2017</code></summary><br><b>
+
+First, change to the directory kubelet tracks for static Pods: `cd /etc/kubernetes/manifests` (you can verify the path by reading the kubelet config file).
+
+Now create the definition/manifest in that directory:
+`k run some-pod --image=python --command sleep 2017 --restart=Never --dry-run=client -o yaml > static-pod.yaml`
+</b></details>
+
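As a sketch, the dry-run command above generates roughly the following manifest (field order and auto-added labels may differ slightly):

```yaml
# Rough sketch of the manifest produced by the kubectl dry-run above;
# once saved under kubelet's staticPodPath, kubelet creates the Pod itself.
apiVersion: v1
kind: Pod
metadata:
  name: some-pod
spec:
  containers:
  - name: some-pod
    image: python
    command:
    - sleep
    - "2017"
  restartPolicy: Never
```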
+<details>
+<summary>Describe how you would delete a static Pod</summary><br><b>
+
+Locate the static Pods directory (look at `staticPodPath` in the kubelet configuration file).
+
+Go to that directory and remove the manifest/definition of the static Pod (`rm <STATIC_POD_PATH>/<POD_DEFINITION_FILE>`).
+</b></details>
+
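The two steps above can be sketched as shell commands. The config file, directory, and manifest name below are mock stand-ins created in a temp directory (not a live node), so the sketch is runnable anywhere; on a real node the config is typically `/var/lib/kubelet/config.yaml`:

```shell
# Hypothetical kubelet config and manifest dir, standing in for a real node
workdir=$(mktemp -d)
mkdir -p "$workdir/manifests"
touch "$workdir/manifests/static-pod.yaml"
cat > "$workdir/config.yaml" <<EOF
kind: KubeletConfiguration
staticPodPath: $workdir/manifests
EOF

# Step 1: locate the static Pods directory from the kubelet config
dir=$(grep staticPodPath "$workdir/config.yaml" | awk '{print $2}')
echo "static Pod dir: $dir"

# Step 2: remove the Pod's manifest from that directory;
# on a real node, kubelet notices the removal and deletes the static Pod
rm "$dir/static-pod.yaml"

rm -rf "$workdir"
```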
 ### Troubleshooting Pods

 <details>
@@ -187,7 +207,7 @@ You can confirm with `kubectl describe po POD_NAME`
 </b></details>

 <details>
-<summary>Run the following command: <code>kubectl run ohno --image=sheris</code>. Did it work? why not? fix it without removing the Pod and using any image you want</summary><br><b>
+<summary>Run the following command: <code>kubectl run ohno --image=sheris</code>. Did it work? Why not? Fix it without removing the Pod, using any image you would like</summary><br><b>

 Because there is no such image `sheris`. At least for now :)

@@ -200,6 +220,18 @@ To fix it, run `kubectl edit ohno` and modify the following line `- image: sheri
 One possible reason is that the scheduler, which is supposed to schedule Pods on nodes, is not running. To verify it, you can run `kubectl get po -A | grep scheduler` or check directly in the `kube-system` namespace.
 </b></details>

+<details>
+<summary>How to view the logs of a container running in a Pod?</summary><br><b>
+
+`k logs POD_NAME`
+</b></details>
+
+<details>
+<summary>There are two containers inside a Pod called "some-pod". What will happen if you run <code>kubectl logs some-pod</code>?</summary><br><b>
+
+It won't work, because there are two containers inside the Pod; you need to specify one of them with `kubectl logs POD_NAME -c CONTAINER_NAME`
+</b></details>
+
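A minimal two-container Pod like the hypothetical one below illustrates the ambiguity: with two containers, `kubectl logs some-pod` has no single container to read from, while `kubectl logs some-pod -c app` works (container names here are illustrative):

```yaml
# Hypothetical two-container Pod; "app" and "sidecar" are made-up names
apiVersion: v1
kind: Pod
metadata:
  name: some-pod
spec:
  containers:
  - name: app
    image: python
    command: ["sleep", "2017"]
  - name: sidecar
    image: busybox
    command: ["sleep", "2017"]
```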
 ## Namespaces

 <details>
@@ -769,4 +801,60 @@ spec:
 ```

 `kubectl apply -f pod.yaml`
+</b></details>
+
+## Monitoring
+
+<details>
+<summary>Deploy metrics-server</summary><br><b>
+
+`kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml`
+</b></details>
+
+<details>
+<summary>Using metrics-server, view the following:
+
+* top performing nodes in the cluster
+* top performing Pods
+</summary><br><b>
+
+* top nodes: `kubectl top nodes`
+* top pods: `kubectl top pods`
+</b></details>
+
+## Scheduler
+
+<details>
+<summary>Can you deploy multiple schedulers?</summary><br><b>
+
+Yes, it is possible. You can run another Pod with a command similar to:
+
+```
+spec:
+  containers:
+  - command:
+    - kube-scheduler
+    - --address=127.0.0.1
+    - --leader-elect=true
+    - --scheduler-name=some-custom-scheduler
+...
+```
+</b></details>
+
+<details>
+<summary>Assuming you have multiple schedulers, how do you know which scheduler was used for a given Pod?</summary><br><b>
+
+Running `kubectl get events`, you can see which scheduler was used.
+</b></details>
+
+<details>
+<summary>You want to run a new Pod and you would like it to be scheduled by a custom scheduler. How do you achieve it?</summary><br><b>
+
+Add the following to the spec of the Pod:
+
+```
+spec:
+  schedulerName: some-custom-scheduler
+```
 </b></details>
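Putting the `schedulerName` fragment above into a complete manifest, a Pod pinned to a custom scheduler could look like this sketch (the Pod name, image, and command are arbitrary; the scheduler name matches the example above):

```yaml
# Sketch: full Pod manifest using a custom scheduler.
# "some-custom-scheduler" must match the --scheduler-name the custom
# scheduler was started with, otherwise the Pod stays Pending.
apiVersion: v1
kind: Pod
metadata:
  name: custom-scheduled-pod
spec:
  schedulerName: some-custom-scheduler
  containers:
  - name: app
    image: python
    command: ["sleep", "2017"]
```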

topics/kubernetes/README.md

+95-4
@@ -24,8 +24,8 @@ What's your goal?
 - [Nodes Commands](#nodes-commands)
 - [Pods](#pods-1)
 - [Static Pods](#static-pods)
-- [Pods - Commands](#pods---commands)
-- [Pods - Troubleshooting and Debugging](#pods---troubleshooting-and-debugging)
+- [Pods Commands](#pods-commands)
+- [Pods Troubleshooting and Debugging](#pods-troubleshooting-and-debugging)
 - [Labels and Selectors](#labels-and-selectors-1)
 - [Deployments](#deployments)
 - [Deployments Commands](#deployments-commands)
@@ -62,6 +62,7 @@ What's your goal?
 - [Taints](#taints)
 - [Resource Limits](#resource-limits)
 - [Resources Limits - Commands](#resources-limits---commands)
+- [Monitoring](#monitoring)
 - [Scenarios](#scenarios)

 ## Kubernetes Exercises
## Kubernetes Exercises
@@ -583,7 +584,16 @@ It might be that your config is in different path. To verify run `ps -ef | grep
 The key itself for defining the path of static Pods is `staticPodPath`. So if your config is in `/var/lib/kubelet/config.yaml` you can run `grep staticPodPath /var/lib/kubelet/config.yaml`.
 </b></details>

-#### Pods - Commands
+<details>
+<summary>Describe how you would delete a static Pod</summary><br><b>
+
+Locate the static Pods directory (look at `staticPodPath` in the kubelet configuration file).
+
+Go to that directory and remove the manifest/definition of the static Pod (`rm <STATIC_POD_PATH>/<POD_DEFINITION_FILE>`).
+</b></details>
+
+#### Pods Commands

 <details>
 <summary>How to check to which worker node the pods were scheduled? In other words, how to check on which node a certain Pod is running?</summary><br><b>
@@ -617,7 +627,7 @@ To count them: `k get po -l env=prod --no-headers | wc -l`
 `kubectl get pods --all-namespaces`
 </b></details>

-#### Pods - Troubleshooting and Debugging
+#### Pods Troubleshooting and Debugging

 <details>
 <summary>You try to run a Pod but it's in "Pending" state. What might be the reason?</summary><br><b>
@@ -637,6 +647,15 @@ Prints the logs for a container in a pod.
 Show details of a specific resource or group of resources.
 </b></details>

+<details>
+<summary>Create a static Pod with the image <code>python</code> that runs the command <code>sleep 2017</code></summary><br><b>
+
+First, change to the directory kubelet tracks for static Pods: `cd /etc/kubernetes/manifests` (you can verify the path by reading the kubelet config file).
+
+Now create the definition/manifest in that directory:
+`k run some-pod --image=python --command sleep 2017 --restart=Never --dry-run=client -o yaml > static-pod.yaml`
+</b></details>
+
 ### Labels and Selectors

 <details>
@@ -674,6 +693,18 @@ The API currently supports two types of selectors: equality-based and set-based.
 [Kubernetes.io](https://kubernetes.io): "Labels can be used to select objects and to find collections of objects that satisfy certain conditions. In contrast, annotations are not used to identify and select objects. The metadata in an annotation can be small or large, structured or unstructured, and can include characters not permitted by labels."
 </b></details>

+<details>
+<summary>How to view the logs of a container running in a Pod?</summary><br><b>
+
+`k logs POD_NAME`
+</b></details>
+
+<details>
+<summary>There are two containers inside a Pod called "some-pod". What will happen if you run <code>kubectl logs some-pod</code>?</summary><br><b>
+
+It won't work, because there are two containers inside the Pod; you need to specify one of them with `kubectl logs POD_NAME -c CONTAINER_NAME`
+</b></details>
+
 ### Deployments

 <details>
@@ -2749,6 +2780,40 @@ True
 False. The scheduler tries to find a node that meets the requirements/rules; if it doesn't find one, it will schedule the Pod anyway.
 </b></details>

+<details>
+<summary>Can you deploy multiple schedulers?</summary><br><b>
+
+Yes, it is possible. You can run another Pod with a command similar to:
+
+```
+spec:
+  containers:
+  - command:
+    - kube-scheduler
+    - --address=127.0.0.1
+    - --leader-elect=true
+    - --scheduler-name=some-custom-scheduler
+...
+```
+</b></details>
+
+<details>
+<summary>Assuming you have multiple schedulers, how do you know which scheduler was used for a given Pod?</summary><br><b>
+
+Running `kubectl get events`, you can see which scheduler was used.
+</b></details>
+
+<details>
+<summary>You want to run a new Pod and you would like it to be scheduled by a custom scheduler. How do you achieve it?</summary><br><b>
+
+Add the following to the spec of the Pod:
+
+```
+spec:
+  schedulerName: some-custom-scheduler
+```
+</b></details>
+
 ### Taints

 <details>
@@ -2870,6 +2935,32 @@ spec:
 `kubectl apply -f pod.yaml`
 </b></details>

+### Monitoring
+
+<details>
+<summary>What monitoring solutions are you familiar with in regards to Kubernetes?</summary><br><b>
+
+There are many types of monitoring solutions for Kubernetes: some are open source, some are in-memory, some cost money. Here is a short list:
+
+* metrics-server: in-memory, open source monitoring
+* Datadog: $$$
+* Prometheus: open source monitoring solution
+</b></details>
+
+<details>
+<summary>Describe how the monitoring solution you are working with monitors Kubernetes</summary><br><b>
+
+This very much depends on what you chose to use. Let's address some of the solutions:
+
+* metrics-server: an open source and free monitoring solution that uses the cAdvisor component of kubelet to retrieve information on the cluster and its resources, and stores it in memory.
+  Once installed, after some time you can run commands like `kubectl top node` and `kubectl top pod` to view performance metrics on nodes, Pods and other resources.
+
+TODO: add more monitoring solutions
+</b></details>
+
 ### Scenarios

 <details>
