README.md (+1 −1)

@@ -2,7 +2,7 @@
:information_source: This repo contains questions and exercises on various technical topics, sometimes related to DevOps and SRE

- :bar_chart: There are currently **2406** exercises and questions
+ :bar_chart: There are currently **2415** exercises and questions

:books: To learn more about DevOps and SRE, check the resources in [devops-resources](https://github.com/bregman-arie/devops-resources) repository
topics/kubernetes/CKA.md (+89 −1)

@@ -17,6 +17,8 @@
- [Node Selector](#node-selector)
- [Taints](#taints)
- [Resources Limits](#resources-limits)
+ - [Monitoring](#monitoring)
+ - [Scheduler](#scheduler-1)

## Setup
@@ -150,6 +152,24 @@ You can also run `k describe po POD_NAME`
To count them: `k get po -l env=prod --no-headers | wc -l`
</b></details>

+ <details>
+ <summary>Create a static pod with the image <code>python</code> that runs the command <code>sleep 2017</code></summary><br><b>
+
+ First, change to the directory tracked by the kubelet for static Pods: `cd /etc/kubernetes/manifests` (you can verify the path by reading the kubelet config file)
+
+ Now create the definition/manifest in that directory
+ </b></details>
+
+ <details>
+ <summary>Describe how you would delete a static Pod</summary><br><b>
+
+ Locate the static Pods directory (look at `staticPodPath` in the kubelet configuration file).
+
+ Go to that directory and remove the manifest/definition of the static Pod (`rm <STATIC_POD_PATH>/<POD_DEFINITION_FILE>`)
+ </b></details>
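A minimal sketch of such a static Pod manifest (the file and Pod name `sleeper` are illustrative placeholders, not from the repo):

```
# /etc/kubernetes/manifests/sleeper.yaml -- the kubelet picks this file up automatically
apiVersion: v1
kind: Pod
metadata:
  name: sleeper
spec:
  containers:
  - name: sleeper
    image: python
    command: ["sleep", "2017"]
```

Because it is a static Pod, the kubelet starts it without going through the API server; removing the file removes the Pod.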
### Troubleshooting Pods

<details>

@@ -187,7 +207,7 @@ You can confirm with `kubectl describe po POD_NAME`
</b></details>

<details>
- <summary>Run the following command: <code>kubectl run ohno --image=sheris</code>. Did it work? why not? fix it without removing the Pod and using any image you want</summary><br><b>
+ <summary>Run the following command: <code>kubectl run ohno --image=sheris</code>. Did it work? why not? fix it without removing the Pod and using any image you would like</summary><br><b>

Because there is no such image `sheris`. At least for now :)
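The counting idiom shown in this diff's context lines (`k get po -l env=prod --no-headers | wc -l`) can be exercised without a cluster by faking the pod listing, which is all the sketch below does (the pod names are made up):

```shell
# Stand-in for the output of `k get po -l env=prod --no-headers` (three fake pods)
list_pods() {
  printf 'prod-api-1     1/1  Running  0  5m\n'
  printf 'prod-api-2     1/1  Running  0  5m\n'
  printf 'prod-worker-1  1/1  Running  0  5m\n'
}

# Same counting pipeline as in the exercise
list_pods | wc -l
```

The `--no-headers` flag matters in the real command: without it, the header row would inflate the count by one.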
@@ -200,6 +220,18 @@ To fix it, run `kubectl edit ohno` and modify the following line `- image: sheri
One possible reason is that the scheduler, which is supposed to schedule Pods on nodes, is not running. To verify, you can run `kubectl get po -A | grep scheduler` or check directly in the `kube-system` namespace.
</b></details>

+ <details>
+ <summary>How to view the logs of a container running in a Pod?</summary><br><b>
+
+ `k logs POD_NAME`
+ </b></details>
+
+ <details>
+ <summary>There are two containers inside a Pod called "some-pod". What will happen if you run <code>kubectl logs some-pod</code></summary><br><b>
+
+ It won't work because there are two containers inside the Pod and you need to specify one of them with `kubectl logs POD_NAME -c CONTAINER_NAME`
+ </b></details>
@@ -583,7 +584,16 @@ It might be that your config is in a different path. To verify run `ps -ef | grep
The key itself for defining the path of static Pods is `staticPodPath`. So if your config is in `/var/lib/kubelet/config.yaml` you can run `grep staticPodPath /var/lib/kubelet/config.yaml`.
</b></details>

+ <details>
+ <summary>Describe how you would delete a static Pod</summary><br><b>
+
+ Locate the static Pods directory (look at `staticPodPath` in the kubelet configuration file).
+
+ Go to that directory and remove the manifest/definition of the static Pod (`rm <STATIC_POD_PATH>/<POD_DEFINITION_FILE>`)
+ </b></details>
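The `staticPodPath` lookup described above can be tried end-to-end without a cluster by grepping a stand-in kubelet config (on a real node the path varies, which is why the answer suggests checking the kubelet process first):

```shell
# Stand-in kubelet config; on a real node this would be e.g. /var/lib/kubelet/config.yaml
cat > /tmp/kubelet-config.yaml <<'EOF'
kind: KubeletConfiguration
staticPodPath: /etc/kubernetes/manifests
EOF

# The same lookup you would run against the real config
grep staticPodPath /tmp/kubelet-config.yaml
```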
+ #### Pods Commands

<details>
<summary>How to check to which worker node the pods were scheduled to? In other words, how to check on which node a certain Pod is running?</summary><br><b>
@@ -617,7 +627,7 @@ To count them: `k get po -l env=prod --no-headers | wc -l`
`kubectl get pods --all-namespaces`
</b></details>

- #### Pods - Troubleshooting and Debugging
+ #### Pods Troubleshooting and Debugging

<details>
<summary>You try to run a Pod but it's in "Pending" state. What might be the reason?</summary><br><b>
@@ -637,6 +647,15 @@ Prints the logs for a container in a pod.
Show details of a specific resource or group of resources.
</b></details>

+ <details>
+ <summary>Create a static pod with the image <code>python</code> that runs the command <code>sleep 2017</code></summary><br><b>
+
+ First, change to the directory tracked by the kubelet for static Pods: `cd /etc/kubernetes/manifests` (you can verify the path by reading the kubelet config file)
+
+ Now create the definition/manifest in that directory
+ </b></details>
@@ -674,6 +693,18 @@ The API currently supports two types of selectors: equality-based and set-based.
[Kubernetes.io](https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/): "Labels can be used to select objects and to find collections of objects that satisfy certain conditions. In contrast, annotations are not used to identify and select objects. The metadata in an annotation can be small or large, structured or unstructured, and can include characters not permitted by labels."
</b></details>

+ <details>
+ <summary>How to view the logs of a container running in a Pod?</summary><br><b>
+
+ `k logs POD_NAME`
+ </b></details>
+
+ <details>
+ <summary>There are two containers inside a Pod called "some-pod". What will happen if you run <code>kubectl logs some-pod</code></summary><br><b>
+
+ It won't work because there are two containers inside the Pod and you need to specify one of them with `kubectl logs POD_NAME -c CONTAINER_NAME`
+ </b></details>
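The labels-vs-annotations distinction quoted above can be seen in a small metadata sketch (all names and values here are illustrative):

```
metadata:
  name: some-pod
  labels:             # selectable, e.g. `k get po -l env=prod`; restricted character set
    env: prod
  annotations:        # free-form metadata, not used for selection
    contact: "alice@example.com (on-call)"
```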
### Deployments

<details>
@@ -2749,6 +2780,40 @@ True
False. The scheduler tries to find a node that meets the requirements/rules and if it doesn't, it will schedule the Pod anyway.
</b></details>

+ <details>
+ <summary>Can you deploy multiple schedulers?</summary><br><b>
+
+ Yes, it is possible. You can run another pod with a command similar to:
+
+ ```
+ spec:
+   containers:
+   - command:
+     - kube-scheduler
+     - --address=127.0.0.1
+     - --leader-elect=true
+     - --scheduler-name=some-custom-scheduler
+ ...
+ ```
+ </b></details>
+
+ <details>
+ <summary>Assuming you have multiple schedulers, how to know which scheduler was used for a given Pod?</summary><br><b>
+
+ Running `kubectl get events`, you can see which scheduler was used.
+ </b></details>
+
+ <details>
+ <summary>You want to run a new Pod and you would like it to be scheduled by a custom scheduler. How to achieve it?</summary><br><b>
+
+ Add the following to the spec of the Pod:
+
+ ```
+ spec:
+   schedulerName: some-custom-scheduler
+ ```
+ </b></details>
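Putting the fragments above together, a complete Pod that requests the custom scheduler might look like this (the Pod name and image are illustrative; `some-custom-scheduler` comes from the snippets above):

```
apiVersion: v1
kind: Pod
metadata:
  name: custom-scheduled
spec:
  schedulerName: some-custom-scheduler   # must match the --scheduler-name of the running scheduler
  containers:
  - name: app
    image: python
    command: ["sleep", "2017"]
```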
### Taints

<details>
@@ -2870,6 +2935,32 @@ spec:
`kubectl apply -f pod.yaml`
</b></details>

+ ### Monitoring
+
+ <details>
+ <summary>What monitoring solutions are you familiar with in regards to Kubernetes?</summary><br><b>
+
+ There are many monitoring solutions for Kubernetes. Some are open source, some are in-memory, some cost money, ... here is a short list:
+
+ * metrics-server: in-memory, open source monitoring
+ * datadog: $$$
+ * prometheus: open source monitoring solution
+
+ </b></details>
+
+ <details>
+ <summary>Describe how the monitoring solution you are working with monitors Kubernetes</summary><br><b>
+
+ This very much depends on what you chose to use. Let's address some of the solutions:
+
+ * metrics-server: an open source, free monitoring solution that uses the cAdvisor component of the kubelet to retrieve information on the cluster and its resources, and stores it in memory.
+ Once installed, after some time you can run commands like `kubectl top node` and `kubectl top pod` to view performance metrics on nodes, Pods and other resources.