53 changes: 52 additions & 1 deletion Task1/solution/pvc.yml
@@ -8,4 +8,55 @@ spec:
  - ReadWriteMany
  resources:
    requests:
      storage: 10Gi

## Add a pod as below, mounting the PVC at /usr/share/html

apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
    volumeMounts:
    - mountPath: "/usr/share/html"
      name: my-pvc
  volumes:
  - name: my-pvc
    persistentVolumeClaim:
      claimName: my-pvc




apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
  namespace: web-tier
  labels:
    app: my-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-deployment
  strategy: {}
  template:
    metadata:
      labels:
        app: my-deployment
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 8080
        resources: {}



## What is wrong with the command below to create a service account?

kubectl create serviceaccount my-serviceaccount --namespace=my-namespace --dry-run=client -o yaml | kubectl apply -f -

## Nothing in the pipeline itself is broken; it only fails if the my-namespace namespace does not exist yet, because kubectl apply cannot create an object in a missing namespace. Create the namespace first, then re-run it.

32 changes: 32 additions & 0 deletions Task1/task.md
@@ -3,3 +3,35 @@
• Assign the PVC to the pod named nginx-pod with image nginx and mount to path /usr/share/html
• Ensure the pod claims the volume as ReadWriteMany access mode
• Use kubectl patch or kubectl edit to update the capacity of the PVC as 70Gi to record the change.


## Below is my answer

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  storageClassName: k8s-csi-plugin

---

apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
spec:
  containers:
  - name: nginx-pod
    image: nginx
    volumeMounts:
    - mountPath: "/usr/share/html"
      name: my-pvc
  volumes:
  - name: my-pvc
    persistentVolumeClaim:
      claimName: my-pvc
15 changes: 15 additions & 0 deletions Task10/solution/pv.yaml
@@ -3,6 +3,21 @@ kind: PersistentVolume
metadata:
  name: my-vol
spec:
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: "/path/to/file"


---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-vol
spec:
  storageClassName: manual
  capacity:
    storage: 2Gi
  accessModes:
38 changes: 37 additions & 1 deletion Task10/task.md
@@ -1 +1,37 @@
Create a PersistentVolume with name my-vol, of capacity 2Gi and access mode ReadWriteOnce. The type of volume is hostPath and its location is /path/to/file


## This is my answer

apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-vol
spec:
  storageClassName: manual
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: "/path/to/file"

## ChatGPT's answer and why I came close to the correct answer

apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-vol
spec:
  storageClassName: manual
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain # optional, but good to include
  hostPath:
    path: "/path/to/file"


## ChatGPT claimed my answer was missing a persistentVolumeSource block, but that advice is wrong: persistentVolumeSource is an inline struct in the API, so in YAML the hostPath field sits directly under spec. My original answer was already valid; the only real addition here is the optional persistentVolumeReclaimPolicy.
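
For completeness, here is a sketch of a PVC that would bind to the my-vol PV above. The claim name is made up, and it assumes no dynamic provisioner owns the "manual" storage class:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-vol-claim   # hypothetical name
spec:
  storageClassName: manual
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
```

With matching storageClassName, access mode, and a request no larger than the PV's capacity, the claim binds to my-vol.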
20 changes: 19 additions & 1 deletion Task11/task.md
@@ -1,3 +1,21 @@
Upgrade master control plane components from version 1.20.0 to only version 1.20.1.
• Drain the master before starting the upgrade and uncordon once the upgrade is completed
• Update the kubelet and kubeadm as well


## Below is my answer to this question

kubectl cordon <control-plane-node>

kubectl drain <control-plane-node> --ignore-daemonsets --delete-emptydir-data

## Upgrade kubeadm first, then use it to upgrade the control plane components

sudo apt-get update
sudo apt-get install -y kubeadm=1.20.1-00

sudo kubeadm upgrade apply v1.20.1

## Then upgrade the kubelet and restart it

sudo apt-get install -y kubelet=1.20.1-00
sudo apt-mark hold kubelet kubeadm
sudo systemctl daemon-reload
sudo systemctl restart kubelet

kubectl uncordon <control-plane-node>
25 changes: 24 additions & 1 deletion Task13/task.md
@@ -1 +1,24 @@
The node k8s-node is not in ready state. SSH to the node and troubleshoot the issue; make the changes permanent to avoid the problem in future.

## Below is my answer, or the steps I would take to solve this problem

kubectl get node

kubectl describe node k8s-node

ssh k8s-node

## After logging into k8s-node, check whether the kubelet agent is active

## Also check the other processes Kubernetes needs (e.g. the container runtime); if any of them are inactive, start them

## This is how I would solve this problem

systemctl status kubelet

sudo systemctl start kubelet

## Enable the service so it starts on boot, which makes the fix permanent

sudo systemctl enable kubelet

journalctl -u kubelet -f

12 changes: 12 additions & 0 deletions Task14/task.md
@@ -1 +1,13 @@
Check to see how many nodes are ready (not including nodes tainted NoSchedule) and write the number to /path/to/node


## Below is my answer to this problem

kubectl get nodes --no-headers | grep -cw Ready > /path/to/node


## Improved solution

## My first attempt, kubectl get nodes | grep "ready", matched nothing (the status column says "Ready") and wrote whole lines instead of a count. Field selectors cannot filter on node conditions either, so the NoSchedule taints need a second pass (this assumes the tainted nodes are also Ready; verify by hand on the exam):

ready=$(kubectl get nodes --no-headers | grep -cw Ready)
tainted=$(kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.taints[*].effect}{"\n"}{end}' | grep -cw NoSchedule)
echo $((ready - tainted)) > /path/to/node
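
The Ready-counting step can be sanity-checked without a cluster by replaying canned kubectl output through the same kind of filter. The node names below are made up:

```shell
# Canned `kubectl get nodes --no-headers` output (hypothetical nodes)
nodes='node-a   Ready      control-plane   10d   v1.20.1
node-b   Ready      <none>          10d   v1.20.1
node-c   NotReady   <none>          10d   v1.20.1'

# A naive case-sensitive grep for "Ready" would also hit "NotReady",
# so compare the whole status field instead:
printf '%s\n' "$nodes" | awk '$2 == "Ready" { n++ } END { print n }'   # prints 2
```

The same awk filter works on live `kubectl get nodes --no-headers` output.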
28 changes: 27 additions & 1 deletion Task16/task.md
@@ -1 +1,27 @@
Create a pod named kucc8 with a single app container for each of the following images running inside (there may be between 1 and 4 images specified): nginx + redis + memcached.


## Below is my answer

## kubectl run only honors a single --image flag, so it cannot generate a multi-container pod directly; generate a one-container skeleton and add the other containers by hand:

kubectl run kucc8 --image=nginx --dry-run=client -o yaml > k8s-exam-task16.yaml

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: kucc8
  name: kucc8
spec:
  containers:
  - image: nginx
    name: nginx
  - image: memcached
    name: memcached
  - image: redis
    name: redis
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}

16 changes: 15 additions & 1 deletion Task17/task.md
@@ -1 +1,15 @@
Set the node labelled with name:node-01 as unavailable and reschedule all the pods running on it.


## Below is my answer

kubectl get node -l name=node-01

kubectl cordon node-01

kubectl drain node-01 --ignore-daemonsets --delete-emptydir-data


## Optional: once maintenance is complete, make the node schedulable again

kubectl uncordon node-01


45 changes: 44 additions & 1 deletion Task18/task.md
@@ -7,4 +7,47 @@ Add a port specification named http exposing port 80/tcp of the existing contain



____WORK IN PROGRESS!!!!


## below is my attempt to answer this problem

kubectl edit deployment front-end


kubectl create deployment front-end --image=nginx --port=80 --dry-run=client -o yaml > front-end1.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: front-end
  name: front-end
spec:
  replicas: 1
  selector:
    matchLabels:
      app: front-end   # the selector must match the template labels; the stray foo: bar would make the Deployment invalid
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: front-end
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - name: http
          containerPort: 80

## Creating the service named front-end-svc

## kubectl create service needs a type subcommand (e.g. nodeport) and cannot set a custom pod selector, so kubectl expose is the better fit here:

kubectl expose deployment front-end --name=front-end-svc --port=80 --target-port=http --selector=app=front-end --type=NodePort
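
For reference, a sketch of the Service manifest that should result, assuming the deployment's pod template carries the label app: front-end:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: front-end-svc
spec:
  type: NodePort
  selector:
    app: front-end
  ports:
  - port: 80
    targetPort: http   # refers to the named container port
    protocol: TCP
```

Using the named port http as targetPort keeps the Service correct even if the container port number changes later.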


25 changes: 24 additions & 1 deletion Task19/task.md
@@ -2,4 +2,27 @@ Create a Pod as follows:

Name: jenkins
Using image: jenkins
In a new Kubernetes namespace named tools

## Below is my answer

kubectl create ns tools

kubectl run jenkins --image=jenkins --namespace=tools --dry-run=client -o yaml > jenkins.yaml

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: jenkins
  name: jenkins
  namespace: tools
spec:
  containers:
  - image: jenkins
    name: jenkins
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}