diff --git a/Task1/solution/pvc.yml b/Task1/solution/pvc.yml
index 9e09ecc..47f3370 100644
--- a/Task1/solution/pvc.yml
+++ b/Task1/solution/pvc.yml
@@ -8,4 +8,55 @@ spec:
   - ReadWriteMany
   resources:
     requests:
-      storage: 10Gi
\ No newline at end of file
+      storage: 10Gi
+
+## Add a Pod as below (the task asks for the PVC to be mounted, so a volumeMounts entry is included)
+
+apiVersion: v1
+kind: Pod
+metadata:
+  name: nginx-pod
+spec:
+  containers:
+  - name: nginx
+    image: nginx
+    ports:
+    - containerPort: 80
+    volumeMounts:
+    - mountPath: /usr/share/html
+      name: my-pvc
+  volumes:
+  - name: my-pvc
+    persistentVolumeClaim:
+      claimName: my-pvc
+
+---
+
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: my-deployment
+  namespace: web-tier
+  labels:
+    app: my-deployment
+spec:
+  replicas: 3
+  selector:
+    matchLabels:
+      app: my-deployment
+  strategy: {}
+  template:
+    metadata:
+      labels:
+        app: my-deployment
+    spec:
+      containers:
+      - name: nginx
+        image: nginx
+        ports:
+        - containerPort: 8080
+        resources: {}
+
+
+## What is wrong with the command below, meant to create a service account?
+kubectl create serviceaccount my-serviceaccount --namespace=my-namespace --dry-run=client -o yaml | kubectl apply -f -
+## Nothing in the syntax itself — but note it creates a ServiceAccount, not a pod, and it fails if the namespace my-namespace does not already exist
diff --git a/Task1/task.md b/Task1/task.md
index 662a0c5..f7d8522 100644
--- a/Task1/task.md
+++ b/Task1/task.md
@@ -3,3 +3,35 @@
 • Assign the PVC to the pod named nginx-pod with image nginx and mount to path /usr/share/html
 • Ensure the pod claims the volume as ReadWriteMany access mode
 • Use kubectl patch or kubectl edit to update the capacity of the PVC to 70Gi and record the change.
+
+
+## Below is my answer
+
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+  name: my-pvc
+spec:
+  accessModes:
+  - ReadWriteMany
+  resources:
+    requests:
+      storage: 10Gi
+  storageClassName: k8s-csi-plugin
+
+---
+
+apiVersion: v1
+kind: Pod
+metadata:
+  name: nginx-pod
+spec:
+  containers:
+  - name: nginx-pod
+    image: nginx
+    volumeMounts:
+    - mountPath: "/usr/share/html"
+      name: my-pvc
+  volumes:
+  - name: my-pvc
+    persistentVolumeClaim:
+      claimName: my-pvc
diff --git a/Task10/solution/pv.yaml b/Task10/solution/pv.yaml
index 35bcebe..5e344e6 100644
--- a/Task10/solution/pv.yaml
+++ b/Task10/solution/pv.yaml
@@ -3,6 +3,21 @@ kind: PersistentVolume
 metadata:
   name: my-vol
 spec:
+  capacity:
+    storage: 2Gi
+  accessModes:
+  - ReadWriteOnce
+  hostPath:
+    path: "/path/to/file"
+
+
+---
+apiVersion: v1
+kind: PersistentVolume
+metadata:
+  name: my-vol
+spec:
+  storageClassName: manual
   capacity:
     storage: 2Gi
   accessModes:
diff --git a/Task10/task.md b/Task10/task.md
index 2542e66..56c38d0 100644
--- a/Task10/task.md
+++ b/Task10/task.md
@@ -1 +1,37 @@
-Create a persistent Volume with name my-vol, of capactiy 2Gi and access mode ReadWriteOnce. The type of volume is hostpath and its location is /path/to/file
\ No newline at end of file
+Create a persistent Volume with name my-vol, of capacity 2Gi and access mode ReadWriteOnce.
The type of volume is hostPath and its location is /path/to/file
+
+
+## This is my answer
+
+apiVersion: v1
+kind: PersistentVolume
+metadata:
+  name: my-vol
+spec:
+  storageClassName: manual
+  capacity:
+    storage: 2Gi
+  accessModes:
+  - ReadWriteOnce
+  hostPath:
+    path: "/path/to/file"
+
+## ChatGPT's answer, and why I only came close to the correct answer
+
+apiVersion: v1
+kind: PersistentVolume
+metadata:
+  name: my-vol
+spec:
+  storageClassName: manual
+  capacity:
+    storage: 2Gi
+  accessModes:
+  - ReadWriteOnce
+  persistentVolumeReclaimPolicy: Retain # optional, but good to include
+  persistentVolumeSource:
+    hostPath:
+      path: "/path/to/file"
+
+
+## ChatGPT replied "your answer is almost correct, but you're missing the persistentVolumeSource definition to properly define the hostPath" — that feedback is wrong: in the v1 API the volume source (hostPath) goes directly under spec, and persistentVolumeSource is not a valid manifest field, so my answer above was already correct
\ No newline at end of file
diff --git a/Task11/task.md b/Task11/task.md
index e9433d3..20d48c2 100644
--- a/Task11/task.md
+++ b/Task11/task.md
@@ -1,3 +1,21 @@
 Upgrade master control plane components from version 1.20.0 to only version 1.20.1.
 • Drain the master before starting the upgrade and uncordon once the upgrade is completed
-• Update the kubelet and kubeadm as well
\ No newline at end of file
+• Update the kubelet and kubeadm as well
+
+
+## Below is my answer to this question
+
+kubectl cordon <control-plane-node>
+
+kubectl drain <control-plane-node> --ignore-daemonsets --delete-emptydir-data
+
+## Upgrade the kubeadm package first, then apply the control-plane upgrade
+
+sudo apt-get install -y kubeadm=1.20.1-00
+sudo kubeadm upgrade apply v1.20.1
+
+sudo apt-get install -y kubelet=1.20.1-00
+sudo apt-mark hold kubelet kubeadm
+sudo systemctl daemon-reload
+sudo systemctl restart kubelet
+
+kubectl uncordon <control-plane-node>
diff --git a/Task13/task.md b/Task13/task.md
index d70e56a..e35c718 100644
--- a/Task13/task.md
+++ b/Task13/task.md
@@ -1 +1,24 @@
-The node k8s-node is not in ready state. Ssh to the node and troubleshoot the issue, make the changes permanent to avoid the problem in future.
\ No newline at end of file
+The node k8s-node is not in ready state. SSH to the node and troubleshoot the issue; make the changes permanent to avoid the problem in future.
+
+## Below are the steps I would take to solve this problem
+
+kubectl get node
+
+kubectl describe node k8s-node
+
+ssh k8s-node
+
+## After logging into k8s-node, look at the kubelet agent to see if it is active
+
+## Also look at the other processes needed to run k8s, and see if those are active too
+
+## If any processes needed to run k8s are not active, you need to start them
+
+## This is how I would solve this problem
+
+systemctl status kubelet
+
+sudo systemctl start kubelet
+
+sudo systemctl enable kubelet   ## enable makes the change permanent across reboots
+
+journalctl -u kubelet -f
diff --git a/Task14/task.md b/Task14/task.md
index 17a9ccb..00ccd1b 100644
--- a/Task14/task.md
+++ b/Task14/task.md
@@ -1 +1,13 @@
 Check to see how many nodes are ready (not including nodes tainted NoSchedule) and write the number to /path/to/node
+
+
+## Below is my answer to this problem
+
+kubectl get nodes | grep -cw Ready > /path/to/node   ## grep is case-sensitive ("Ready", not "ready"), and -c writes a count instead of the matching lines
+
+## Improved solution — the earlier field-selector pipeline was invalid (field selectors do not accept JSONPath), so filter tainted nodes out explicitly:
+
+for node in $(kubectl get nodes --no-headers | awk '$2=="Ready" {print $1}'); do
+  kubectl describe node "$node" | grep -q NoSchedule || echo "$node"
+done | wc -l > /path/to/node
diff --git a/Task16/task.md b/Task16/task.md
index f9c3ec1..5f135ab 100644
--- a/Task16/task.md
+++ b/Task16/task.md
@@ -1 +1,27 @@
-Create a pod named kucc8 with a single app container for each of the following images running inside (there may be between 1 and 4 images specified): nginx + redis + memcached.
\ No newline at end of file
+Create a pod named kucc8 with a single app container for each of the following images running inside (there may be between 1 and 4 images specified): nginx + redis + memcached.
+
+
+## Below is my answer
+
+kubectl run kucc8 --image=nginx --dry-run=client -o yaml > k8s-exam-task16.yaml   ## kubectl run accepts only a single --image (repeating the flag keeps just the last one), so generate YAML for one container and add the others by hand
+
+apiVersion: v1
+kind: Pod
+metadata:
+  creationTimestamp: null
+  labels:
+    run: kucc8
+  name: kucc8
+spec:
+  containers:
+  - image: nginx
+    name: nginx
+  - image: memcached
+    name: memcached
+  - image: redis
+    name: redis
+    resources: {}
+  dnsPolicy: ClusterFirst
+  restartPolicy: Always
+status: {}
+
diff --git a/Task17/task.md b/Task17/task.md
index 302e0fa..6c93792 100644
--- a/Task17/task.md
+++ b/Task17/task.md
@@ -1 +1,15 @@
-Set the node labelled with name:node-01 as unavailable and reschedule all the pods running on it.
\ No newline at end of file
+Set the node labelled with name:node-01 as unavailable and reschedule all the pods running on it.
+
+
+## Below is my answer
+
+kubectl get node -l name=node-01
+
+kubectl cordon node-01
+
+kubectl drain node-01 --ignore-daemonsets --delete-emptydir-data
+
+
+kubectl uncordon node-01   ## only after maintenance, once the node should accept pods again
+
+
diff --git a/Task18/task.md b/Task18/task.md
index f25744f..17744c3 100644
--- a/Task18/task.md
+++ b/Task18/task.md
@@ -7,4 +7,47 @@ Add a port specification named http exposing port 80/tcp of the existing contain
-____WORKINPROGESS!!!!
\ No newline at end of file
+____WORK IN PROGRESS!!!
+
+
+## Below is my attempt to answer this problem
+
+kubectl edit deployment front-end
+
+
+kubectl create deployment front-end --image=nginx --port=80 --dry-run=client -o yaml > front-end1.yaml
+
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  creationTimestamp: null
+  labels:
+    app: front-end
+  name: front-end
+spec:
+  replicas: 1
+  selector:
+    matchLabels:
+      app: front-end   ## the selector must match the template labels — an extra foo: bar here would make the Deployment invalid
+  strategy: {}
+  template:
+    metadata:
+      creationTimestamp: null
+      labels:
+        app: front-end
+    spec:
+      containers:
+      - name: nginx
+        image: nginx
+        ports:
+        - name: http
+          containerPort: 80
+
+## Creating the service named front-end-svc
+
+kubectl create service front-end-svc --tcp=80:8080 -l name=bar   ## invalid: kubectl create service needs a type subcommand (e.g. nodeport) and has no -l flag
+
+kubectl expose deployment front-end --name=front-end-svc --port=80 --target-port=http --type=NodePort   ## expose reuses the Deployment's own selector, so no --selector is needed
+
+
diff --git a/Task19/task.md b/Task19/task.md
index ca24f70..bcb5415 100644
--- a/Task19/task.md
+++ b/Task19/task.md
@@ -2,4 +2,27 @@ Create a Pod as follows:
 Name: jenkins
 Using image: jenkins
-In a new Kubenetes namespace named tools
\ No newline at end of file
+In a new Kubernetes namespace named tools
+
+## Below is my answer
+
+kubectl create ns tools
+
+kubectl run jenkins --image=jenkins --namespace=tools --dry-run=client -o yaml > jenkins.yaml
+
+apiVersion: v1
+kind: Pod
+metadata:
+  creationTimestamp: null
+  labels:
+    run: jenkins
+  name: jenkins
+  namespace: tools
+spec:
+  containers:
+  - image: jenkins
+    name: jenkins
+    resources: {}
+  dnsPolicy: ClusterFirst
+  restartPolicy: Always
+status: {}
\ No newline at end of file
diff --git a/Task2/task.md b/Task2/task.md
index 12bd095..fccee17 100644
--- a/Task2/task.md
+++ b/Task2/task.md
@@ -7,4 +7,69 @@
 • Replicaset
 • Pods
-Ensure only the newly created service account can use the role and it is effective with the name space my-ns.
\ No newline at end of file
+Ensure only the newly created service account can use the role and it is effective within the namespace my-ns.
+
+
+## Below is my answer
+
+kubectl create ns my-ns
+
+---
+
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+  creationTimestamp: null
+  name: my-sa
+  namespace: my-ns
+
+---
+
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRole
+metadata:
+  creationTimestamp: null
+  name: new-cluster-role
+rules:
+- apiGroups:
+  - ""
+  resources:
+  - pods
+  verbs:
+  - create
+  - list
+- apiGroups:
+  - apps
+  resources:
+  - daemonsets
+  - deployments
+  - replicasets
+  verbs:
+  - create
+  - list
+
+kubectl create clusterrole new-cluster-role --verb=create,list --resource=daemonsets,deployments,replicasets,pods --dry-run=client -o yaml > new-cluster-role.yml   ## ClusterRoles are cluster-scoped, so -n has no effect here
+
+kubectl create rolebinding new-cluster-role-binding --clusterrole=new-cluster-role --serviceaccount=my-ns:my-sa -n my-ns --dry-run=client -o yaml > rolebinding.yml   ## a RoleBinding in my-ns keeps the role effective only within that namespace, and the subject must be my-ns:my-sa, not default:my-sa
+
+apiVersion: rbac.authorization.k8s.io/v1
+kind: RoleBinding
+metadata:
+  creationTimestamp: null
+  name: new-cluster-role-binding
+  namespace: my-ns
+roleRef:
+  apiGroup: rbac.authorization.k8s.io
+  kind: ClusterRole
+  name: new-cluster-role
+subjects:
+- kind: ServiceAccount
+  name: my-sa
+  namespace: my-ns
+
+
+
+# Apply the ClusterRole and RoleBinding
+kubectl apply -f new-cluster-role.yml
+kubectl apply -f rolebinding.yml
+
diff --git a/Task20/task.md b/Task20/task.md
index ea50996..2ad089d 100644
--- a/Task20/task.md
+++ b/Task20/task.md
@@ -2,4 +2,29 @@ Create a Static pod
 Name: consul
 Using image: consul
-In a new Kubenetes namespace named tools
\ No newline at end of file
+In a new Kubernetes namespace named tools
+
+
+## Below is my answer
+
+kubectl create ns tools
+
+kubectl run consul --image=consul -n tools --dry-run=client -o yaml > static-pod.yaml
+
+## A static pod is run by the kubelet directly, so the manifest must be placed in the node's static-pod directory (usually /etc/kubernetes/manifests), not applied with kubectl
+
+apiVersion: v1
+kind: Pod
+metadata:
+  creationTimestamp: null
+  labels:
+    run: consul
+  name: consul
+  namespace: tools
+spec:
+  containers:
+  - image: consul
+    name: consul
+    resources: {}
+  dnsPolicy: ClusterFirst
+  restartPolicy: Always
+status: {}
+
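The static-pod answer above generates a manifest but never places it where the kubelet can see it. A sketch of the missing step, assuming the default kubelet static-pod path /etc/kubernetes/manifests and a target node reachable as node-01 (both assumptions — check the kubelet's --pod-manifest-path / staticPodPath setting):

```shell
# Generate the manifest (requires a cluster for the kubectl calls)
kubectl create ns tools
kubectl run consul --image=consul -n tools --dry-run=client -o yaml > static-pod.yaml

# Copy it into the kubelet's static-pod directory on the target node;
# the kubelet starts the pod automatically — no `kubectl apply` is involved
scp static-pod.yaml node-01:/tmp/consul.yaml
ssh node-01 'sudo mv /tmp/consul.yaml /etc/kubernetes/manifests/consul.yaml'
```

The resulting mirror pod then shows up in `kubectl get pods -n tools` with the node name appended (e.g. consul-node-01).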
diff --git a/Task21/task.md b/Task21/task.md
index 8280470..ce51881 100644
--- a/Task21/task.md
+++ b/Task21/task.md
@@ -2,9 +2,10 @@ A Kubernetes worker node, labelled with name "node-01" is in state NotReady .
 ```
 $ kubectl get nodes
+kubectl describe node node-01
 $ ssh node-01
 $ sudo -i
-$ systemctl status kubelet
-$ systemctl start kubelet
-$ systemctl enable kubelet
+$ systemctl status kubelet ## checking the kubelet agent on the worker node
+$ systemctl start kubelet ## starting the kubelet on the worker node
+$ systemctl enable kubelet ## making the fix permanent across reboots
 ```
\ No newline at end of file
diff --git a/Task25/task.md b/Task25/task.md
index b96a7b7..c7b0afb 100644
--- a/Task25/task.md
+++ b/Task25/task.md
@@ -1 +1,33 @@
-Creae a persistent volume with name my-vol of capacity 10Gi and access mode ReadWriteOnce. The type of volume is hostPath and its location is /test/path
\ No newline at end of file
+Create a persistent volume with name my-vol of capacity 10Gi and access mode ReadWriteOnce. The type of volume is hostPath and its location is /test/path
+
+
+## Below is my answer
+
+apiVersion: v1
+kind: PersistentVolume
+metadata:
+  name: my-vol
+spec:
+  capacity:
+    storage: 10Gi
+  accessModes:
+  - ReadWriteOnce
+  hostPath:
+    path: /test/path
\ No newline at end of file
diff --git a/Task3/task.md b/Task3/task.md
index d729fe4..f654b5a 100644
--- a/Task3/task.md
+++ b/Task3/task.md
@@ -1 +1,5 @@
-From the pod label name=cpu-burner, find pods running high CPU workloads and Write the name of the pod consuming most CPU to the file /tmp/cpu.txt.
\ No newline at end of file
+From the pods labelled name=cpu-burner, find the ones running high CPU workloads and write the name of the pod consuming the most CPU to the file /tmp/cpu.txt.
+
+## Below is my answer
+
+kubectl top pod -l name=cpu-burner --sort-by=cpu --no-headers | head -n 1 | awk '{print $1}' > /tmp/cpu.txt   ## --sort-by=cpu is more reliable than piping the "250m"-style CPU column through sort -nr
diff --git a/Task4/task.md b/Task4/task.md
index 52816c5..05f4109 100644
--- a/Task4/task.md
+++ b/Task4/task.md
@@ -1 +1,15 @@
-Schedule a pod as follows Name :- nginx01 image :- nginx Node Selector :- name=node.
\ No newline at end of file
+Schedule a pod as follows: Name: nginx01, image: nginx, Node Selector: name=node.
+
+
+## Below is my answer
+
+apiVersion: v1
+kind: Pod
+metadata:
+  name: nginx01
+spec:
+  containers:
+  - name: nginx
+    image: nginx
+  nodeSelector:
+    name: node
diff --git a/Task5/task.md b/Task5/task.md
index 2f0021e..e9bc2b6 100644
--- a/Task5/task.md
+++ b/Task5/task.md
@@ -2,4 +2,14 @@ Create a new nginx Ingress resource as follows:
 o Name: nginx-ingress
 o Namespace: ingress-ns
 o Exposing service me.html on path /me.html using service port 8080
-o Exposing service test on path /test using service port 8080
\ No newline at end of file
+o Exposing service test on path /test using service port 8080
+
+
+
+kubectl create ingress nginx-ingress --namespace=ingress-ns --rule="foo.com/bar=svc1:8080,tls=my-cert"   ## generic example from the kubectl help text, not the answer
+
+kubectl create ingress nginx-ingress --namespace=ingress-ns --class=default \
+  --rule="/me.html=me.html:8080" \
+  --rule="/test=test:8080" \
+  --dry-run=client -o yaml > ingress.yaml
\ No newline at end of file
diff --git a/Task6/task.md b/Task6/task.md
index 483addc..1b502fd 100644
--- a/Task6/task.md
+++ b/Task6/task.md
@@ -9,4 +9,51 @@ NB: There will be a default deny all from any namespace netpol will be already p
-____WORKINPROGESS!!!!
\ No newline at end of file
+____WORK IN PROGRESS!!!
+
+## This is me trying to get it right without using ChatGPT
+
+apiVersion: networking.k8s.io/v1
+kind: NetworkPolicy
+metadata:
+  name: k8s-netpol
+  namespace: namespace-netpol
+spec:
+  policyTypes:
+  - Ingress
+  - Egress
+  ports:   ## invalid here — ports belong inside ingress/egress rules, and spec is also missing podSelector
+  - protocol: TCP
+    port: 9200
+
+
+
+## This is the corrected version below
+
+apiVersion: networking.k8s.io/v1
+kind: NetworkPolicy
+metadata:
+  name: k8s-netpol
+  namespace: namespace-netpol
+spec:
+  podSelector: {}
+  policyTypes:
+  - Ingress
+  - Egress
+  ingress:
+  - from:
+    - namespaceSelector:
+        matchLabels:
+          name: internal
+    ports:
+    - protocol: TCP
+      port: 9200
+  egress:
+  - to:
+    - namespaceSelector:
+        matchLabels:
+          name: internal
+    ports:
+    - protocol: TCP
+      port: 9200
\ No newline at end of file
diff --git a/Task7/task.md b/Task7/task.md
index 4c23c7f..133a42f 100644
--- a/Task7/task.md
+++ b/Task7/task.md
@@ -6,6 +6,19 @@ Do not alter the application container and verify the logs are written properly
 logging-pod primary container logger does not write to any volumes, you need to create a volume from empty dir and mount it appropriately.
+## Hard one, would need to come back and complete it
+____WORK IN PROGRESS!!!
-____WORKINPROGESS!!!!
\ No newline at end of file
+
+kubectl run busybox --image=busybox
+
+
+apiVersion: v1
+kind: Pod
+metadata:
+  name: logging-pod
+spec:
+  containers:
+  - name: logging-pod
+    image: busybox
\ No newline at end of file
diff --git a/Task8/task.md b/Task8/task.md
index 3a66b4d..6ba22da 100644
--- a/Task8/task.md
+++ b/Task8/task.md
@@ -1 +1,9 @@
-Monitor the logs of pod loggy and extract log lines issue-not-found. Write the output to /tmp/pod.txt.
\ No newline at end of file
+Monitor the logs of pod loggy and extract the log lines containing issue-not-found. Write the output to /tmp/pod.txt.
+
+kubectl get logs ## wrong — the correct command is kubectl logs
+kubectl get pods loggy
+kubectl exec -it loggy
+cp var/logs > /tmp/pod.txt ## wrong — this mixes cp with shell redirection and the source path is incomplete
+
+## Below is the correct command
+kubectl logs loggy | grep "issue-not-found" > /tmp/pod.txt
diff --git a/Task9-1/task.md b/Task9-1/task.md
index 94ad1c2..5c69856 100644
--- a/Task9-1/task.md
+++ b/Task9-1/task.md
@@ -4,4 +4,19 @@ Name: nginx
 Using container nginx with version 1.11-alpine
 The deployment should contain 3 replicas
 Next, deploy the app with new version 1.13-alpine by performing a rolling update and record that update.
-Finally, rollback that update to the previous version 1.11-alpine
\ No newline at end of file
+Finally, rollback that update to the previous version 1.11-alpine
+
+
+## This is my answer
+
+kubectl create deployment nginx --image=nginx:1.11-alpine --replicas=3   ## the task names the deployment nginx, not my-dep
+
+kubectl set image deployment/nginx nginx=nginx:1.13-alpine --record
+
+kubectl rollout status deployment/nginx
+
+kubectl rollout undo deployment/nginx
+
+
diff --git a/Task9/task.md b/Task9/task.md
index 0d2ed2a..02caaab 100644
--- a/Task9/task.md
+++ b/Task9/task.md
@@ -1 +1,44 @@
-Scale the deployment learning to 3 pods.
\ No newline at end of file
+Scale the deployment learning to 3 pods
+
+## Output of kubectl get deployment -o yaml (note: this captured deployment is named my-dep, not learning as the task asks)
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  creationTimestamp: "2024-10-17T21:53:19Z"
+  generation: 1
+  labels:
+    app: my-dep
+  name: my-dep
+  namespace: kube-system
+  resourceVersion: "95171"
+  uid: 0b1637d3-897c-4b68-ab6c-e60360dae549
+spec:
+  progressDeadlineSeconds: 600
+  replicas: 3
+  revisionHistoryLimit: 10
+  selector:
+    matchLabels:
+      app: my-dep
+  strategy:
+    rollingUpdate:
+      maxSurge: 25%
+      maxUnavailable: 25%
+    type: RollingUpdate
+  template:
+    metadata:
+      creationTimestamp: null
+      labels:
+        app: my-dep
+    spec:
+      containers:
+      - image: busybox
+        imagePullPolicy: Always
+        name: busybox
+        resources: {}
+        terminationMessagePath: /dev/termination-log
+        terminationMessagePolicy: File
+      dnsPolicy: ClusterFirst
+      restartPolicy: Always
+      schedulerName: default-scheduler
+      securityContext: {}
+      terminationGracePeriodSeconds: 30
+status: {}
\ No newline at end of file
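
The Deployment dump above already shows replicas: 3, but the direct way to do what the task asks is the scale command — a minimal sketch, assuming a deployment named learning exists in the current namespace (the dump's my-dep in kube-system does not match it):

```shell
# Scale the existing Deployment "learning" to 3 replicas (requires a live cluster)
kubectl scale deployment/learning --replicas=3

# Verify the desired replica count was updated
kubectl get deployment learning -o jsonpath='{.spec.replicas}'
```

`kubectl scale` edits only .spec.replicas, so it avoids hand-editing the full YAML shown above.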