Commit d7884a0

N4A optional exercises (#60)
* move AKS2 to optional
* add dashboard screenshot
* lab4 - made AKS2 cafe & redis optional
* lab4 - update redis docs
* lab3 - rename dashboard test file
* lab5 - optional exercises
* fixed typos
1 parent 0cb4e88 commit d7884a0

File tree

7 files changed (+368, -266 lines)


labs/lab3/readme.md

Lines changed: 262 additions & 200 deletions
Large diffs are not rendered by default.

labs/lab4/readme.md

Lines changed: 54 additions & 21 deletions
Original file line number | Diff line number | Diff line change
@@ -1,14 +1,14 @@
1-
# Cafe Demo / Redis Deployment
1+
# Cafe Demo Deployment
22

33
## Introduction
44

5-
In this lab, you will deploy the Nginx Cafe Demo, and Redis In Memory cache applications to your AKS Clusters. You will configure Nginx Ingress to expose these applications external to the Clusters. You will use the Nginx Plus Dashboard to watch the Ingress Resources.
5+
In this lab, you will deploy the Nginx Cafe Demo app to your AKS Cluster. You will configure Nginx Ingress to expose this application outside the Cluster. You will use the Nginx Plus Dashboard to watch the Kubernetes and Ingress Resources.
66

77
<br/>
88

9-
Nginx Ingress | Cafe | Redis
10-
:--------------:|:--------------:|:--------------:
11-
![NIC](media/nginx-ingress-icon.png) |![Cafe](media/cafe-icon.png) |![Redis](media/redis-icon.png)
9+
Nginx Ingress | Cafe
10+
:--------------:|:--------------:
11+
![NIC](media/nginx-ingress-icon.png) |![Cafe](media/cafe-icon.png)
1212

1313
<br/>
1414

@@ -17,16 +17,17 @@ Nginx Ingress | Cafe | Redis
1717
By the end of the lab you will be able to:
1818

1919
- Deploy the Cafe Demo application
20-
- Deploy the Redis In Memory Cache
2120
- Expose the Cafe Demo app with NodePort
22-
- Expose the Redis Cache with NodePort
2321
- Monitor with Nginx Plus Ingress dashboard
22+
- Optional: Deploy the Redis application
23+
- Optional: Expose the Redis Cache with NodePort
2424

2525
## Pre-Requisites
2626

27-
- You must have both AKS Clusters up and running
28-
- You must have both Nginx Ingress Controllers running
29-
- You must have both the NIC Dashboards available
27+
- You must have your AKS Cluster up and running
28+
- You must have your Nginx Ingress Controller running
29+
- You must have your NIC Dashboard available
30+
- Optional: You must have your Second AKS cluster, Nginx Ingress, and Dashboard running
3031
- Familiarity with basic Linux commands and commandline tools
3132
- Familiarity with basic Kubernetes concepts and commands
3233
- Familiarity with Kubernetes NodePort
@@ -36,7 +37,7 @@ By the end of the lab you will be able to:
3637

3738
<br/>
3839

39-
## Deploy the Nginx Cafe Demo app
40+
## Deploy the Nginx Cafe Demo app in AKS1 Cluster
4041

4142
![Cafe App](media/cafe-icon.png)
4243

@@ -46,15 +47,15 @@ In this section, you will deploy the "Cafe Nginx" Ingress Demo, which represents
4647
- Matching coffee and tea services
4748
- Cafe VirtualServer
4849

49-
The Cafe application that you will deploy looks like the following diagram below. *BOTH* AKS clusters will have the Coffee and Tea pods and services, with NGINX Ingress routing the traffic for /coffee and /tea routes, using the `cafe.example.com` Hostname. There is also a third hidden service, more on that later!
50+
The Cafe application that you will deploy is shown in the diagram below. The AKS cluster will have the Coffee and Tea pods and services, with NGINX Ingress routing the traffic for the /coffee and /tea routes, using the `cafe.example.com` Hostname. There is also a third hidden service, more on that later!
5051

5152
![Lab4 diagram](media/lab4_diagram.png)
5253

5354
1. Inspect the `lab4/cafe.yaml` manifest. You will see that we are deploying 3 replicas each of the coffee and tea Pods, and creating a matching Service for each.
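   The manifest follows the standard Deployment-plus-Service pattern. A minimal sketch of the coffee half is shown below; the actual `lab4/cafe.yaml` may differ in image, labels, and ports, so treat every name here as illustrative:

   ```yaml
   # Sketch only - the real lab4/cafe.yaml may use a different image and labels
   apiVersion: apps/v1
   kind: Deployment
   metadata:
     name: coffee
   spec:
     replicas: 3                 # 3 coffee Pods, as described above
     selector:
       matchLabels:
         app: coffee
     template:
       metadata:
         labels:
           app: coffee
       spec:
         containers:
         - name: coffee
           image: nginxdemos/nginx-hello:plain-text   # assumed demo image
           ports:
           - containerPort: 8080
   ---
   apiVersion: v1
   kind: Service
   metadata:
     name: coffee-svc
   spec:
     ports:
     - port: 80
       targetPort: 8080
       protocol: TCP
     selector:
       app: coffee               # matches the Deployment's Pod labels
   ```

   The tea Deployment and Service follow the same pattern with `tea` substituted throughout.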
5455

55-
2. Inspect the `lab4/cafe-vs.yaml` manifest. This is the Nginx Ingress VirtualServer CRD (Custom Resource Definition) used by Nginx Ingress to expose these apps, using the `cafe.example.com` Hostname. You will also see that active healthchecks are enabled, and the /coffee and /tea routes are being used. (NOTE: The VirtualServer CRD from Nginx is an `upgrade` to the standard Kubernetes Ingress object).
56+
2. Inspect the `lab4/cafe-vs.yaml` manifest. This is the Nginx Ingress VirtualServer CRD (Custom Resource Definition) used by Nginx Ingress to expose these apps, using the `cafe.example.com` Hostname. You will also see that active healthchecks are enabled, and the /coffee and /tea routes are being used. (NOTE: The VirtualServer CRD from Nginx unlocks all the Plus features of Nginx, and is an `upgrade` to the standard Kubernetes Ingress object).
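   The wiring described in this step looks roughly like the following sketch, assuming the standard `k8s.nginx.org/v1` VirtualServer API; service names and values are illustrative, not copied from the lab file:

   ```yaml
   # Sketch only - illustrates the VirtualServer fields described above
   apiVersion: k8s.nginx.org/v1
   kind: VirtualServer
   metadata:
     name: cafe-vs
   spec:
     host: cafe.example.com      # Hostname used by Nginx Ingress
     upstreams:
     - name: tea
       service: tea-svc
       port: 80
       healthCheck:
         enable: true            # active healthchecks, per the text
     - name: coffee
       service: coffee-svc
       port: 80
       healthCheck:
         enable: true
     routes:
     - path: /tea
       action:
         pass: tea
     - path: /coffee
       action:
         pass: coffee
   ```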
5657

57-
3. Deploy the Cafe application by applying these two manifests in first cluster:
58+
3. Deploy the Cafe application by applying these two manifests in the first cluster:
5859

5960
> Make sure your Terminal is in the `nginx-azure-workshops/labs` directory for all commands during this Workshop.
6061
@@ -140,15 +141,19 @@ The Cafe application that you will deploy looks like the following diagram below
140141

141142
>**NOTE:** The `STATE` should be `Valid`. If it is not, then there is an issue with your yaml manifest file (cafe-vs.yaml). You could also use `kubectl describe vs cafe-vs` to get more information about the VirtualServer you just created.
142143

143-
7. Check your Nginx Plus Ingress Controller Dashboard for first cluster(`n4a-aks1`), at http://dashboard.example.com:9001/dashboard.html. You should now see `cafe.example.com` in the **HTTP Zones** tab, and 2 each of the coffee and tea Pods in the **HTTP Upstreams** tab. Nginx is health checking the Pods, so they should show a Green status.
144+
7. Check your Nginx Plus Ingress Controller Dashboard for the first cluster (`n4a-aks1`), at http://dashboard.example.com:9001/dashboard.html. You should now see `cafe.example.com` in the **HTTP Zones** tab, and 2 each of the coffee and tea Pods in the **HTTP Upstreams** tab. Nginx is health checking the Pods, so they should show a Green status, and the successful Health Checks counter should be increasing.
144145

145146
![Cafe Zone](media/lab4_http-zones.png)
146147

147148
![Cafe Upstreams](media/lab4_cafe-upstreams-2.png)
148149

149-
>**NOTE:** You should see two Coffee/Tea pods in Cluster 1.
150+
>**NOTE:** You should see two each of the Coffee/Tea pods in Cluster AKS1.
151+
152+
<br/>
150153

151-
## Deploy the Nginx Cafe Demo app in the 2nd cluster
154+
## Optional: Deploy the Nginx Cafe Demo app in the 2nd cluster
155+
156+
If you have completed the Optional deployment of a Second AKS Cluster (n4a-aks2), running with the Nginx Ingress Controller and the Dashboard, you can use the following steps to deploy the Nginx Cafe Demo app to your Second cluster.
152157

153158
1. Repeat the previous section to deploy the Cafe Demo app in your second cluster (`n4a-aks2`). Don't forget to change your Kubectl Context using the command below.
154159
@@ -163,20 +168,20 @@ The Cafe application that you will deploy looks like the following diagram below
163168
164169
2. Use the same /lab4 `cafe` and `cafe-vs` manifests.
165170
166-
>*However - do not Scale down the coffee and tea replicas, leave three of each pod running in AKS2.*
167-
168171
```bash
169172
kubectl apply -f lab4/cafe.yaml
170173
kubectl apply -f lab4/cafe-vs.yaml
171174
```
172175
176+
>*However - do not scale down the coffee and tea replicas; leave three of each pod running in AKS2.*
177+
173178
3. Check your Second Nginx Plus Ingress Controller Dashboard, at http://dashboard.example.com:9002/dashboard.html. You should find the same HTTP Zones, and 3 each of the coffee and tea pods for HTTP Upstreams.
174179
175180
![Cafe Upstreams](media/lab4_cafe-upstreams-3.png)
176181
177182
<br/>
178183
179-
## Deploy Redis In Memory Caching in AKS Cluster 2 (n4a-aks2)
184+
## Optional: Deploy Redis In Memory Caching in Cluster AKS2 (n4a-aks2)
180185
181186
Azure | Redis
182187
:--------------:|:--------------:
@@ -302,6 +307,34 @@ In this exercise, you will deploy Redis in your second cluster (`n4a-aks2`), and
302307
303308
```
304309
310+
1. Inspect the Nginx TransportServer manifests for the Redis Leader and Redis Follower, `redis-leader-ts.yaml` and `redis-follower-ts.yaml` respectively. Take note that you are creating a Layer 4, TCP TransportServer, listening on the Redis standard port 6379. You are limiting the active connections to 100, and using the `Least Time Last Byte` Nginx Plus load balancing algorithm - telling Nginx to pick the *fastest* Redis pod, based on response time, for new TCP connections!
311+
312+
```nginx
313+
# NIC Plus TransportServer file
314+
# Add ports 6379 for Redis Leader
315+
# Chris Akker, Jan 2024
316+
#
317+
apiVersion: k8s.nginx.org/v1alpha1
318+
kind: TransportServer
319+
metadata:
320+
name: redis-leader-ts
321+
spec:
322+
listener:
323+
name: redis-leader-listener
324+
protocol: TCP
325+
upstreams:
326+
- name: redis-upstream
327+
service: redis-leader
328+
port: 6379
329+
maxFails: 3
330+
maxConns: 100
331+
failTimeout: 10s
332+
loadBalancingMethod: least_time last_byte # use fastest pod
333+
action:
334+
pass: redis-upstream
335+
336+
```
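   For reference, the TransportServer above maps onto a plain Nginx Plus `stream{}` configuration roughly like the following. This is a hand-written sketch to show the correspondence, not generated output, and the upstream server addresses are invented placeholders:

   ```nginx
   # Sketch only - approximate stream config the TransportServer produces
   stream {
       upstream redis-upstream {
           zone redis-upstream 64k;
           least_time last_byte;     # pick the fastest pod by response time
           # Pod endpoints are discovered dynamically; addresses here are placeholders
           server 10.0.0.11:6379 max_fails=3 fail_timeout=10s max_conns=100;
           server 10.0.0.12:6379 max_fails=3 fail_timeout=10s max_conns=100;
       }
       server {
           listen 6379;              # Redis standard port
           proxy_pass redis-upstream;
       }
   }
   ```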
337+
305338
1. Create the Nginx Ingress Transport Servers, for Redis Leader and Follower traffic, using the Transport Server CRD:
306339
307340
```bash
@@ -412,7 +445,7 @@ Service Port | External NodePort | Name
412445
6380 | 32380 | redis follower
413446
9000 | 32090 | dashboard
414447
415-
You will use these new Redis NodePorts for your Nginx for Azure upstreams in the next Lab.
448+
You will use these new Redis NodePorts for your Nginx for Azure upstreams in an Optional Lab Exercise.
416449
417450
<br/>
418451

labs/lab4/redis-follower-ts.yaml

Lines changed: 1 addition & 1 deletion
Original file line number | Diff line number | Diff line change
@@ -17,6 +17,6 @@ spec:
1717
maxFails: 3
1818
maxConns: 100
1919
failTimeout: 10s
20-
loadBalancingMethod: least_time last_byte
20+
loadBalancingMethod: least_time last_byte # use fastest pod
2121
action:
2222
pass: redis-upstream

labs/lab4/redis-leader-ts.yaml

Lines changed: 1 addition & 1 deletion
Original file line number | Diff line number | Diff line change
@@ -17,6 +17,6 @@ spec:
1717
maxFails: 3
1818
maxConns: 100
1919
failTimeout: 10s
20-
loadBalancingMethod: least_time last_byte
20+
loadBalancingMethod: least_time last_byte # use fastest pod
2121
action:
2222
pass: redis-upstream
