src/routes/blog/posts/changing-service-mesh/+page.md
author: Frode Sundby
tags: [istio, linkerd, LoadBalancing]
---
## Why change?
With an ambition of making our environments as secure as possible, we jumped on the service-mesh bandwagon in 2018 with Istio 0.7 and have stuck with it since.
We looked to the grand ol' Internet for alternatives and fixed our gaze on the r
Having homed in on our preferred candidate, we decided to take it for a quick spin in a cluster and found our suspicions to be accurate.
Rarely has a meme depicted a feeling more strongly.
However, if we'd started shipping traffic to the new components at this stage, things would have started breaking: there were no Ingresses in the cluster, only VirtualServices.
To avoid downtime, we created an interim ingress that forwarded all traffic to the Istio IngressGateway:
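The interim ingress was essentially a catch-all rule with no host, sending everything Nginx didn't have a more specific Ingress for on to Istio. The original manifest isn't shown in the post, so this is a sketch; the resource name, namespace, and ports are assumptions:

```yaml
# Hypothetical reconstruction of the interim catch-all Ingress.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: istio-fallback          # assumed name
  namespace: istio-system       # assumed namespace
spec:
  ingressClassName: nginx
  rules:
    # No "host" field: this rule matches any hostname that no other
    # Ingress claims, so unmigrated traffic falls through to Istio.
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: istio-ingressgateway   # the existing Istio gateway Service
                port:
                  number: 80                 # assumed port
```

Because the rule has no `host`, Nginx only uses it when nothing more specific matches, which is exactly the fall-through behaviour a gradual migration needs.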
With this ingress in place, we could reach all the existing VirtualServices exposed by the Istio IngressGateway via the new LoadBalancers and Nginx.
And we could point our DNS records to the new rig without anyone noticing a thing.
One thing we didn't take into consideration (but should have) was that some applications shared hostnames.
When an ingress was created for a shared hostname, Nginx would stop forwarding requests for these hosts to Istio Ingressgateway, resulting in non-migrated applications not getting any traffic.
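The reason is Nginx's host-based routing: once any Ingress claims a hostname, Nginx answers for that entire host, and paths it doesn't know about get a 404 instead of falling through to the catch-all rule pointing at Istio. A hedged illustration (the application names and hostname are made up):

```yaml
# Once app-a (migrated) created this Ingress, Nginx owned shared.example.com
# outright. Requests for app-b, still served by an Istio VirtualService on the
# same hostname, were no longer forwarded to istio-ingressgateway.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-a                       # hypothetical migrated application
spec:
  ingressClassName: nginx
  rules:
    - host: shared.example.com      # hostname also used by unmigrated app-b
      http:
        paths:
          - path: /app-a
            pathType: Prefix
            backend:
              service:
                name: app-a
                port:
                  number: 80
```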
Realizing this, we started migrating applications on the same hostname simultaneously too.
And within a couple of hours, all workloads were migrated and we had ourselves a brand spanking new service-mesh in production.
And then they all lived happily ever after...
Except that we had to clean up Istio's mess.
What was left after the party was a fully operational Istio control plane, a whole bunch of Istio CRDs, and a completely unused set of LoadBalancers. In addition, we had to clean up everything related to Istio in a whole lot of pipelines and components.