From 654879aebb69dd87eaebaa1425487b68c016dfe0 Mon Sep 17 00:00:00 2001
From: Benedikt Rollik
Date: Thu, 30 Jan 2025 14:36:56 +0100
Subject: [PATCH 1/3] feat(k8s): add tutorial for multi-az pvc migration

---
 .../index.mdx | 186 ++++++++++++++++++
 1 file changed, 186 insertions(+)
 create mode 100644 tutorials/migrate-k8s-persistent-volumes-to-multi-az/index.mdx

diff --git a/tutorials/migrate-k8s-persistent-volumes-to-multi-az/index.mdx b/tutorials/migrate-k8s-persistent-volumes-to-multi-az/index.mdx
new file mode 100644
index 0000000000..41c6ff8137
--- /dev/null
+++ b/tutorials/migrate-k8s-persistent-volumes-to-multi-az/index.mdx
@@ -0,0 +1,186 @@
+---
+meta:
+  title: Migrating persistent volumes in a multi-zone Scaleway Kapsule cluster
+  description: This tutorial provides information about how to migrate existing Persistent Volumes in a Scaleway Kapsule multi-zone cluster to enhance availability and fault tolerance.
+content:
+  h1: Migrating persistent volumes in a multi-zone Scaleway Kapsul cluster
+  paragraph: This tutorial provides information about how to migrate existing Persistent Volumes in a Scaleway Kapsule multi-zone cluster to enhance availability and fault tolerance.
+tags: kapsule elastic-metal migration persistent-volumes
+categories:
+  - kubernetes
+dates:
+  validation: 2025-01-30
+  posted: 2025-01-30
+---

Historically, Scaleway Kapsule clusters were single-zone, meaning workloads and their associated storage were confined to a single location. With the introduction of multi-zone support, distributing workloads across multiple zones can enhance availability and fault tolerance.

This tutorial provides a generalized approach to migrating Persistent Volumes (PVs) from one zone to another in a Scaleway Kapsule multi-zone cluster, applicable to various applications.

- A Scaleway account logged into the [console](https://console.scaleway.com)
- [Owner](/iam/concepts/#owner) status or [IAM permissions](/iam/concepts/#permission) allowing you to perform actions in the intended Organization
- [Created a Kapsule cluster](/kubernetes/how-to/create-cluster/) with multi-zone support enabled
- An existing `StatefulSet` using **Persistent Volumes** in a single-zone cluster.
- [kubectl](/kubernetes/how-to/connect-cluster-kubectl/) installed and configured to interact with your Kubernetes cluster
- [Scaleway CLI](/scaleway-cli/quickstart/) installed and configured
- Familiarity with Kubernetes Persistent Volumes, `StatefulSets`, and Storage Classes.

  **Backing up your data is crucial before making any changes.**
  Ensure you have a backup strategy in place. You can use tools like Velero for Kubernetes backups or manually copy data to another storage solution. Always verify the integrity of your backups before proceeding.

## Identify existing Persistent Volumes

1. Use `kubectl` to list the Persistent Volumes in your cluster:
   ```sh
   kubectl get pv
   ```

2. Identify the volumes attached to your StatefulSet and note their `PersistentVolumeClaim` (PVC) names and `StorageClass`.
   Example output:
   ```plaintext
   NAME         CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                STORAGECLASS   ZONE
   pvc-123abc   10Gi       RWO            Retain           Bound    default/my-app-pvc   scw-bssd       fr-par-1
   ```
   To find the `VOLUME_ID`, correlate this with the scw command output:
   ```
   scw instance volume list
   ```

## Create snapshots of your existing Persistent Volumes

Use the Scaleway CLI to create snapshots of your volumes.
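
Before snapshotting, you can cross-check which Scaleway volume backs each PV directly from Kubernetes. Below is a minimal sketch, assuming the volumes were provisioned by the Scaleway CSI driver, which typically records the zone and volume ID in the PV's `spec.csi.volumeHandle` as `<zone>/<volume-id>` (the exact format may vary with the driver version):

```sh
# Print each PV together with the volume handle set by the CSI driver.
# The ID part of the handle can be matched against `scw instance volume list`.
kubectl get pv -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.csi.volumeHandle}{"\n"}{end}'
```
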
1. Retrieve the volume ID associated with the Persistent Volume:
   ```sh
   scw instance volume list
   ```

2. Create a snapshot for each volume:
   ```sh
   scw instance snapshot create volume-id=<volume-id> name=my-app-snapshot
   ```

3. Verify snapshot creation:
   ```sh
   scw instance snapshot list
   ```

## Create multi-zone Persistent Volumes

Once the snapshots are available, create new volumes in different zones:

```sh
scw instance volume create name=my-app-volume-new size=10GB volume-type=b_ssd base-snapshot=<snapshot-id> zone=fr-par-2
```

Repeat this for each zone required.

  Choose zones based on your distribution strategy. Check Scaleway's [zone availability](/account/reference-content/products-availability/) for optimal placement.

## Update Persistent Volume Claims (PVCs)

  Deleting a PVC can lead to data loss if not managed correctly. Ensure your application is scaled down or data is backed up.

Modify your `PersistentVolumeClaims` to reference the newly created volumes.

1. Delete the existing PVC (PVCs are immutable and cannot be updated directly):
   ```sh
   kubectl delete pvc my-app-pvc
   ```

2. Create a new PVC with a multi-zone compatible `StorageClass`:
   ```yaml
   apiVersion: v1
   kind: PersistentVolumeClaim
   metadata:
     name: my-app-pvc
   spec:
     accessModes:
       - ReadWriteOnce
     storageClassName: "scw-bssd-multi-zone"
     resources:
       requests:
         storage: 10Gi
   ```

3. Apply the updated PVCs:
   ```sh
   kubectl apply -f my-app-pvc.yaml
   ```

## Reconfigure the StatefulSet to use multi-zone volumes

1. Edit the `StatefulSet` definition to use the newly created Persistent Volume Claims.
   Example `StatefulSet` configuration:

   ```yaml
   apiVersion: apps/v1
   kind: StatefulSet
   metadata:
     name: my-app
   spec:
     volumeClaimTemplates:
       - metadata:
           name: my-app-pvc
         spec:
           storageClassName: "scw-bssd-multi-zone"
           accessModes:
             - ReadWriteOnce
           resources:
             requests:
               storage: 10Gi
   ```

2. Apply the `StatefulSet` changes:
   ```sh
   kubectl apply -f my-app-statefulset.yaml
   ```

## Verify migration

1. Check that the `StatefulSet` pods are running in multiple zones:
   ```sh
   kubectl get pods -o wide
   ```

2. Ensure that the new Persistent Volumes are bound and correctly distributed across the zones:
   ```sh
   kubectl get pv
   ```

## Considerations for volume expansion

If you need to **resize the Persistent Volume**, ensure that the `StorageClass` supports volume expansion.

1. Check if the feature is enabled:
   ```sh
   kubectl get storageclass scw-bssd-multi-zone -o yaml | grep allowVolumeExpansion
   ```

2. If `allowVolumeExpansion: true` is present, you can modify your PVC:
   ```yaml
   spec:
     resources:
       requests:
         storage: 20Gi
   ```

3. Then apply the change:
   ```sh
   kubectl apply -f my-app-pvc.yaml
   ```

## Conclusion

You have successfully migrated your Persistent Volumes to a multi-zone Kapsule setup. Your `StatefulSet` is now distributed across multiple zones, improving resilience and availability.

For further optimization, consider implementing [Pod anti-affinity](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity) rules to ensure an even distribution of workloads across zones.
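
A related mechanism is the `topologySpreadConstraints` field, which puts an explicit bound on how unevenly pods may be spread across zones. A minimal sketch, assuming a hypothetical `app: my-app` pod label; it belongs in the pod template's `spec` of the StatefulSet:

```yaml
# Keep the zone distribution of my-app pods within a skew of 1,
# falling back to best-effort scheduling if a zone cannot take a pod.
topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: topology.kubernetes.io/zone
    whenUnsatisfiable: ScheduleAnyway
    labelSelector:
      matchLabels:
        app: my-app
```
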
From 71f1ece2476c07acada3bab39963ce9f308d2f12 Mon Sep 17 00:00:00 2001
From: Benedikt Rollik
Date: Thu, 30 Jan 2025 14:48:32 +0100
Subject: [PATCH 2/3] feat(k8s): tuto

---
 .../index.mdx | 24 ++++++++++++++-----
 1 file changed, 18 insertions(+), 6 deletions(-)

diff --git a/tutorials/migrate-k8s-persistent-volumes-to-multi-az/index.mdx b/tutorials/migrate-k8s-persistent-volumes-to-multi-az/index.mdx
index 41c6ff8137..56637c9e4e 100644
--- a/tutorials/migrate-k8s-persistent-volumes-to-multi-az/index.mdx
+++ b/tutorials/migrate-k8s-persistent-volumes-to-multi-az/index.mdx
@@ -3,7 +3,7 @@ meta:
   title: Migrating persistent volumes in a multi-zone Scaleway Kapsule cluster
   description: This tutorial provides information about how to migrate existing Persistent Volumes in a Scaleway Kapsule multi-zone cluster to enhance availability and fault tolerance.
 content:
-  h1: Migrating persistent volumes in a multi-zone Scaleway Kapsul cluster
+  h1: Migrating persistent volumes in a multi-zone Scaleway Kapsule cluster
   paragraph: This tutorial provides information about how to migrate existing Persistent Volumes in a Scaleway Kapsule multi-zone cluster to enhance availability and fault tolerance.
 tags: kapsule elastic-metal migration persistent-volumes
 categories:
@@ -29,7 +29,7 @@ This tutorial provides a generalized approach to migrating Persistent Volumes (P

  **Backing up your data is crucial before making any changes.**
-  Ensure you have a backup strategy in place. You can use tools like Velero for Kubernetes backups or manually copy data to another storage solution. Always verify the integrity of your backups before proceeding.
+  Ensure you have a backup strategy in place. You can use tools like [Velero](/tutorials/k8s-velero-backup/) for Kubernetes backups or manually copy data to another storage solution. Always verify the integrity of your backups before proceeding.

## Identify existing Persistent Volumes
@@ -50,6 +50,13 @@
    scw instance volume list
    ```

+3. To find the `VOLUME_ID` associated with a PV, correlate it with the output of the following command:
+
+   ```sh
+   scw instance volume list
+   ```
+   Match the PV's details with the corresponding volume in the Scaleway Instance list to identify the correct `VOLUME_ID`.
+
## Create snapshots of your existing Persistent Volumes

Use the Scaleway CLI to create snapshots of your volumes.
@@ -91,12 +98,17 @@ Repeat this for each zone required.

Modify your `PersistentVolumeClaims` to reference the newly created volumes.

-1. Delete the existing PVC (PVCs are immutable and cannot be updated directly):
+1. Before deleting the existing PVC, scale down your application to prevent data loss:
+   ```sh
+   kubectl scale statefulset my-app --replicas=0
+   ```
+
+2. Delete the existing PVC (PVCs are immutable and cannot be updated directly):
    ```sh
    kubectl delete pvc my-app-pvc
    ```

-2. Create a new PVC with a multi-zone compatible `StorageClass`:
+3. Create a new PVC with a multi-zone compatible `StorageClass`:
   ```yaml
   apiVersion: v1
   kind: PersistentVolumeClaim
   metadata:
     name: my-app-pvc
   spec:
     accessModes:
       - ReadWriteOnce
     storageClassName: "scw-bssd-multi-zone"
     resources:
       requests:
         storage: 10Gi
   ```

-3. Apply the updated PVCs:
+4. Apply the updated PVCs:
    ```sh
    kubectl apply -f my-app-pvc.yaml
    ```

## Reconfigure the StatefulSet to use multi-zone volumes

1. Edit the `StatefulSet` definition to use the newly created Persistent Volume Claims.
-   Example `StatefulSet` configuration:
+   Example configuration:

   ```yaml
   apiVersion: apps/v1
   kind: StatefulSet
   metadata:
     name: my-app
   spec:
     volumeClaimTemplates:
       - metadata:
           name: my-app-pvc
         spec:
           storageClassName: "scw-bssd-multi-zone"
           accessModes:
             - ReadWriteOnce
           resources:
             requests:
               storage: 10Gi
   ```

2. Apply the `StatefulSet` changes:
   ```sh
   kubectl apply -f my-app-statefulset.yaml
   ```

From d8892d09b42c414ac8f792be5b0255d79e6db8bb Mon Sep 17 00:00:00 2001
From: Benedikt Rollik
Date: Thu, 13 Feb 2025 11:54:53 +0100
Subject: [PATCH 3/3] feat(k8s): tuto

---
 .../index.mdx | 81 ++++++++-----------
 1 file changed, 32 insertions(+), 49 deletions(-)

diff --git a/tutorials/migrate-k8s-persistent-volumes-to-multi-az/index.mdx b/tutorials/migrate-k8s-persistent-volumes-to-multi-az/index.mdx
index 56637c9e4e..6e08e985f5 100644
--- a/tutorials/migrate-k8s-persistent-volumes-to-multi-az/index.mdx
+++ b/tutorials/migrate-k8s-persistent-volumes-to-multi-az/index.mdx
@@ -13,12 +13,9 @@ dates:
   posted: 2025-01-30
 ---

-Historically, Scaleway Kapsule clusters were single-zone, meaning workloads and their associated storage were confined to a single location. With the introduction of multi-zone support, distributing workloads across multiple zones can enhance availability and fault tolerance.
-
-This tutorial provides a generalized approach to migrating Persistent Volumes (PVs) from one zone to another in a Scaleway Kapsule multi-zone cluster, applicable to various applications.
+Historically, Scaleway Kapsule clusters were single-zone, meaning workloads and their associated storage were confined to a single location. With the introduction of multi-zone support, distributing workloads across multiple zones can enhance availability and fault tolerance. This tutorial provides a generalized approach to migrating Persistent Volumes (PVs) from one zone to another in a Scaleway Kapsule multi-zone cluster, applicable to various applications.

-
- A Scaleway account logged into the [console](https://console.scaleway.com)
- [Owner](/iam/concepts/#owner) status or [IAM permissions](/iam/concepts/#permission) allowing you to perform actions in the intended Organization
- [Created a Kapsule cluster](/kubernetes/how-to/create-cluster/) with multi-zone support enabled
@@ -28,34 +25,28 @@
- Familiarity with Kubernetes Persistent Volumes, `StatefulSets`, and Storage Classes.

-  **Backing up your data is crucial before making any changes.**
-  Ensure you have a backup strategy in place. You can use tools like [Velero](/tutorials/k8s-velero-backup/) for Kubernetes backups or manually copy data to another storage solution. Always verify the integrity of your backups before proceeding.
+**Backing up your data is crucial before making any changes.**
+Ensure you have a backup strategy in place. You can use tools like [Velero](/tutorials/k8s-velero-backup/) for Kubernetes backups or manually copy data to another storage solution. Always verify the integrity of your backups before proceeding.

## Identify existing Persistent Volumes

1. Use `kubectl` to list the Persistent Volumes in your cluster:
-   ```sh
-   kubectl get pv
-   ```
-
-2. Identify the volumes attached to your StatefulSet and note their `PersistentVolumeClaim` (PVC) names and `StorageClass`.
-   Example output:
-   ```plaintext
-   NAME         CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                STORAGECLASS   ZONE
-   pvc-123abc   10Gi       RWO            Retain           Bound    default/my-app-pvc   scw-bssd       fr-par-1
-   ```
-   To find the `VOLUME_ID`, correlate this with the scw command output:
-   ```
-   scw instance volume list
-   ```
-
-3. To find the `VOLUME_ID` associated with a PV, correlate it with the output of the following command:
-
-   ```sh
-   scw instance volume list
-   ```
-   Match the PV's details with the corresponding volume in the Scaleway Instance list to identify the correct `VOLUME_ID`.
+```sh
+kubectl get pv
+```
+2. Identify the volumes attached to your StatefulSet and note their `PersistentVolumeClaim` (PVC) names and `StorageClass`. To find the `VOLUME_ID` associated with a PV, correlate the PV's details with the output of the following command:
+```sh
+scw instance volume list
+```
+3. Match the PV's details with the corresponding volume in the Scaleway Instance list to identify the correct `VOLUME_ID`.

+**Example output:**
+```plaintext
+NAME         CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                STORAGECLASS   ZONE
+pvc-123abc   10Gi       RWO            Retain           Bound    default/my-app-pvc   scw-bssd       fr-par-1
+```
+To find the `VOLUME_ID`, correlate this with the output of the `scw instance volume list` command.

## Create snapshots of your existing Persistent Volumes

Use the Scaleway CLI to create snapshots of your volumes.

1. Retrieve the volume ID associated with the Persistent Volume:
   ```sh
   scw instance volume list
   ```
-
2. Create a snapshot for each volume:
   ```sh
   scw instance snapshot create volume-id=<volume-id> name=my-app-snapshot
   ```
-
3. Verify snapshot creation:
   ```sh
   scw instance snapshot list
   ```

## Create multi-zone Persistent Volumes

Once the snapshots are available, create new volumes in different zones:

```sh
scw instance volume create name=my-app-volume-new size=10GB volume-type=b_ssd base-snapshot=<snapshot-id> zone=fr-par-2
```

Repeat this for each zone required.

-  Choose zones based on your distribution strategy. Check Scaleway's [zone availability](/account/reference-content/products-availability/) for optimal placement.
+Choose zones based on your distribution strategy. Check Scaleway's [zone availability](/account/reference-content/products-availability/) for optimal placement.

## Update Persistent Volume Claims (PVCs)

-  Deleting a PVC can lead to data loss if not managed correctly. Ensure your application is scaled down or data is backed up.
+Deleting a PVC can lead to data loss if not managed correctly. Ensure your application is scaled down or data is backed up.

Modify your `PersistentVolumeClaims` to reference the newly created volumes.

1. Before deleting the existing PVC, scale down your application to prevent data loss:
   ```sh
   kubectl scale statefulset my-app --replicas=0
   ```
-
-2. Delete the existing PVC (PVCs are immutable and cannot be updated directly):
+2. Delete the existing PVC:
   ```sh
   kubectl delete pvc my-app-pvc
   ```
-
-3. Create a new PVC with a multi-zone compatible `StorageClass`:
+3. Create a new PVC with a multi-zone compatible `StorageClass`. Here is an example YAML file:
   ```yaml
   apiVersion: v1
   kind: PersistentVolumeClaim
   metadata:
     name: my-app-pvc
   spec:
     accessModes:
       - ReadWriteOnce
     storageClassName: "scw-bssd-multi-zone"
     resources:
       requests:
         storage: 10Gi
   ```
-
4. Apply the updated PVCs:
   ```sh
   kubectl apply -f my-app-pvc.yaml
   ```

## Reconfigure the StatefulSet to use multi-zone volumes

-1. Edit the `StatefulSet` definition to use the newly created Persistent Volume Claims.
-   Example configuration:
-
+1. Edit the `StatefulSet` definition to use the newly created Persistent Volume Claims. Here is an example configuration (note the naming caveat below):
   ```yaml
   apiVersion: apps/v1
   kind: StatefulSet
   metadata:
     name: my-app
   spec:
     volumeClaimTemplates:
       - metadata:
           name: my-app-pvc
         spec:
           storageClassName: "scw-bssd-multi-zone"
           accessModes:
             - ReadWriteOnce
           resources:
             requests:
               storage: 10Gi
   ```
-
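
One caveat worth noting: a `volumeClaimTemplates` entry does not adopt an arbitrary pre-existing claim. StatefulSets derive PVC names as `<template-name>-<statefulset-name>-<ordinal>`, so for the (hypothetical) names above, replica 0 looks for a claim called `my-app-pvc-my-app-0`. If you pre-created the claim under a different name, recreate it to match before scaling the StatefulSet back up. A quick check:

```sh
# The claim StatefulSet "my-app" expects for its first replica,
# given the volume claim template named "my-app-pvc".
kubectl get pvc my-app-pvc-my-app-0
```
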
2. Apply the `StatefulSet` changes:
   ```sh
   kubectl apply -f my-app-statefulset.yaml
   ```

## Verify migration

1. Check that the `StatefulSet` pods are running in multiple zones:
   ```sh
   kubectl get pods -o wide
   ```
-
2. Ensure that the new Persistent Volumes are bound and correctly distributed across the zones:
   ```sh
   kubectl get pv
   ```

## Considerations for volume expansion

If you need to **resize the Persistent Volume**, ensure that the `StorageClass` supports volume expansion.

1. Check if the feature is enabled:
   ```sh
   kubectl get storageclass scw-bssd-multi-zone -o yaml | grep allowVolumeExpansion
   ```
-
2. If `allowVolumeExpansion: true` is present, you can modify your PVC:
   ```yaml
   spec:
     resources:
       requests:
         storage: 20Gi
   ```
-
3. Then apply the change:
   ```sh
   kubectl apply -f my-app-pvc.yaml
   ```

## Troubleshooting

- **Persistent Volume not bound:** Ensure that the `StorageClass` and zone settings are correct.
- **Application not scaling:** Check the StatefulSet configuration and PVC settings.
- **Data integrity issues:** Verify the integrity of your backups before proceeding with any changes.

## Conclusion

You have successfully migrated your Persistent Volumes to a multi-zone Kapsule setup. Your `StatefulSet` is now distributed across multiple zones, improving resilience and availability. For further optimization, consider implementing [Pod anti-affinity](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity) rules to ensure an even distribution of workloads across zones.
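
For illustration, here is a minimal sketch of such a rule, assuming the pods carry a hypothetical `app: my-app` label; it would go under the pod template's `spec.affinity` in the StatefulSet:

```yaml
# Prefer not to schedule two my-app pods into the same zone.
affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchLabels:
              app: my-app
          topologyKey: topology.kubernetes.io/zone
```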