A kubernetes cluster is required to run crossplane. The initial deployment process can be summarised as:
- deploy local kubernetes cluster
- install crossplane, dependencies and apis
- use local cluster to deploy management cluster manifests to EKS
- add management cluster details to `~/.kube/config`
- repeat step 2 above on the management cluster
- add the existing VPC IDs to the management cluster manifests and redeploy into the management cluster
The result should be a management cluster that is running crossplane and managing itself (as well as other clusters).
This guide describes setting up a local microk8s cluster (in WSL2) to deploy crossplane, but it can be adapted to use any local kubernetes flavour, e.g. kind, k3s, minikube etc. Microk8s was chosen due to issues with Rancher Desktop interfering with the local `.kube/config` file.
It is recommended to create a local cluster solely for the crossplane deployment and delete it afterwards; stopping the cluster and uninstalling it is the simplest way to 'uncouple' the deployed resources from the temporary cluster (see section 6).
See MicroK8s docs for details.
TLDR:
cat /etc/wsl.conf
# check the [boot] section for `systemd=true`; if it is missing:
# echo -e "[boot]\nsystemd=true" | sudo tee /etc/wsl.conf
# install
sudo snap install microk8s --classic
sudo usermod -a -G microk8s $USER
sudo microk8s enable storage
sudo microk8s enable rbac
# if you want to add the microk8s config to your existing kubectl config, run `sudo microk8s config`
# and copy the cluster, context and user to your .kube/config file.
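One way to merge the exported config automatically is the standard KUBECONFIG merge/flatten technique (a sketch; the file paths are assumptions):

# export the microk8s kubeconfig to a separate file
sudo microk8s config > ~/.kube/microk8s.config
# merge both configs and flatten credentials into a single file
KUBECONFIG=~/.kube/config:~/.kube/microk8s.config kubectl config view --flatten > ~/.kube/config.merged
mv ~/.kube/config.merged ~/.kube/config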
Install using helm:
helm repo add crossplane-stable https://charts.crossplane.io/stable
helm repo update
helm install crossplane crossplane-stable/crossplane \
--set args='{"--enable-usages"}' \
--namespace crossplane-system --create-namespace
# check it's up and running:
kubectl get pods -n crossplane-system
A crossplane user has been created in AWS IAM. Create/use an access key and copy the access key ID & secret into a credentials file, then create a kubernetes secret from the file. Make SURE this file does NOT get committed to a git repository:
kubectl create secret generic aws-secret -n crossplane-system --from-file=creds=./aws-credentials.txt
For full details, see the AWS Quickstart, which has a section on setting up a kubernetes secret using an AWS Access Key linked to your account.
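The credentials file follows the standard AWS ini format, e.g. (placeholder values):

[default]
aws_access_key_id = <your-access-key-id>
aws_secret_access_key = <your-secret-access-key>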
The necessary providers, configurations and functions can be installed via the crossplane/dependencies.yaml file.
# Crossplane configurations for aws eks and networks install the necessary providers and functions.
kubectl apply -f crossplane/dependencies.yaml -n crossplane-system
kubectl apply -f crossplane/provider-config-aws.yaml -n crossplane-system
# Check that the providers are all installed and healthy:
kubectl get providers
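For reference, provider-config-aws.yaml will typically contain something like the following (a sketch, assuming the upbound aws provider family and the aws-secret created above):

apiVersion: aws.upbound.io/v1beta1
kind: ProviderConfig
metadata:
  name: default
spec:
  credentials:
    source: Secret
    secretRef:
      namespace: crossplane-system
      name: aws-secret
      key: creds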
# Install NoFrixion composite resource definitions and compositions (could package these at some point)
kubectl apply -f apis/aws/vpc/
kubectl apply -f apis/aws/eks-cluster/
- Note: some online examples refer to a 'monolithic' `provider-aws` package. This is being deprecated in June 2024. The aws provider family packages installed using the dependencies file above should be used instead.
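For illustration, a family provider entry in dependencies.yaml looks something like this (a sketch; the exact package name and version are assumptions, check the Upbound marketplace for current releases):

apiVersion: pkg.crossplane.io/v1
kind: Provider
metadata:
  name: provider-aws-eks
spec:
  package: xpkg.upbound.io/upbound/provider-aws-eks:v1.4.0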
# Set secret name to the appropriate value
SECRETNAME=$(kubectl -n crossplane-system get secret -o name | grep "-eks-cluster-auth")
kubectl --namespace crossplane-system get $SECRETNAME --output jsonpath="{.data.kubeconfig}" | base64 -d > kubeconfig.txt
Once you have the kubeconfig data, merge the Cluster and Context details into your existing `.kube/config` file. Change the user in the context to the azure user used for authenticating with the other clusters.
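For example, pointing the merged context at the existing user could look like this (a sketch; the context name `management` and user name `azure-user` are hypothetical placeholders):

# re-point the management context at the shared azure user entry
kubectl config set-context management --user=azure-user
kubectl config use-context management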
Change context to the management cluster and check connectivity (requires the VPN exit node), e.g. `kubectl get nodes` or `kubectl get namespaces`.
Change context to the management cluster and complete all steps in section 3 on the management cluster.
This step is very important. If a managed resource is deleted in crossplane, the real resource will be deleted in the cloud. It is possible to retain the cluster by patching the `deletionPolicy` of all resources to `Orphan` (a sketch is shown after the commands below), but it is easier to use a temporary cluster and delete it. E.g.:
sudo microk8s stop
sudo snap remove microk8s
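If you do need to retain the managed resources instead, something like this should orphan them before removing the cluster (a sketch, assuming the installed providers register their resources under the `managed` category):

# patch every managed resource so its cloud counterpart is left intact on deletion
for r in $(kubectl get managed -o name); do
  kubectl patch "$r" --type=merge -p '{"spec":{"deletionPolicy":"Orphan"}}'
done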