These manifests are supported by 4 Raspberry Pi 4s with 4GB RAM, a Beelink Mini S with an N5095 CPU and 8GB RAM, and a Synology DS723+.
- download the latest version of raspberry pi OS (e.g. https://www.raspberrypi.com/software/operating-systems/)
- flash the sd card
- create an empty ssh file under /boot/ to enable ssh
touch ssh
- connect via ssh
ssh pi@<pi_ip>
- configure a static ip via the router; you'll also want to do this via the /etc/dhcpcd.conf file (see below)
- set the password
passwd
- set the hostname (e.g. master/worker)
sudo vi /etc/hostname
sudo vi /etc/hosts
- upgrade / reboot
sudo apt-get update && sudo apt-get -y dist-upgrade && sudo reboot
- enable container features by adding the following to /boot/cmdline.txt:
cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory
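if you'd rather not edit the file by hand, a one-liner along these lines should work (a sketch; cmdline.txt must stay a single line, so back it up first):
sudo cp /boot/cmdline.txt /boot/cmdline.txt.bak
sudo sed -i '1 s/$/ cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory/' /boot/cmdline.txt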
- edit /etc/dhcpcd.conf to set the static ip:
static ip_address=<pi_ip>/24
static routers=<router_ip>
static domain_name_servers=<router_ip>
- configure poe hat fan control via /boot/config.txt and use /opt/vc/bin/vcgencmd measure_temp to check the temp (the config may differ depending on the poe hat used):
dtoverlay=i2c-fan,emc2301
- the nfs-common package must be installed on all nodes no matter what; the node acting as the primary instead needs the nfs-kernel-server package.
- enable the following on the synology nas:
- follow the nfs-subdir-external-provisioner steps below for automated provisioning
- list all connected devices and find the correct drive:
sudo fdisk -l
- create an ext4 filesystem on the partition
sudo mkfs.ext4 /dev/sda1
- mount the disk manually
sudo mkdir <nfs_path>
sudo chown -R pi:pi <nfs_path>/
sudo mount /dev/sda1 <nfs_path>
- configure disk to automatically mount
- find the uuid of your mounted drive
sudo blkid
- add the following with the correct uuid to /etc/fstab
UUID=23e4863c-6568-4dd1-abde-0b128a81b0ba <nfs_path> ext4 defaults 0 0
- reboot and make sure the drive has mounted
df -ha /dev/sda1
- configure nfs
- install nfs on master
sudo apt-get install nfs-kernel-server -y
- add the following to /etc/exports
<nfs_path> *(rw,no_root_squash,insecure,async,no_subtree_check,anonuid=1000,anongid=1000)
- start the nfs server
sudo exportfs -ra
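you can confirm the share is being exported with:
sudo exportfs -v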
- install nfs on workers
sudo apt-get install nfs-common -y
- create directory to mount nfs share
sudo mkdir <nfs_path>
sudo chown -R pi:pi <nfs_path>/
- configure the share to mount automatically by adding the master's ip and export path to /etc/fstab
sudo vi /etc/fstab
<master_ip>:<nfs_path> <nfs_path> nfs rw 0 0
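to test the new fstab entry without rebooting:
sudo mount -a
df -h <nfs_path>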
- ssh to master node
ssh pi@kube-master
- if you're not root, you'll want to make the k3s config file /etc/rancher/k3s/k3s.yaml readable. you'll also want to tell k3s not to deploy its default load balancer (servicelb) and proxy (traefik), since we'll install metallb as the load balancer and nginx as the proxy manually later on. finally, run the k3s installer:
export K3S_KUBECONFIG_MODE="644"; export INSTALL_K3S_EXEC="--disable servicelb --disable traefik --kubelet-arg=container-log-max-files=5 --kubelet-arg=container-log-max-size=50Mi --kubelet-arg=image-gc-high-threshold=85 --kubelet-arg=image-gc-low-threshold=80"; curl -sfL https://get.k3s.io | sh -
- verify the master is up
sudo systemctl status k3s
kubectl get nodes -o wide
kubectl get pods -A -o wide
- taint the master node to avoid deploying to it / save resources for orchestration
kubectl taint node kube-master node-role.kubernetes.io/master:NoSchedule
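confirm the taint was applied:
kubectl describe node kube-master | grep Taints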
- save the access token to configure the agents
sudo cat /var/lib/rancher/k3s/server/node-token
- for my x86 worker node, a beelink mini s with an n5095, i first had to install apparmor:
sudo apt-get install apparmor apparmor-utils
- ssh to worker node
ssh pi@kube-worker1
- set permissions on config file, set the endpoint for the agent, set the token saved from configuring the k3s master node, and run the k3s installer
export K3S_KUBECONFIG_MODE="644"; export K3S_URL="https://<master_ip>:6443"; export K3S_TOKEN=<master_node_token>; export INSTALL_K3S_EXEC="--kubelet-arg=container-log-max-files=5 --kubelet-arg=container-log-max-size=50Mi --kubelet-arg=image-gc-high-threshold=85 --kubelet-arg=image-gc-low-threshold=80"; curl -sfL https://get.k3s.io | sh -
- verify agent is up
sudo systemctl status k3s-agent
kubectl get nodes -o wide
kubectl get pods -A -o wide
- label the worker nodes
kubectl label node <worker_name> node-role.kubernetes.io/node=""
- if mixing cpu architectures, include nodeSelector or nodeAffinity to ensure workloads get deployed to the relevant node (example below).
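for instance, a pod spec can pin a workload to a given architecture via the standard kubernetes.io/arch node label (a generic sketch, not one of this repo's manifests):
spec:
  template:
    spec:
      nodeSelector:
        kubernetes.io/arch: amd64 # use arm64 to target the raspberry pis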
- install kubectl on your local computer if it's not already installed (see the kubectl Install Guide)
- create the necessary directory and file
mkdir ~/.kube/
touch ~/.kube/config
- copy the file using scp
scp pi@<master_ip>:/etc/rancher/k3s/k3s.yaml ~/.kube/config
- you can either simply edit the config file and replace 127.0.0.1 with the ip address of the master node, or use sed:
sed -i '' 's/127\.0\.0\.1/192\.168\.1\.1/g' ~/.kube/config
- install envsubst; the steps depend on your local os
- create a .env file
touch .env
- copy the below into it with the correct values:
# hosts
export UNIFI_HOST="blah"
export FILEBROWSER_HOST="blah"
export PLEX_HOST="blah"
# internal ips
export METAL_LB_IP1="blah"
export METAL_LB_IP11="blah"
export NFS_IP="blah"
# secrets
export NINJAM_USER="blah"
export NINJAM_PASSWORD="blah"
export FILEBROWSER_USER="blah"
export FILEBROWSER_PW="blah"
export MONGO_PASS="blah"
export SOULSEEK_VPN_KEY=$(echo -n "blah" | base64)
export TRANSMISSION_VPN_KEY=$(echo -n "blah" | base64)
export JACKETT_VPN_KEY=$(echo -n "blah" | base64)
export TS_CLIENT_ID="blah"
export TS_CLIENT_SECRET="blah"
- make sure to source .env whenever a k8s resource needs creds:
source .env
- example cmds:
envsubst < media/unifi/unifi.statefulset.yml | kubectl apply -f -
- apply the metallb manifest, which includes the namespace, controller deployment, speaker daemonset and necessary service accounts for the controller and speaker, along with the RBAC permissions everything needs to function
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v<latest_vers>/config/manifests/metallb-native.yaml
- apply the CRDs, which indicate what protocol (e.g. layer2) and ips to use (sketched below):
envsubst < metallb/config.yml | kubectl apply -f -
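for reference, metallb/config.yml likely contains resources along these lines (a sketch using metallb's IPAddressPool and L2Advertisement CRDs; the repo's actual file may differ):
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default-pool
  namespace: metallb-system
spec:
  addresses:
  - ${METAL_LB_IP1}/32
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default
  namespace: metallb-system
spec:
  ipAddressPools:
  - default-pool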
- install helm
- add the nginx repo / update repo
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx; helm repo update
- install nginx
helm install ingress-nginx ingress-nginx/ingress-nginx --namespace ingress-nginx --create-namespace
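with metallb already running, the ingress controller's LoadBalancer service should pick up an EXTERNAL-IP from the pool; confirm with:
kubectl get svc -n ingress-nginx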
- add the cert-manager repo / update repo
helm repo add jetstack https://charts.jetstack.io; helm repo update
- install cert-manager
helm install cert-manager jetstack/cert-manager --namespace cert-manager --create-namespace --version <latest_vers> --set startupapicheck.timeout=5m --set installCRDs=true --set webhook.hostNetwork=true --set webhook.securePort=10260
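verify the cert-manager pods come up before creating issuers:
kubectl get pods -n cert-manager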
- configure the certificate issuers; be sure to forward port 80 on the router for the http-01 cert challenge
$ cat <<EOF | kubectl apply -f -
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    email: <EMAIL>
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt-staging
    solvers:
    - http01:
        ingress:
          ingressClassName: nginx
EOF
$ cat <<EOF | kubectl apply -f -
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    email: <EMAIL>
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
    - http01:
        ingress:
          ingressClassName: nginx
EOF
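the ingress manifests then request a cert by referencing an issuer via the standard cert-manager annotation, e.g. (a sketch; the repo's ingress files may name the secret differently):
metadata:
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  tls:
  - hosts:
    - ${UNIFI_HOST}
    secretName: unifi-tls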
- create namespace
kubectl apply -f unifi/unifi.ns.yml
- apply pvc
kubectl apply -f unifi/unifi.pvc.yml
- apply service, statefulset and ingress resources
envsubst < unifi/unifi.service.yml | kubectl apply -f -
kubectl apply -f unifi/unifi.statefulset.yml
envsubst < unifi/unifi.ingress.yml | kubectl apply -f -
- allow internal access by sshing to the router (e.g. edgerouter x example below)
configure
set system static-host-mapping host-name <sub-domain> inet ${METAL_LB_IP1}
commit
save
- allow external access by forwarding 443 (TCP) for nginx on the router via its GUI
- create namespace
kubectl apply -f filebrowser/filebrowser.ns.yml
- apply pvc
kubectl apply -f filebrowser/filebrowser.pvc.yml
- apply service, deployment and ingress resources
envsubst < filebrowser/filebrowser.service.yml | kubectl apply -f -
kubectl apply -f filebrowser/filebrowser.deployment.yml
envsubst < filebrowser/filebrowser.ingress.yml | kubectl apply -f -
- allow internal access by sshing to router
configure
set system static-host-mapping host-name <sub-domain> inet ${METAL_LB_IP1}
commit
save
- create namespace
kubectl apply -f media/media.ns.yml
- apply pvc(s)
kubectl apply -f media/media-config.pvc.yml
kubectl apply -f media/media-data.pvc.yml
- apply ingress
envsubst < media/media.ingress.yml | kubectl apply -f -
- create secret for vpn
envsubst < media/vpn_secret.yml | kubectl apply -f -
- apply transmission resources
kubectl apply -f media/transmission/transmission.service.yml
kubectl apply -f media/transmission/transmission.deployment.yml
- create a file called ServerConfig.json in <nfs_path>/jackett/Jackett with the following:
{ "BasePathOverride": "/jackett" }
- apply jackett resources
kubectl apply -f media/jackett/jackett.service.yml
envsubst < media/jackett/jackett.deployment.yml | kubectl apply -f -
- create a file called config.xml in <nfs_path>/sonarr/ with the following:
<Config> <UrlBase>/sonarr</UrlBase> </Config>
- apply sonarr resources
kubectl apply -f media/sonarr/sonarr.service.yml -n media
kubectl apply -f media/sonarr/sonarr.deployment.yml -n media
- create a file called config.xml in <nfs_path>/radarr/ with the following:
<Config> <UrlBase>/radarr</UrlBase> </Config>
- apply radarr resources
kubectl apply -f media/radarr/radarr.service.yml -n media
kubectl apply -f media/radarr/radarr.deployment.yml -n media
- configuring jackett
- add indexers to jackett
- keep notes of the category #s as those are used in radarr and sonarr
- configuring radarr and sonarr
- configure the connection to transmission in settings under Download Client > + (add transmission) using the hostname and port transmission.media:80
- add indexers in settings under Indexers > + (add indexer)
- add the URL (http://media.tools/jackett/api/v2.0/indexers/<name>/results/torznab/), the API key (found in jackett) and categories (e.g. 2000 for movies and 5000 for tv)
this is an optional step if you'd like the creation of persistent volume claims to be automated.
- add the nfs-provisioner repo
helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner
- ensure the correct values are present in the nfs-provisioner/*.values.yml file(s)
- install nfs-provisioner for each respective nfs path:
envsubst < nfs-provisioner/video.storage.values.yml | helm install nfs-subdir-external-provisioner-video nfs-subdir-external-provisioner/nfs-subdir-external-provisioner --namespace nfs-provisioner --values -
envsubst < nfs-provisioner/rpik3s-config.storage.values.yml | helm install nfs-subdir-external-provisioner-rpik3s nfs-subdir-external-provisioner/nfs-subdir-external-provisioner --namespace nfs-provisioner --values -
envsubst < nfs-provisioner/music.storage.values.yml | helm install nfs-subdir-external-provisioner-music nfs-subdir-external-provisioner/nfs-subdir-external-provisioner --namespace nfs-provisioner --values -
- finally, apply pvcs with the appropriate storageClass (e.g. nfs-rpik3s / nfs-video / nfs-music) and watch them provision automatically; a minimal example follows
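a minimal pvc sketch, assuming the nfs-video storage class exists (name, namespace and size are placeholders):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-video-pvc # hypothetical name
  namespace: media
spec:
  storageClassName: nfs-video
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi # hypothetical size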
install system-upgrade-controller
- apply system-upgrade-controller
kubectl apply -f system-upgrade/system-upgrade-controller.yml
- taint the master node to allow the controller to run:
kubectl taint node kube-master CriticalAddonsOnly=true:NoExecute
- confirm taint(s):
kubectl get node kube-master -o=jsonpath='{.spec.taints}'
- when ready to upgrade, update the images used in the system-upgrade/config.yml file and then apply (a sketch of a plan is below):
kubectl apply -f system-upgrade/config.yml
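for reference, config.yml presumably contains system-upgrade-controller Plan resources along these lines (a sketch; the repo's actual plans and pinned version will differ):
apiVersion: upgrade.cattle.io/v1
kind: Plan
metadata:
  name: server-plan # hypothetical plan name
  namespace: system-upgrade
spec:
  concurrency: 1
  cordon: true
  nodeSelector:
    matchExpressions:
    - key: node-role.kubernetes.io/master
      operator: Exists
  serviceAccountName: system-upgrade
  tolerations:
  - key: CriticalAddonsOnly
    operator: Exists
    effect: NoExecute
  upgrade:
    image: rancher/k3s-upgrade
  version: v1.28.4+k3s1 # example version only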
- apply ninjam-server
kubectl apply -f ninjam-server/ninjam.pvc.yml
envsubst < ninjam-server/ninjam.service.yml | kubectl apply -f -
envsubst < ninjam-server/ninjam.ingress.yml | kubectl apply -f -
envsubst < ninjam-server/ninjam.configmap.yml | kubectl apply -f -
kubectl apply -f ninjam-server/ninjam.deployment.yml
kubectl apply -f ninjam-server/ninjam.cronjob.yml
- apply soulseek
envsubst < soulseek/soulseek.service.yml | kubectl apply -f -
kubectl apply -f soulseek/soulseek.deployment.yml
- apply changedetection
kubectl apply -f changedetection/change.ns.yml
kubectl apply -f changedetection/change.pvc.yml
envsubst < changedetection/change.service.yml | kubectl apply -f -
kubectl apply -f changedetection/selenium.service.yml
kubectl apply -f changedetection/selenium.deployment.yml
kubectl apply -f changedetection/change.deployment.yml
- add repo / update
helm repo add tailscale https://pkgs.tailscale.com/helmcharts; helm repo update
- install tailscale
envsubst < provision-cluster/tailscale/tailscale.values.yml | helm upgrade tailscale-operator tailscale/tailscale-operator -n tailscale --values -
k3s client/server certs are valid for 365 days; any that are expired, or within 90 days of expiring, are automatically renewed every time k3s starts. in other words, if the certs lapse, access to the cluster will cease until the local kube-config certs are updated.
to recover, disable ntp, wind the clock back into the renewal window so the certs are valid again, and restart k3s to trigger the renewal:
sudo systemctl stop k3s.service
hwclock --verbose
sudo timedatectl set-ntp 0
sudo systemctl stop systemd-timesyncd.service
sudo systemctl status systemd-timesyncd.service
sudo date $(date "+%m%d%H%M%Y" --date="90 days ago")
sudo systemctl start k3s.service
to re-enable ntp once the certs have rotated:
sudo systemctl stop k3s.service
sudo systemctl start systemd-timesyncd.service
sudo date $(date "+%m%d%H%M%Y" --date="now")
sudo timedatectl set-ntp 1
make a copy of /var/lib/rancher/k3s/server/ before uninstalling
- master
sudo /usr/local/bin/k3s-uninstall.sh
- workers
sudo /usr/local/bin/k3s-agent-uninstall.sh
journalctl -u k3s.service -e # last logs of the server
journalctl -u k3s-agent.service -e # last logs of the agent