Learning more about blockchain and deep learning with GPUs and K8s.
For more info on specific components, see the additional docs.
- Ubuntu 18.04
- docker 20+
- k3sup 0.9+
- helm 3.5+
- kubectl 1.20+
- argocd 1.8+
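If any of these CLI tools are missing, the upstream install scripts are the quickest route. The commands below are one possible approach (they follow each project's documented install path, but check the projects' docs for current instructions and versions):

```bash
# k3sup (official install script)
curl -sLS https://get.k3sup.dev | sh
sudo install k3sup /usr/local/bin/

# helm 3 (official install script)
curl -fsSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash

# kubectl (pin to a version that matches your cluster)
curl -LO "https://dl.k8s.io/release/$(curl -Ls https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
sudo install -m 0755 kubectl /usr/local/bin/kubectl

# argocd CLI
curl -sSL -o argocd https://github.com/argoproj/argo-cd/releases/latest/download/argocd-linux-amd64
sudo install -m 0755 argocd /usr/local/bin/argocd
```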
Initializing the first node and ArgoCD consists of the following:
```bash
cd k3s/
./install-server.sh ethernetes.brxblx.io

# check the state of the helm release and apps for ArgoCD
export KUBECONFIG=${HOME}/.k3s/config.yaml
helm ls -n e8s-system
kubectl get pod -n e8s-system
kubectl get app -n e8s-system
```

From there you can add secrets and upgrade ArgoCD to use an Ingress with a TLS cert:
```bash
./bootstrap-server.sh ethernetes.brxblx.io
```
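To confirm the upgrade took effect, check the Ingress and certificate objects. This sketch assumes cert-manager is issuing the TLS cert and that the release lives in the `e8s-system` namespace; adjust both to your setup:

```bash
# Confirm the ArgoCD Ingress exists and (if cert-manager manages TLS) the Certificate is ready.
# Namespace and the cert-manager assumption are specific to this sketch.
kubectl get ingress -n e8s-system
kubectl get certificate -n e8s-system
```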
If you need to add a worker node:

```bash
./add-node.sh my-new-host ethernetes.brxblx.io
```

Note: all args after the second positional argument are passed to the
`k3sup join` command in case you want to provide additional configuration.
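For example, to join a host that needs a specific SSH user and key (the host name is illustrative; `--user` and `--ssh-key` are standard `k3sup join` flags):

```bash
./add-node.sh my-new-host ethernetes.brxblx.io --user ubuntu --ssh-key ~/.ssh/id_rsa
```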
First, prepare the miner config for a particular host or node:
```bash
numGPUs=2
hostname=brx-01a
```

Any time a new host is configured by the NVIDIA GPU Operator, a miner using 1 GPU is
automatically deployed to the existing ethernetes cluster via an ExtendedDaemonSet.
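To sanity-check that the default miner landed on a freshly configured host, you can filter pods by node name. The `ethereum` namespace here is an assumption; use whichever namespace the ExtendedDaemonSet deploys miners into:

```bash
# List miner pods scheduled onto the new host (namespace is an assumption)
kubectl get pods -n ethereum -o wide --field-selector spec.nodeName=${hostname}
```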
You can customize the number of GPUs per node by appending to the
ExtendedDaemonsetSettings managed via ArgoCD and Helm:
```bash
git checkout main
git pull --rebase origin main
git checkout -b miner-${hostname}

cat <<EOF >> gitops/deploys/application-miner-hayden-desktop.yaml
  - name: ${hostname}
    nodeSelector:
      kubernetes.io/hostname: ${hostname}
    gpus: ${numGPUs}
EOF

git add gitops/deploys/application-miner-hayden-desktop.yaml
git commit -m "Deploying a New Miner to ${hostname}"
gh pr create --web --base main
```

Once the PR is merged, the miner will be deployed via ArgoCD:
```bash
argocd app sync deploys
argocd app sync miner
```

Note: This process will ideally be automated by a `MiningSet` controller which auto-discovers
the number of GPUs per node (or node selectors) and schedules miners via an `ExtendedDaemonSet`.
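Until such a controller exists, the per-node GPU counts it would need are already advertised by the GPU Operator as node capacity, so you can discover them manually. The custom-columns query below is one way to do it:

```bash
# Print each node's allocatable nvidia.com/gpu count
# (empty until the GPU Operator has advertised GPUs on that node)
kubectl get nodes -o custom-columns='NODE:.metadata.name,GPUS:.status.allocatable.nvidia\.com/gpu'
```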
You can deploy to a particular host on your own GPU-enabled cluster using Helm and a StatefulSet:
```bash
kubectl create ns ethereum

cat <<EOF > miner.yaml
miningPools:
- us1.ethermine.org
- us2.ethermine.org
- eu1.ethermine.org
nodeSelector:
  kubernetes.io/hostname: ${hostname}
resources:
  limits:
    nvidia.com/gpu: ${numGPUs}
EOF

helm upgrade --install ethereum-miner charts/miner \
  --wait \
  -n ethereum \
  -f miner.yaml

helm test --logs -n ethereum ethereum-miner
```

Application manifests for ArgoCD live underneath the gitops/ folder
of this repo. You can access ArgoCD via the CLI:
```bash
argocd login --grpc-web cd.brxblx.io:443
argocd app list
```

gitops/bootstrap/ describes the cluster namespaces, controllers,
and operators needed for ingress, storage and logs, TLS, and leveraging GPUs:
```bash
argocd app get bootstrap
```

gitops/deploys/ describes the namespaces and manifests for deploying the
monitoring stack (i.e. DataDog and Elastic), and the deployments of Ethereum miners,
private blockchain nodes, web apps, and more:

```bash
argocd app get deploys
```

Visit cd.brxblx.io to explore and manage the apps via the UI:
You can see logs from all the miners in the existing cluster here:
Explore the cluster's logs at search.brxblx.io.
Using DataDog, it's easy to visualize the health of the miners with respect to the GPUs and system resources:
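If you want to spot-check a GPU directly rather than going through the dashboards, exec'ing into a miner pod works when the image bundles nvidia-smi (the namespace and pod name below are illustrative; substitute one from `kubectl get pods`):

```bash
# Show live GPU utilization and temperature from inside a miner pod
# (pod name is illustrative; the container image must include nvidia-smi)
kubectl exec -n ethereum ethereum-miner-0 -- nvidia-smi
```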


