This tutorial demonstrates how to deploy RadonDB ClickHouse on Kubernetes.

### Step 1: Add Helm Repository

Add and update the Helm repository.

```bash
$ helm repo add <repoName> https://radondb.github.io/radondb-clickhouse-kubernetes/
$ helm repo update
```

**Expected output:**

```shell
$ helm repo add ck https://radondb.github.io/radondb-clickhouse-kubernetes/
"ck" has been added to your repositories

$ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "ck" chart repository
Update Complete. ⎈Happy Helming!⎈
```

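If you want to confirm that the repository was added correctly before installing anything, `helm search repo` lists the charts it provides. A quick optional check, using the example alias `ck` from above (substitute your own `<repoName>`):

```bash
# List all charts available from the newly added repository.
$ helm search repo ck
```
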
### Step 2: Install RadonDB ClickHouse Operator

```bash
$ helm install --generate-name -n <Namespace> <repoName>/<appName>
```

**Expected output:**

```shell
$ helm install clickhouse-operator ck/clickhouse-operator -n kube-system
NAME: clickhouse-operator
LAST DEPLOYED: Wed Aug 17 14:43:44 2021
NAMESPACE: kube-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
```

> **Notice**
>
> This command installs ClickHouse Operator in the `kube-system` namespace. ClickHouse Operator only needs to be installed once in a Kubernetes cluster.

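Before installing the cluster, you can optionally verify that the operator is running and that its custom resources are registered. A minimal sketch, assuming the default naming shown in the output above:

```bash
# The operator Pod should be in the Running state in kube-system.
$ kubectl get pods -n kube-system | grep clickhouse-operator

# The operator registers ClickHouse-related custom resource definitions (CRDs).
$ kubectl get crd | grep clickhouse
```
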
### Step 3: Install RadonDB ClickHouse Cluster

```bash
$ helm install --generate-name <repoName>/clickhouse-cluster -n <Namespace>
```

**Expected output:**

```shell
$ helm install clickhouse ck/clickhouse-cluster -n test
NAME: clickhouse
LAST DEPLOYED: Wed Aug 17 14:48:12 2021
NAMESPACE: test
STATUS: deployed
REVISION: 1
TEST SUITE: None
```

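You can also ask Helm itself whether the release was created successfully. A small optional check, assuming the release name `clickhouse` and the namespace `test` used in the example above:

```bash
# The release should be listed with the status "deployed".
$ helm list -n test

# Print the stored release information again at any time.
$ helm status clickhouse -n test
```
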
### Step 4: Verification

#### Check the Status of Pod

```bash
$ kubectl get pods -n <Namespace>
```

**Expected output:**

```shell
$ kubectl get pods -n test
NAME                             READY   STATUS    RESTARTS   AGE
pod/chi-ClickHouse-replicas-0-0-0   2/2     Running   0          3m13s
pod/chi-ClickHouse-replicas-0-1-0   2/2     Running   0          2m51s
pod/zk-clickhouse-cluster-0         1/1     Running   0          3m13s
pod/zk-clickhouse-cluster-1         1/1     Running   0          3m13s
pod/zk-clickhouse-cluster-2         1/1     Running   0          3m13s
```

#### Check the Status of Service

```bash
$ kubectl get service -n <Namespace>
```

**Expected output:**

```shell
$ kubectl get service -n test
NAME                                   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
service/chi-ClickHouse-replicas-0-0    ClusterIP   None            <none>        8123/TCP,9000/TCP,9009/TCP   2m53s
service/chi-ClickHouse-replicas-0-1    ClusterIP   None            <none>        8123/TCP,9000/TCP,9009/TCP   2m36s
service/clickhouse-ClickHouse          ClusterIP   10.96.137.152   <none>        8123/TCP,9000/TCP            3m14s
service/zk-client-clickhouse-cluster   ClusterIP   10.107.33.51    <none>        2181/TCP,7000/TCP            3m13s
service/zk-server-clickhouse-cluster   ClusterIP   None            <none>        2888/TCP,3888/TCP            3m13s
```

### Use Pod

You can directly connect to the ClickHouse Pod with `kubectl`.

```bash
$ kubectl exec -it <podName> -n <Namespace> -- clickhouse-client --user=<userName> --password=<userPassword>
```

**Expected output:**

```shell
$ kubectl get pods | grep clickhouse
chi-ClickHouse-replicas-0-0-0   1/1     Running   0          8m50s
chi-ClickHouse-replicas-0-1-0   1/1     Running   0          8m50s

$ kubectl exec -it chi-ClickHouse-replicas-0-0-0 -- clickhouse-client -u clickhouse --password=c1ickh0use0perator --query='select hostName()'
chi-ClickHouse-replicas-0-0-0
```

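Once connected, you can run a quick read/write test to confirm the cluster is healthy. The sketch below is only an illustration: the database and table names are arbitrary, and it reuses the example Pod name, namespace, and credentials shown above.

```bash
# Create a small table, insert one row, and read it back in a single session.
$ kubectl exec -it chi-ClickHouse-replicas-0-0-0 -n test -- clickhouse-client \
    -u clickhouse --password=c1ickh0use0perator --multiquery --query="
      CREATE DATABASE IF NOT EXISTS demo;
      CREATE TABLE IF NOT EXISTS demo.test (id UInt64, msg String) ENGINE = MergeTree ORDER BY id;
      INSERT INTO demo.test VALUES (1, 'hello');
      SELECT * FROM demo.test;"
```
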
### Use Service

The Service `spec.type` is `ClusterIP`, so you need to create a client to connect to it.

**Expected output:**

```shell
$ kubectl get service | grep clickhouse
clickhouse-ClickHouse         ClusterIP   10.96.137.152   <none>   9000/TCP,8123/TCP   12m
chi-ClickHouse-replicas-0-0   ClusterIP   None            <none>   9000/TCP,8123/TCP   12m
chi-ClickHouse-replicas-0-1   ClusterIP   None            <none>   9000/TCP,8123/TCP   12m

$ kubectl exec -it clickhouse-ClickHouse -- clickhouse-client -u clickhouse --password=c1ickh0use0perator -h 10.96.137.152 --query='select hostName()'
chi-ClickHouse-replicas-0-1-0
$ kubectl exec -it clickhouse-ClickHouse -- clickhouse-client -u clickhouse --password=c1ickh0use0perator -h 10.96.137.152 --query='select hostName()'
chi-ClickHouse-replicas-0-0-0
```

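If you prefer to reach ClickHouse from your local machine instead of from another Pod, you can forward the Service port with `kubectl port-forward` and use the HTTP interface. A sketch, assuming the Service name `clickhouse-ClickHouse`, the namespace `test`, and the example credentials shown above:

```bash
# Forward the ClickHouse HTTP port (8123) of the Service to localhost.
$ kubectl port-forward service/clickhouse-ClickHouse 8123:8123 -n test

# In a second terminal, send a query over the HTTP interface.
$ curl "http://localhost:8123/" --user clickhouse:c1ickh0use0perator --data-binary "SELECT hostName()"
```
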
## Persistence

By default, the PVC is mounted on the `/var/lib/clickhouse` directory.

You should create a PVC that is automatically bound to a suitable PersistentVolume (PV).

> **Notice**
>
> A PVC can be bound to different PVs, and different PVs deliver different performance.
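
As an illustration of how such a claim might look, the manifest below creates a PVC that Kubernetes binds to a matching PV (or provisions dynamically through a StorageClass). The claim name, size, and storage class are placeholders for this sketch, not values required by the chart.

```bash
# Create a sample PVC; adjust the name, namespace, size, and storageClassName to your environment.
$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: clickhouse-data        # hypothetical claim name
  namespace: test
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi            # choose a size that fits your data
  storageClassName: standard   # use a StorageClass that exists in your cluster
EOF
```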