Creating a ClickHouse data store on Linux on Power (ppc64le)
Install the ClickHouse operator and set up the data store.
After the initial creation of a ClickHouse cluster, or after the number of shards is increased, the cluster might need a few minutes to attach the shards.
Before you begin
Make sure that you prepared your online and offline host to pull images from the external repository. Also, ensure that you added the Helm repo.
For more information, see Preparing to install data store operators.
Installing ClickHouse online
Complete these steps to install the ClickHouse data store.
- Check whether the instana-clickhouse namespace exists in your cluster. If it does not exist, you can create it now.
  - Check whether the instana-clickhouse namespace exists.

    kubectl get namespace | grep clickhouse

  - If the instana-clickhouse namespace does not exist, create it now.

    kubectl create namespace instana-clickhouse
- If you are using a Red Hat OpenShift cluster, create an SCC before you deploy the ClickHouse operator. To create the SCC, create a YAML file, for example clickhouse-scc.yaml, with the SCC definition.

apiVersion: security.openshift.io/v1
kind: SecurityContextConstraints
metadata:
  name: clickhouse-scc
runAsUser:
  type: MustRunAs
  uid: 1001
seLinuxContext:
  type: RunAsAny
fsGroup:
  type: RunAsAny
allowHostDirVolumePlugin: false
allowHostNetwork: true
allowHostPorts: true
allowPrivilegedContainer: false
allowHostIPC: true
allowHostPID: true
readOnlyRootFilesystem: false
users:
  - system:serviceaccount:instana-clickhouse:clickhouse-operator
  - system:serviceaccount:instana-clickhouse:clickhouse-operator-altinity-clickhouse-operator
  - system:serviceaccount:instana-clickhouse:default
- Create image pull secrets for the ClickHouse image. Update the <download_key> value with your own download key.

kubectl create secret docker-registry instana-registry \
  --namespace=instana-clickhouse \
  --docker-username=_ \
  --docker-password=<download_key> \
  --docker-server=artifact-public.instana.io
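To confirm that the pull secret was created, you can list it. This check is optional and uses only standard kubectl commands.

kubectl get secret instana-registry -n instana-clickhouse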
- Create the SCC resource.
kubectl apply -f clickhouse-scc.yaml
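Optionally, on Red Hat OpenShift you can confirm that the SCC was created by listing it with the oc CLI.

oc get scc clickhouse-scc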
- Install the ClickHouse operator.

helm install clickhouse-operator instana/ibm-clickhouse-operator -n instana-clickhouse \
  --version=1.1.0 \
  --set operator.image.repository=artifact-public.instana.io/clickhouse-operator \
  --set operator.image.tag=v1.1.0 \
  --set operator.imagePullPolicy=Always \
  --set imagePullSecrets[0].name="instana-registry"
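As an optional check, you can confirm that the Helm release is installed and that the operator pod reaches the Running state; the exact pod name depends on your cluster.

helm list -n instana-clickhouse
kubectl get pods -n instana-clickhouse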
- Decide on a default password and a password for the ClickHouse user. In a later step, you create a ClickHouseInstallation CR in which you must specify the passwords in the spec.configuration.users section. You can use either a clear-text password, or a SHA256 or SHA1 hash of the password.
  - Clear-text: Generate a default_password and a clickhouse_user_password. Later, you use these passwords to create a secret. Run the following command twice to generate the two passwords.

    PASSWORD=$(base64 < /dev/urandom | head -c16); echo "Password: $PASSWORD";

  - SHA256 hash: Generate a default_password and a clickhouse_user_password, and their corresponding SHA256 hashes. Run the following command twice to generate the two passwords. Later, you use these passwords to create a secret.

    PASSWORD=$(base64 < /dev/urandom | head -c16); echo "Password: $PASSWORD"; HEX=$(echo -n "$PASSWORD" | sha256sum | tr -d '-'); echo "SHA256: $HEX"

  - SHA1 hash: Generate a default_password and a clickhouse_user_password, and their corresponding SHA1 hashes. Run the following command twice to generate the two passwords. Later, you use these passwords to create a secret.

    PASSWORD=$(base64 < /dev/urandom | head -c8); echo "$PASSWORD"; echo -n "$PASSWORD" | sha1sum | tr -d '-' | xxd -r -p | sha1sum | tr -d '-'
- Generate a Secret resource by using the passwords that you created in the previous step.

apiVersion: v1
kind: Secret
metadata:
  name: clickhouse-user-passwords
type: Opaque
data:
  default_password: <default_password>            # base64 encoded value for default_password
  clickhouse_user_password: <password>            # base64 encoded value for password
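The values in the data section must be base64 encoded. The following sketch shows one way to encode the generated values and create the Secret; the file name clickhouse-user-passwords.yaml is an example, and the Secret must be in the same namespace as the ClickHouseInstallation, which is instana-clickhouse in this topic.

# Base64-encode each value (clear-text password or hash) before you paste it into the Secret definition.
echo -n "<default_password>" | base64
echo -n "<clickhouse_user_password>" | base64

# Create the Secret from the file that contains the resource definition.
kubectl apply -f clickhouse-user-passwords.yaml -n instana-clickhouse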
- Create a YAML file, for example clickhouse_installation.yaml, with the ClickHouseInstallation resource definition.
  - Update the zookeeper.nodes.host field with the hostname of your ZooKeeper cluster.
  - If you are using a hash as a password, update the default_password and clickhouse_user_password values in the spec.configuration.users section.

apiVersion: "clickhouse.altinity.com/v1"
kind: "ClickHouseInstallation"
metadata:
  name: "instana"
spec:
  defaults:
    templates:
      dataVolumeClaimTemplate: instana-clickhouse-data-volume
      logVolumeClaimTemplate: instana-clickhouse-log-volume
      serviceTemplate: service-template
  configuration:
    files:
      config.d/storage.xml: |
        <clickhouse>
          <storage_configuration>
            <disks>
              <default/>
              <cold_disk>
                <path>/var/lib/clickhouse-cold/</path>
              </cold_disk>
            </disks>
            <policies>
              <logs_policy>
                <volumes>
                  <data>
                    <disk>default</disk>
                  </data>
                  <cold>
                    <disk>cold_disk</disk>
                  </cold>
                </volumes>
              </logs_policy>
              <logs_policy_v4>
                <volumes>
                  <tier1>
                    <disk>default</disk>
                  </tier1>
                  <tier2>
                    <disk>cold_disk</disk>
                  </tier2>
                </volumes>
              </logs_policy_v4>
            </policies>
          </storage_configuration>
        </clickhouse>
    clusters:
      - name: local
        templates:
          podTemplate: clickhouse
        layout:
          shardsCount: 1
          replicasCount: 2
        schemaPolicy:
          replica: None
          shard: None
    zookeeper:
      nodes:
        - host: instana-zookeeper-headless.instana-clickhouse
    profiles:
      default/max_memory_usage: 10000000000
      default/joined_subquery_requires_alias: 0
      default/max_execution_time: 100
      default/max_query_size: 1048576
      default/use_uncompressed_cache: 0
      default/enable_http_compression: 1
      default/load_balancing: random
      default/background_pool_size: 32
      default/background_schedule_pool_size: 32
      default/distributed_directory_monitor_split_batch_on_failure: 1
      default/distributed_directory_monitor_batch_inserts: 1
      default/insert_distributed_sync: 1
      default/log_queries: 1
      default/log_query_views: 1
      default/max_threads: 16
      default/allow_experimental_database_replicated: 1
      default/allow_experimental_analyzer: 0
    quotas:
      default/interval/duration: 3600
      default/interval/queries: 0
      default/interval/errors: 0
      default/interval/result_rows: 0
      default/interval/read_rows: 0
      default/interval/execution_time: 0
    settings:
      remote_servers/all-sharded/secret: clickhouse-default-pass
      remote_servers/all-replicated/secret: clickhouse-default-pass
      remote_servers/local/secret: clickhouse-default-pass
      max_concurrent_queries: 200
      max_table_size_to_drop: 0
      max_partition_size_to_drop: 0
    users:
      default/password:
        valueFrom:
          secretKeyRef:
            name: clickhouse-user-passwords
            key: default_password
      clickhouse-user/networks/ip: "::/0"
      clickhouse-user/password:
        valueFrom:
          secretKeyRef:
            name: clickhouse-user-passwords
            key: clickhouse_user_password
  templates:
    podTemplates:
      - name: clickhouse
        spec:
          initContainers:
            - name: waiting-configmap
              image: registry.access.redhat.com/ubi9/ubi-minimal:latest
              command: ['sh', '-c', 'echo Waiting for ConfigMap readiness && sleep 840']
          containers:
            - name: instana-clickhouse
              image: artifact-public.instana.io/clickhouse:23.3.2.37-4-lts-ibm
              imagePullPolicy: Always
              command:
                - clickhouse-server
                - --config-file=/etc/clickhouse-server/config.xml
              volumeMounts:
                - name: instana-clickhouse-data-cold-volume
                  mountPath: /var/lib/clickhouse-cold/
            - name: clickhouse-log
              image: registry.access.redhat.com/ubi9/ubi-minimal:latest
              args:
                - while true; do sleep 30; done;
              command:
                - /bin/sh
                - -c
                - --
          securityContext:
            fsGroup: 0
            runAsGroup: 0
            runAsUser: 1001
          imagePullSecrets:
            - name: instana-registry
    volumeClaimTemplates:
      - name: instana-clickhouse-data-volume
        spec:
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 200Gi
      - name: instana-clickhouse-log-volume
        spec:
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 100Gi
      - name: instana-clickhouse-data-cold-volume
        spec:
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 60Gi
    serviceTemplates:
      - name: service-template
        generateName: "clickhouse-{chi}"
        spec:
          ports:
            - name: http
              port: 8123
            - name: tcp
              port: 9000
          type: ClusterIP
- Complete the steps in Deploying and verifying ClickHouse (online and offline).
Installing ClickHouse offline
If you did not yet pull the ClickHouse images from the external registry when you prepared for installation, you can pull them now. Run the following commands on your bastion host. Then, copy the images to your Instana host that is in your air-gapped environment.
docker pull artifact-public.instana.io/clickhouse-operator:v1.1.0
docker pull artifact-public.instana.io/clickhouse:23.3.2.37-4-lts-ibm
docker pull registry.access.redhat.com/ubi9/ubi-minimal:latest
Complete the following steps on your Instana host.
- Retag the images to your internal image registry.

docker tag artifact-public.instana.io/clickhouse-operator:v1.1.0 <internal-image-registry>/clickhouse-operator:v1.1.0
docker tag artifact-public.instana.io/clickhouse:23.3.2.37-4-lts-ibm <internal-image-registry>/clickhouse:23.3.2.37-4-lts-ibm
docker tag registry.access.redhat.com/ubi9/ubi-minimal:latest <internal-image-registry>/ubi9/ubi-minimal:latest
- Push the images to your internal image registry.

docker push <internal-image-registry>/clickhouse-operator:v1.1.0
docker push <internal-image-registry>/clickhouse:23.3.2.37-4-lts-ibm
docker push <internal-image-registry>/ubi9/ubi-minimal:latest
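Optionally, list the local images to confirm that the retagged copies exist before and after the push.

docker images | grep -E 'clickhouse|ubi-minimal'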
- To install the ClickHouse instances, you need the instana-clickhouse namespace. Check whether the instana-clickhouse namespace exists in your cluster. If it does not exist, you can create it now.
  - Check whether the instana-clickhouse namespace exists.

    kubectl get namespace | grep clickhouse

  - If the instana-clickhouse namespace does not exist, create it now.

    kubectl create namespace instana-clickhouse
- Optional: Create an image pull secret if your internal image registry needs authentication.

kubectl create secret docker-registry <secret_name> --namespace instana-clickhouse \
  --docker-username=<registry_username> \
  --docker-password=<registry_password> \
  --docker-server=<internal-image-registry>:<internal-image-registry-port> \
  --docker-email=<registry_email>
- Install the ClickHouse operator. If you created an image pull secret in the previous step, add --set imagePullSecrets[0].name="<internal-image-registry-pull-secret>" to the following command.

helm install clickhouse-operator ibm-clickhouse-operator-1.1.0.tgz -n instana-clickhouse \
  --version=1.1.0 \
  --set operator.image.repository=<internal-image-registry>/clickhouse-operator \
  --set operator.image.tag=v1.1.0
- Create a YAML file, for example clickhouse-scc.yaml, with the SecurityContextConstraints (SCC) definition.

apiVersion: security.openshift.io/v1
kind: SecurityContextConstraints
metadata:
  name: clickhouse-scc
runAsUser:
  type: MustRunAs
  uid: 1001
seLinuxContext:
  type: RunAsAny
fsGroup:
  type: RunAsAny
allowHostDirVolumePlugin: false
allowHostNetwork: true
allowHostPorts: true
allowPrivilegedContainer: false
allowHostIPC: true
allowHostPID: true
readOnlyRootFilesystem: false
users:
  - system:serviceaccount:instana-clickhouse:clickhouse-operator
  - system:serviceaccount:instana-clickhouse:clickhouse-operator-altinity-clickhouse-operator
  - system:serviceaccount:instana-clickhouse:default
- Create the SCC resource.

kubectl apply -f clickhouse-scc.yaml
- Decide on a default password and a password for the ClickHouse user. In a later step, you create a ClickHouseInstallation CR in which you must specify the passwords in the spec.configuration.users section. You can use either a clear-text password, or a SHA256 or SHA1 hash of the password.
  - Clear-text: Generate a default_password and a clickhouse_user_password. Later, you use these passwords to create a secret. Run the following command twice to generate the two passwords.

    PASSWORD=$(base64 < /dev/urandom | head -c16); echo "Password: $PASSWORD";

  - SHA256 hash: Generate a default_password and a clickhouse_user_password, and their corresponding SHA256 hashes. Run the following command twice to generate the two passwords. Later, you use these passwords to create a secret.

    PASSWORD=$(base64 < /dev/urandom | head -c16); echo "Password: $PASSWORD"; HEX=$(echo -n "$PASSWORD" | sha256sum | tr -d '-'); echo "SHA256: $HEX"

  - SHA1 hash: Generate a default_password and a clickhouse_user_password, and their corresponding SHA1 hashes. Run the following command twice to generate the two passwords. Later, you use these passwords to create a secret.

    PASSWORD=$(base64 < /dev/urandom | head -c8); echo "$PASSWORD"; echo -n "$PASSWORD" | sha1sum | tr -d '-' | xxd -r -p | sha1sum | tr -d '-'
- Generate a Secret resource by using the passwords that you created in the previous step.

apiVersion: v1
kind: Secret
metadata:
  name: clickhouse-user-passwords
type: Opaque
data:
  default_password: <default_password>            # base64 encoded value for default_password
  clickhouse_user_password: <password>            # base64 encoded value for password
- Create a YAML file, for example clickhouse_installation.yaml, with the ClickHouseInstallation resource definition.
  - Update the zookeeper.nodes.host field with the hostname of your ZooKeeper cluster.
  - If you are using a hash as a password, update the default_password and clickhouse_user_password values in the spec.configuration.users section.
apiVersion: "clickhouse.altinity.com/v1"
kind: "ClickHouseInstallation"
metadata:
name: "instana"
spec:
defaults:
templates:
dataVolumeClaimTemplate: instana-clickhouse-data-volume
logVolumeClaimTemplate: instana-clickhouse-log-volume
serviceTemplate: service-template
configuration:
files:
config.d/storage.xml: |
<clickhouse>
<storage_configuration>
<disks>
<default/>
<cold_disk>
<path>/var/lib/clickhouse-cold/</path>
</cold_disk>
</disks>
<policies>
<logs_policy>
<volumes>
<data>
<disk>default</disk>
</data>
<cold>
<disk>cold_disk</disk>
</cold>
</volumes>
</logs_policy>
<logs_policy_v4>
<volumes>
<tier1>
<disk>default</disk>
</tier1>
<tier2>
<disk>cold_disk</disk>
</tier2>
</volumes>
</logs_policy_v4>
</policies>
</storage_configuration>
</clickhouse>
clusters:
- name: local
templates:
podTemplate: clickhouse
layout:
shardsCount: 1
replicasCount: 2
schemaPolicy:
replica: None
shard: None
zookeeper:
nodes:
- host: instana-zookeeper-headless.instana-clickhouse
profiles:
default/max_memory_usage: 10000000000
default/joined_subquery_requires_alias: 0
default/max_execution_time: 100
default/max_query_size: 1048576
default/use_uncompressed_cache: 0
default/enable_http_compression: 1
default/load_balancing: random
default/background_pool_size: 32
default/background_schedule_pool_size: 32
default/distributed_directory_monitor_split_batch_on_failure: 1
default/distributed_directory_monitor_batch_inserts: 1
default/insert_distributed_sync: 1
default/log_queries: 1
default/log_query_views: 1
default/max_threads: 16
default/allow_experimental_database_replicated: 1
default/allow_experimental_analyzer: 0
quotas:
default/interval/duration: 3600
default/interval/queries: 0
default/interval/errors: 0
default/interval/result_rows: 0
default/interval/read_rows: 0
default/interval/execution_time: 0
settings:
remote_servers/all-sharded/secret: clickhouse-default-pass
remote_servers/all-replicated/secret: clickhouse-default-pass
remote_servers/local/secret: clickhouse-default-pass
max_concurrent_queries: 200
max_table_size_to_drop: 0
max_partition_size_to_drop: 0
users:
default/password:
valueFrom:
secretKeyRef:
name: clickhouse-user-passwords
key: default_password
clickhouse-user/networks/ip: "::/0"
clickhouse-user/password:
valueFrom:
secretKeyRef:
name: clickhouse-user-passwords
key: clickhouse_user_password
templates:
podTemplates:
- name: clickhouse
spec:
initContainers:
- name: waiting-configmap
image: <internal-image-registry>/ubi9/ubi-minimal:latest
command: ['sh', '-c', 'echo Waiting for ConfigMap readiness && sleep 840']
containers:
- name: instana-clickhouse
image: <internal-image-registry>/clickhouse:23.3.2.37-4-lts-ibm
imagePullPolicy: Always
command:
- clickhouse-server
- --config-file=/etc/clickhouse-server/config.xml
volumeMounts:
- mountPath: /var/lib/clickhouse-cold/
name: instana-clickhouse-data-cold-volume
- name: clickhouse-log
image: <internal-image-registry>/ubi9/ubi-minimal:latest
args:
- while true; do sleep 30; done;
command:
- /bin/sh
- -c
- --
securityContext:
fsGroup: 0
runAsGroup: 0
runAsUser: 1001
# Optional: if you created an image pull secret for your internal registry, uncomment the following lines and update the image pull secret information.
# imagePullSecrets:
# - name: <internal-image-registry-pull-secret>
volumeClaimTemplates:
- name: instana-clickhouse-data-volume
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 200Gi
- name: instana-clickhouse-log-volume
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 60Gi
- name: instana-clickhouse-data-cold-volume
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 60Gi
serviceTemplates:
- name: service-template
generateName: "clickhouse-{chi}"
spec:
ports:
- name: http
port: 8123
- name: tcp
port: 9000
type: ClusterIP
- Complete the steps in Deploying and verifying ClickHouse (online and offline).
Deploying and verifying ClickHouse (online and offline)
Complete these steps to deploy the ClickHouse instance and create the data store.
- To deploy ClickHouse, run the following command:
kubectl apply -f clickhouse_installation.yaml -n instana-clickhouse
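You can follow the rollout while the operator creates the StatefulSets. This optional check assumes that the ClickHouseInstallation CRD is registered under its standard plural name, clickhouseinstallations.

kubectl get clickhouseinstallations -n instana-clickhouse
kubectl get pods -n instana-clickhouse -w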
- In the config.yaml file, store the ClickHouse user password.

datastoreConfigs:
  ...
  clickhouseConfigs:
    - user: clickhouse-user
      password: <USER_GENERATED_PASSWORD>
      adminUser: clickhouse-user
      adminPassword: <USER_GENERATED_PASSWORD>
  ...
- If you want to configure the ClickHouse cluster in the core configuration, do not enter the load-balancer service of the entire cluster. Add only the individual node services by using the following scheme, listing replica 1 and replica 2 of each shard in ascending shard order:

spec:
  ...
  datastoreConfigs:
    clickhouseConfigs:
      - clusterName: local
        authEnabled: true
        hosts:
          - chi-instana-local-0-0-0.instana-clickhouse
          - chi-instana-local-0-1-0.instana-clickhouse
          - chi-instana-local-1-0-0.instana-clickhouse
          - chi-instana-local-1-1-0.instana-clickhouse
          - chi-instana-local-n-0-0.instana-clickhouse
          - chi-instana-local-n-1-0.instana-clickhouse
  ...
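The host entries correspond to the individual ClickHouse nodes that the operator creates for each shard and replica. To confirm the names that exist in your cluster before you add them, you can list the pods and services; this is an optional check.

kubectl get pods,services -n instana-clickhouse | grep chi-instana-local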
- Verify the ClickHouse operator deployment.
kubectl get all -n instana-clickhouse
If the ClickHouse operator is deployed successfully, the command output shows the operator status as Running, as shown in the following example:

NAME                                                                  READY   STATUS    RESTARTS   AGE
pod/chi-instana-local-0-0-0                                           2/2     Running   0          8m46s
pod/chi-instana-local-0-1-0                                           2/2     Running   0          8m5s
pod/clickhouse-operator-altinity-clickhouse-operator-86b5f9b57689rr   2/2     Running   0          70m
pod/instana-zookeeper-0                                               1/1     Running   0          4h3m
pod/instana-zookeeper-1                                               1/1     Running   0          4h2m
pod/instana-zookeeper-2                                               1/1     Running   0          4h2m

NAME                                                                TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)                                        AGE
service/chi-instana-local-0-0                                       ClusterIP      None            <none>          9000/TCP,8123/TCP,9009/TCP                     8m16s
service/chi-instana-local-0-1                                       ClusterIP      None            <none>          9000/TCP,8123/TCP,9009/TCP                     7m35s
service/clickhouse-instana                                          LoadBalancer   192.168.1.167   35.246.237.19   8123:32714/TCP,9000:31317/TCP                  8m11s
service/clickhouse-operator-altinity-clickhouse-operator-metrics    ClusterIP      192.168.1.136   <none>          8888/TCP                                       70m
service/instana-zookeeper-admin-server                              ClusterIP      192.168.1.126   <none>          8080/TCP                                       4h3m
service/instana-zookeeper-client                                    ClusterIP      192.168.1.13    <none>          2181/TCP                                       4h3m
service/instana-zookeeper-headless                                  ClusterIP      None            <none>          2181/TCP,2888/TCP,3888/TCP,7000/TCP,8080/TCP   4h3m

NAME                                                               READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/clickhouse-operator-altinity-clickhouse-operator   1/1     1            1           70m

NAME                                                                          DESIRED   CURRENT   READY   AGE
replicaset.apps/clickhouse-operator-altinity-clickhouse-operator-86b5f9b579   1         1         1       70m

NAME                                     READY   AGE
statefulset.apps/chi-instana-local-0-0   1/1     8m50s
statefulset.apps/chi-instana-local-0-1   1/1     8m9s
statefulset.apps/instana-zookeeper       3/3     4h3m
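As a further optional check, you can connect to one of the ClickHouse pods and confirm that every shard and replica of the local cluster is registered. The pod and container names in this sketch come from the example output above, the default user's clear-text password is the one that you generated earlier, and the clickhouse-client binary is assumed to be available in the container image.

kubectl exec -n instana-clickhouse chi-instana-local-0-0-0 -c instana-clickhouse -- \
  clickhouse-client --user default --password "<default_password>" \
  --query "SELECT cluster, shard_num, replica_num, host_name FROM system.clusters WHERE cluster = 'local'"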