Creating a ClickHouseKeeper data store on Linux x86_64

Install the ClickHouse operator and set up ClickHouse Keeper.

When you install Instana for the first time, use ClickHouse Keeper instead of ZooKeeper.

Before you begin

Make sure that the following prerequisites are met:

  • The online or offline host is prepared to pull images from the external repository.
  • The Helm repo is added.

For more information, see Preparing to install data store operators.
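
If the Helm repository is not added yet, you can add it now. This is a minimal sketch; the repository URL shown here is an assumption, so use the URL from your preparation steps if it differs.

# Add the Instana Helm repository (the URL is an assumption; verify it against your preparation steps).
helm repo add instana https://artifact-public.instana.io/artifactory/rel-helm-customer-virtual \
  --username _ \
  --password <download_key>
# Refresh the local chart index.
helm repo update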

ClickHouse operator versions and image tags

The following images are required for the pinned operator and Helm chart versions.

Table 1. Operator versions and image tags for deployment

Platform        Operator version   Helm chart version   Image with tag
Linux® x86_64   v1.2.0             v1.2.0               artifact-public.instana.io/clickhouse-operator:v1.2.0
                                                        artifact-public.instana.io/clickhouse-openssl:23.8.16.40-1-lts-ibm

Installing ClickHouseKeeper online

To install ClickHouse Keeper online, complete the following steps.

  1. Check whether the instana-clickhouse namespace exists in your cluster. If it does not exist, you can create it now.

    1. Check whether the instana-clickhouse namespace exists.
      kubectl get namespace | grep clickhouse
      
    2. If the instana-clickhouse namespace does not exist, create it now.
      kubectl create namespace instana-clickhouse
      
  2. Create an image pull secret for the ClickHouse images. Replace <download_key> with your own download key.

    kubectl create secret docker-registry instana-registry \
    --namespace=instana-clickhouse \
    --docker-username=_ \
    --docker-password=<download_key> \
    --docker-server=artifact-public.instana.io
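
    To confirm that the secret exists, you can list it:

    kubectl get secret instana-registry -n instana-clickhouse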
    
  3. If you are using a Red Hat® OpenShift® cluster, create a SecurityContextConstraints (SCC) resource before you deploy the ClickHouse operator. Create a YAML file, for example clickhouse-scc.yaml, with the SCC definition.

    apiVersion: security.openshift.io/v1
    kind: SecurityContextConstraints
    metadata:
      name: clickhouse-scc
    runAsUser:
      type: MustRunAs
      uid: 1001
    seLinuxContext:
      type: RunAsAny
    fsGroup:
      type: RunAsAny
    allowHostDirVolumePlugin: false
    allowHostNetwork: true
    allowHostPorts: true
    allowPrivilegedContainer: false
    allowHostIPC: true
    allowHostPID: true
    readOnlyRootFilesystem: false
    users:
      - system:serviceaccount:instana-clickhouse:clickhouse-operator
      - system:serviceaccount:instana-clickhouse:clickhouse-operator-ibm-clickhouse-operator
      - system:serviceaccount:instana-clickhouse:default
    
  4. Create the SCC resource.

    kubectl apply -f clickhouse-scc.yaml
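
    To verify that the SCC resource is created, you can query it:

    kubectl get scc clickhouse-scc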
    
  5. Install the ClickHouse operator.

    helm install clickhouse-operator instana/ibm-clickhouse-operator \
      -n instana-clickhouse \
      --version=v1.2.0 \
      --set operator.image.repository=artifact-public.instana.io/clickhouse-operator \
      --set operator.image.tag=v1.2.0 \
      --set imagePullSecrets[0].name="instana-registry"
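
    To confirm that the operator is installed, you can check the Helm release and the operator pod:

    helm list -n instana-clickhouse
    kubectl get pods -n instana-clickhouse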
    
  6. Create a YAML file, for example clickhouse_keeper.yaml, with the ClickHouseKeeperInstallation resource definition.

    apiVersion: "clickhouse-keeper.altinity.com/v1"
    kind: "ClickHouseKeeperInstallation"
    metadata:
      name: clickhouse-keeper
      namespace: instana-clickhouse
    spec:
      configuration:
        clusters:
          - name: "local"
            layout:
              replicasCount: 3
        settings:
            logger/level: "information"
            logger/console: "true"
            listen_host: "0.0.0.0"
            keeper_server/snapshot_storage_path: /var/lib/clickhouse-keeper/coordination/snapshots/store
            keeper_server/log_storage_path: /var/lib/clickhouse-keeper/coordination/logs/store
            keeper_server/storage_path: /var/lib/clickhouse-keeper/
            keeper_server/tcp_port: "2181"
            keeper_server/four_letter_word_white_list: "*"
            keeper_server/coordination_settings/raft_logs_level: "information"
            keeper_server/raft_configuration/server/port: "9444"
            prometheus/endpoint: "/metrics"
            prometheus/port: "7000"
            prometheus/metrics: "true"
            prometheus/events: "true"
            prometheus/asynchronous_metrics: "true"
            prometheus/status_info: "false"
            zookeeper/node/host: "localhost"
            zookeeper/node/port: "9181"
      templates:
        podTemplates:
          - name: clickhouse-keeper
            spec:
              containers:
                - name: clickhouse-keeper
                  imagePullPolicy: IfNotPresent
                  image: artifact-public.instana.io/clickhouse-openssl:23.8.16.40-1-lts-ibm
                  command:
                    - clickhouse-keeper
                    - --config-file=/etc/clickhouse-keeper/keeper_config.xml
                  resources:
                    requests:
                      memory: "1Gi"
              imagePullSecrets:
                - name: instana-registry
              securityContext:
                fsGroup: 0
                runAsGroup: 0
                runAsUser: 1001
              initContainers:
                - name: server-id-injector
                  imagePullPolicy: IfNotPresent
                  image: artifact-public.instana.io/clickhouse-openssl:23.8.16.40-1-lts-ibm
        volumeClaimTemplates:
          - name: log-storage-path
            spec:
              storageClassName: <storage_class_name>
              accessModes:
                - ReadWriteOnce
              resources:
                requests:
                  storage: 20Gi
          - name: snapshot-storage-path
            spec:
              storageClassName: <storage_class_name>
              accessModes:
                - ReadWriteOnce
              resources:
                requests:
                  storage: 20Gi
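
    The <storage_class_name> placeholder must match a storage class that exists in your cluster. To list the available storage classes, run:

    kubectl get storageclass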
    
  7. Complete the steps in Deploying and verifying ClickHouseKeeper (online and offline).

Installing ClickHouseKeeper offline

Install the ClickHouse operator in an air-gapped environment.

If you did not pull the ClickHouse images from the external registry when you prepared for installation, you can pull them now. Run the following commands on your bastion host. Then, copy the images to your Instana host in the air-gapped environment, as outlined in the sketch after the commands.

docker pull artifact-public.instana.io/clickhouse-operator:v1.2.0
docker pull artifact-public.instana.io/clickhouse-openssl:23.8.16.40-1-lts-ibm
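
One way to copy the images is to export them to an archive on the bastion host and load them on the Instana host. This is a sketch; adapt the archive path to your environment.

# On the bastion host: export both images into a single archive.
docker save -o clickhouse-images.tar \
  artifact-public.instana.io/clickhouse-operator:v1.2.0 \
  artifact-public.instana.io/clickhouse-openssl:23.8.16.40-1-lts-ibm

# On the Instana host: load the images from the archive.
docker load -i clickhouse-images.tar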

To install the ClickHouse operator in an air-gapped environment, complete the following steps on your Instana host.

  1. Retag the images to your internal image registry.

    docker tag artifact-public.instana.io/clickhouse-operator:v1.2.0 <internal-image-registry>/clickhouse-operator:v1.2.0
    docker tag artifact-public.instana.io/clickhouse-openssl:23.8.16.40-1-lts-ibm <internal-image-registry>/clickhouse-openssl:23.8.16.40-1-lts-ibm
    
  2. Push the images to your internal image registry.

    docker push <internal-image-registry>/clickhouse-operator:v1.2.0
    docker push <internal-image-registry>/clickhouse-openssl:23.8.16.40-1-lts-ibm
    
  3. Create a YAML file, for example clickhouse-scc.yaml, with the SecurityContextConstraints (SCC) definition.

    apiVersion: security.openshift.io/v1
    kind: SecurityContextConstraints
    metadata:
      name: clickhouse-scc
    runAsUser:
      type: MustRunAs
      uid: 1001
    seLinuxContext:
      type: RunAsAny
    fsGroup:
      type: RunAsAny
    allowHostDirVolumePlugin: false
    allowHostNetwork: true
    allowHostPorts: true
    allowPrivilegedContainer: false
    allowHostIPC: true
    allowHostPID: true
    readOnlyRootFilesystem: false
    users:
      - system:serviceaccount:instana-clickhouse:clickhouse-operator
      - system:serviceaccount:instana-clickhouse:clickhouse-operator-ibm-clickhouse-operator
      - system:serviceaccount:instana-clickhouse:default
    
  4. Create the SCC resource.

    kubectl apply -f clickhouse-scc.yaml
    
  5. Check whether the instana-clickhouse namespace exists in your cluster. If it does not exist, you can create it now.

    1. Check whether the instana-clickhouse namespace exists.
      kubectl get namespace | grep clickhouse
      
    2. If the instana-clickhouse namespace does not exist, create it now.
      kubectl create namespace instana-clickhouse
      
  6. If your internal image registry needs authentication, create an image pull secret.

    kubectl create secret docker-registry <secret_name> --namespace instana-clickhouse \
    --docker-username=<registry_username> \
    --docker-password=<registry_password> \
    --docker-server=<internal-image-registry>:<internal-image-registry-port> \
    --docker-email=<registry_email>
    
  7. Install the ClickHouse operator.

    helm install clickhouse-operator ibm-clickhouse-operator-v1.2.0.tgz \
      -n instana-clickhouse \
      --version=v1.2.0 \
      --set operator.image.repository=<internal-image-registry>/clickhouse-operator \
      --set operator.image.tag=v1.2.0
    
  8. Create a YAML file, for example clickhouse_keeper.yaml, with the ClickHouseKeeperInstallation resource definition. Reference the images from your internal image registry and the pull secret that you created in step 6.

     apiVersion: "clickhouse-keeper.altinity.com/v1"
     kind: "ClickHouseKeeperInstallation"
     metadata:
       name: clickhouse-keeper
       namespace: instana-clickhouse
     spec:
       configuration:
         clusters:
           - name: "local"
             layout:
               replicasCount: 3
         settings:
             logger/level: "information"
             logger/console: "true"
             listen_host: "0.0.0.0"
             keeper_server/snapshot_storage_path: /var/lib/clickhouse-keeper/coordination/snapshots/store
             keeper_server/log_storage_path: /var/lib/clickhouse-keeper/coordination/logs/store
             keeper_server/storage_path: /var/lib/clickhouse-keeper/
             keeper_server/tcp_port: "2181"
             keeper_server/four_letter_word_white_list: "*"
             keeper_server/coordination_settings/raft_logs_level: "information"
             keeper_server/raft_configuration/server/port: "9444"
             prometheus/endpoint: "/metrics"
             prometheus/port: "7000"
             prometheus/metrics: "true"
             prometheus/events: "true"
             prometheus/asynchronous_metrics: "true"
             prometheus/status_info: "false"
             zookeeper/node/host: "localhost"
             zookeeper/node/port: "9181"
       templates:
         podTemplates:
           - name: clickhouse-keeper
             spec:
               containers:
                 - name: clickhouse-keeper
                   imagePullPolicy: IfNotPresent
                   image: <internal-image-registry>/clickhouse-openssl:23.8.16.40-1-lts-ibm
                   command:
                     - clickhouse-keeper
                     - --config-file=/etc/clickhouse-keeper/keeper_config.xml
                   resources:
                     requests:
                       memory: "1Gi"
               imagePullSecrets:
                 - name: <secret_name>
               securityContext:
                 fsGroup: 0
                 runAsGroup: 0
                 runAsUser: 1001
               initContainers:
                 - name: server-id-injector
                   imagePullPolicy: IfNotPresent
                   image: <internal-image-registry>/clickhouse-openssl:23.8.16.40-1-lts-ibm
         volumeClaimTemplates:
           - name: log-storage-path
             spec:
               storageClassName: <storage_class_name>
               accessModes:
                 - ReadWriteOnce
               resources:
                 requests:
                   storage: 20Gi
           - name: snapshot-storage-path
             spec:
               storageClassName: <storage_class_name>
               accessModes:
                 - ReadWriteOnce
               resources:
                 requests:
                   storage: 20Gi
    
  9. Complete the steps in Deploying and verifying ClickHouseKeeper (online and offline).

Deploying and verifying ClickHouseKeeper (online and offline)

To deploy the ClickHouse Keeper instance and create the data store, complete the following steps.

  1. Deploy ClickHouse Keeper.

    kubectl apply -f clickhouse_keeper.yaml -n instana-clickhouse
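
    The keeper pods can take a few minutes to start. To wait until all three replicas are ready, you can watch the stateful set rollout; the stateful set name clickhouse-keeper matches the example output in the next step:

    kubectl rollout status statefulset/clickhouse-keeper -n instana-clickhouse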
    
  2. Verify the ClickHouse operator and ClickHouse Keeper deployment.

    kubectl get all -n instana-clickhouse
    

     If the ClickHouse operator and ClickHouse Keeper are deployed successfully, the command output shows their pods in Running status, as shown in the following example:

     NAME                                                               READY   STATUS    RESTARTS   AGE
     pod/clickhouse-keeper-0                                            1/1     Running   0          2d7h
     pod/clickhouse-keeper-1                                            1/1     Running   0          2d8h
     pod/clickhouse-keeper-2                                            1/1     Running   0          2d8h
     pod/clickhouse-operator-ibm-clickhouse-operator-7754679687-bpwlx   1/1     Running   0          2d8h
    
     NAME                                                          TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
     service/clickhouse-keeper                                     ClusterIP   172.30.115.37    <none>        2181/TCP,7000/TCP            13d
     service/clickhouse-keeper-headless                            ClusterIP   None             <none>        9444/TCP                     13d
    
     NAME                                                          READY   UP-TO-DATE   AVAILABLE   AGE
     deployment.apps/clickhouse-operator-ibm-clickhouse-operator   1/1     1            1           79d
    
     NAME                                                                     DESIRED   CURRENT   READY   AGE
     replicaset.apps/clickhouse-operator-ibm-clickhouse-operator-7754679687   1         1         1       79d
    
     NAME                                     READY   AGE
     statefulset.apps/clickhouse-keeper       3/3     13d
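
    Optionally, you can run a four-letter-word health check against a keeper replica on the client port (2181 in this configuration, with four_letter_word_white_list set to "*"). This sketch assumes that nc is available in the container image; if it is not, run the check from any host that can reach the service:

    # Expect the reply "imok" from a healthy replica (assumes nc exists in the image).
    kubectl exec -n instana-clickhouse clickhouse-keeper-0 -- bash -c "echo ruok | nc 127.0.0.1 2181"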