Creating a ClickHouse data store on Linux on IBM Z and LinuxONE

Install the ClickHouse operator and set up the data store.

After the initial creation of a ClickHouse cluster or after the number of shards is increased, the cluster might need a few minutes to attach the shards.
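
After you apply the ClickHouseInstallation resource later in this topic, you can watch its progress. The following check is a sketch; it assumes that the operator exposes status through the clickhouseinstallations.clickhouse.altinity.com custom resource, which matches the apiVersion that this topic uses. With the Altinity-based operator, the STATUS column typically shows Completed when all shards are attached.

kubectl get clickhouseinstallations.clickhouse.altinity.com -n instana-clickhouse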

Before you begin

Make sure that you prepared your online and offline host to pull images from the external repository. Also, ensure that you added the Helm repo.

For more information, see Preparing to install data store operators.

ClickHouse operator versions and image tags

The following images are needed for the pinned Helm chart or operator versions.

Table 1. Operator versions and image tags for deployment

  Platform:            Linux® on IBM Z® and LinuxONE
  Operator version:    v0.1.2
  Helm chart version:  v0.1.2
  Images with tags:    artifact-public.instana.io/clickhouse-operator:v0.1.2
                       artifact-public.instana.io/clickhouse-openssl:23.8.10.43-1-lts-ibm

Installing ClickHouse online

Complete these steps to install the ClickHouse data store.

  1. Check whether the instana-clickhouse namespace exists in your cluster. If it does not exist, you can create it now.

    1. Check whether the instana-clickhouse namespace exists.
      kubectl get namespace | grep clickhouse
      
    2. If the instana-clickhouse namespace does not exist, create it now.
      kubectl create namespace instana-clickhouse
      
  2. Create an image pull secret for the ClickHouse images. Replace <download_key> with your own download key.

    kubectl create secret docker-registry instana-registry \
    --namespace=instana-clickhouse \
    --docker-username=_ \
    --docker-password=<download_key> \
    --docker-server=artifact-public.instana.io
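
    To confirm that the secret was created, list it:

    kubectl get secret instana-registry -n instana-clickhouse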
    
  3. Create a YAML file, for example clickhouse-scc.yaml, with the SecurityContextConstraints (SCC) definition.

    apiVersion: security.openshift.io/v1
    kind: SecurityContextConstraints
    metadata:
      name: clickhouse-scc
    runAsUser:
      type: MustRunAs
      uid: 1001
    seLinuxContext:
      type: RunAsAny
    fsGroup:
      type: RunAsAny
    allowHostDirVolumePlugin: false
    allowHostNetwork: true
    allowHostPorts: true
    allowPrivilegedContainer: false
    allowHostIPC: true
    allowHostPID: true
    readOnlyRootFilesystem: false
    users:
      - system:serviceaccount:instana-clickhouse:clickhouse-operator
      - system:serviceaccount:instana-clickhouse:clickhouse-operator-ibm-clickhouse-operator
      - system:serviceaccount:instana-clickhouse:default
    
  4. Create the SCC resource.

    kubectl apply -f clickhouse-scc.yaml
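
    SecurityContextConstraints is an OpenShift API, so this step applies to Red Hat OpenShift clusters. To confirm that the SCC exists and lists the expected service accounts, you can run the following check:

    kubectl get scc clickhouse-scc -o yaml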
    
  5. Install the ClickHouse operator.

    helm install clickhouse-operator instana/ibm-clickhouse-operator \
      -n instana-clickhouse \
      --version=v0.1.2 \
      --set operator.image.repository=artifact-public.instana.io/clickhouse-operator \
      --set operator.image.tag=v0.1.2 \
      --set imagePullSecrets[0].name="instana-registry"
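
    To verify that the chart is installed and that the operator pod starts, you can run the following checks:

    helm list -n instana-clickhouse
    kubectl get pods -n instana-clickhouse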
    
  6. Decide on a password for the ClickHouse user. In a later step, you create a ClickHouseInstallation custom resource (CR) in which you must specify the ClickHouse user password in the spec.configuration.users section. You can use a clear-text password, an SHA256 hash, or a double SHA1 hash.

    • SHA256 hash

      If you want to use an SHA256 hash password, complete the following steps:

      1. Generate a random password and its corresponding SHA256 hash.

        PASSWORD=$(base64 < /dev/urandom | head -c16); echo "Password: $PASSWORD"; HEX=$(echo -n "$PASSWORD" | sha256sum | tr -d ' -'); echo "SHA256: $HEX"
        
      2. In the ClickHouseInstallation CR, replace clickhouse-user/password: "clickhouse-pass" with clickhouse-user/password_sha256_hex: "<SHA256_HEX>". See the following sample code:

        spec:
          configuration:
            users:
              clickhouse-user/password_sha256_hex: "<SHA256_HEX>"
        
    • SHA1 hash

      If you want to use a double SHA1 hash password, complete the following steps:

      1. Generate a random password and its corresponding double SHA1 hash.

        PASSWORD=$(base64 < /dev/urandom | head -c8); echo "$PASSWORD"; echo -n "$PASSWORD" | sha1sum | tr -d '-' | xxd -r -p | sha1sum | tr -d ' -'
        
      2. In the ClickHouseInstallation CR, replace clickhouse-user/password: "clickhouse-pass" with clickhouse-user/password_double_sha1_hex: "<SHA1_HEX>". See the following sample code:

        spec:
          configuration:
            users:
              clickhouse-user/password_double_sha1_hex: "<SHA1_HEX>"
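
      Whichever format you choose, you can sanity-check the hex string before you put it in the CR: an SHA256 digest is 64 hexadecimal characters, and a double SHA1 digest is 40. For example, for the value that is captured in $HEX in the SHA256 step:

      # Expect 64 for password_sha256_hex or 40 for password_double_sha1_hex
      echo -n "$HEX" | tr -d ' ' | wc -c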
        
  7. Create a YAML file, for example clickhouse_installation.yaml, with the ClickHouseInstallation resource definition. Update the zookeeper.nodes.host field with the hostname of your ZooKeeper cluster. If you are using a hash as a password, update the password as shown in the previous step.

    apiVersion: "clickhouse.altinity.com/v1"
    kind: "ClickHouseInstallation"
    metadata:
      name: "instana"
    spec:
      defaults:
        templates:
          dataVolumeClaimTemplate: instana-clickhouse-data-volume
          logVolumeClaimTemplate: instana-clickhouse-log-volume
          serviceTemplate: service-template
      configuration:
        files:
          config.d/storage.xml: |
            <clickhouse>
              <storage_configuration>
                <disks>
                  <default/>
                  <cold_disk>
                    <path>/var/lib/clickhouse-cold/</path>
                  </cold_disk>
                </disks>
                <policies>
                  <logs_policy>
                    <volumes>
                      <data>
                        <disk>default</disk>
                      </data>
                      <cold>
                        <disk>cold_disk</disk>
                      </cold>
                    </volumes>
                  </logs_policy>
                  <logs_policy_v4>
                    <volumes>
                      <tier1>
                        <disk>default</disk>
                      </tier1>
                      <tier2>
                        <disk>cold_disk</disk>
                      </tier2>
                    </volumes>
                  </logs_policy_v4>
                </policies>
              </storage_configuration>
            </clickhouse>
        clusters:
          - name: local
            templates:
              podTemplate: clickhouse
            layout:
              shardsCount: 1
              replicasCount: 2 # The replication count of 2 is fixed for Instana backend installations
            schemaPolicy:
              replica: None
              shard: None
        zookeeper:
          nodes:
            - host: instana-zookeeper-headless.instana-clickhouse
        profiles:
          default/max_memory_usage: 10000000000 # If memory limits are set, this value must be adjusted according to the limits.
          default/joined_subquery_requires_alias: 0
          default/max_execution_time: 100
          default/max_query_size: 1048576
          default/use_uncompressed_cache: 0
          default/enable_http_compression: 1
          default/load_balancing: random
          default/background_pool_size: 32
          default/background_schedule_pool_size: 32
          default/distributed_directory_monitor_split_batch_on_failure: 1
          default/distributed_directory_monitor_batch_inserts: 1
          default/insert_distributed_sync: 1
          default/log_queries: 1
          default/log_query_views: 1
          default/max_threads: 16
          default/allow_experimental_database_replicated: 1
        quotas:
          default/interval/duration: 3600
          default/interval/queries: 0
          default/interval/errors: 0
          default/interval/result_rows: 0
          default/interval/read_rows: 0
          default/interval/execution_time: 0
        settings:
          remote_servers/all-sharded/secret: clickhouse-default-pass
          remote_servers/all-replicated/secret: clickhouse-default-pass
          remote_servers/local/secret: clickhouse-default-pass
          max_concurrent_queries: 200
          max_table_size_to_drop: 0
          max_partition_size_to_drop: 0
        users:
          default/password: "clickhouse-default-pass"
          clickhouse-user/networks/ip: "::/0"
          clickhouse-user/password: "clickhouse-pass"
          # Or
          # Generate password and the corresponding SHA256 hash with:
          # $ PASSWORD=$(base64 < /dev/urandom | head -c8); echo "$PASSWORD"; echo -n "$PASSWORD" | sha256sum | tr -d '-'
          # 6edvj2+d                                                          <- first line is the password
          # a927723f4a42cccc50053e81bab1fcf579d8d8fb54a3ce559d42eb75a9118d65  <- second line is the corresponding SHA256 hash
          # clickhouse-user/password_sha256_hex: "a927723f4a42cccc50053e81bab1fcf579d8d8fb54a3ce559d42eb75a9118d65"
          # Or
          # Generate password and the corresponding SHA1 hash with:
          # $ PASSWORD=$(base64 < /dev/urandom | head -c8); echo "$PASSWORD"; echo -n "$PASSWORD" | sha1sum | tr -d '-' | xxd -r -p | sha1sum | tr -d '-'
          # LJfoOfxl                                  <- first line is the password, put this in the k8s secret
          # 3435258e803cefaab7db2201d04bf50d439f6c7f  <- the corresponding double SHA1 hash, put this below
          # clickhouse-user/password_double_sha1_hex: "3435258e803cefaab7db2201d04bf50d439f6c7f"
      templates:
        podTemplates:
          - name: clickhouse
            spec:
              containers:
                - name: instana-clickhouse
                  image: artifact-public.instana.io/clickhouse-openssl:23.8.10.43-1-lts-ibm
                  command:
                    - clickhouse-server
                    - --config-file=/etc/clickhouse-server/config.xml
                  volumeMounts:
                    - mountPath: /var/lib/clickhouse-cold/
                      name: instana-clickhouse-data-cold-volume
                - name: clickhouse-log
                  image: artifact-public.instana.io/clickhouse-openssl:23.8.10.43-1-lts-ibm
                  args:
                    - while true; do sleep 30; done;
                  command:
                    - /bin/sh
                    - -c
                    - --
              imagePullSecrets:
                - name: instana-registry
              securityContext:
                fsGroup: 0
                runAsGroup: 0
                runAsUser: 1001
            # Optional: To define resources for the ClickHouse containers, add a resources section under each container definition, for example under the instana-clickhouse container. The following values are examples only.
            #     resources:
            #       limits:
            #         cpu: "4"
            #         memory: 4Gi
            #       requests:
            #         cpu: "1"
            #         memory: 2Gi
        volumeClaimTemplates:
          - name: instana-clickhouse-data-volume
            spec:
              accessModes:
                - ReadWriteOnce
              resources:
                requests:
                  storage: 100Gi
          - name: instana-clickhouse-log-volume
            spec:
              accessModes:
                - ReadWriteOnce
              resources:
                requests:
                  storage: 1Gi
          - name: instana-clickhouse-data-cold-volume
            spec:
              accessModes:
                - ReadWriteOnce
              resources:
                requests:
                  storage: 100Gi
        serviceTemplates:
          - name: service-template
            generateName: "clickhouse-{chi}"
            spec:
              ports:
                - name: http
                  port: 8123
                - name: tcp
                  port: 9000
              type: ClusterIP
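
    Before you deploy the file, you can optionally validate it against the custom resource schema on the cluster. This check is a sketch; it requires that the operator and its custom resource definitions are already installed.

    kubectl apply --dry-run=server -f clickhouse_installation.yaml -n instana-clickhouse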
    
  8. Complete the steps in Deploying and verifying ClickHouse (online and offline).

Installing ClickHouse offline

If you did not pull the ClickHouse images from the external registry when you prepared for installation, pull them now. Run the following commands on your bastion host. Then, copy the images to your Instana host in the air-gapped environment.

docker pull artifact-public.instana.io/clickhouse-operator:v0.1.2
docker pull artifact-public.instana.io/clickhouse-openssl:23.8.10.43-1-lts-ibm

Complete the following steps on your Instana host.

  1. Retag the images to your internal image registry.

    docker tag artifact-public.instana.io/clickhouse-operator:v0.1.2 <internal-image-registry>/clickhouse-operator:v0.1.2
    docker tag artifact-public.instana.io/clickhouse-openssl:23.8.10.43-1-lts-ibm <internal-image-registry>/clickhouse-openssl:23.8.10.43-1-lts-ibm
    
  2. Push the images to your internal image registry.

    docker push <internal-image-registry>/clickhouse-operator:v0.1.2
    docker push <internal-image-registry>/clickhouse-openssl:23.8.10.43-1-lts-ibm
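
    To confirm that the push succeeded, you can inspect the manifests in your internal registry. The following check is a sketch that uses the Docker CLI against a registry that serves the standard manifest API:

    docker manifest inspect <internal-image-registry>/clickhouse-operator:v0.1.2
    docker manifest inspect <internal-image-registry>/clickhouse-openssl:23.8.10.43-1-lts-ibm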
    
  3. Create a YAML file, for example clickhouse-scc.yaml, with the SecurityContextConstraints (SCC) definition.

    apiVersion: security.openshift.io/v1
    kind: SecurityContextConstraints
    metadata:
      name: clickhouse-scc
    runAsUser:
      type: MustRunAs
      uid: 1001
    seLinuxContext:
      type: RunAsAny
    fsGroup:
      type: RunAsAny
    allowHostDirVolumePlugin: false
    allowHostNetwork: true
    allowHostPorts: true
    allowPrivilegedContainer: false
    allowHostIPC: true
    allowHostPID: true
    readOnlyRootFilesystem: false
    users:
      - system:serviceaccount:instana-clickhouse:clickhouse-operator
      - system:serviceaccount:instana-clickhouse:clickhouse-operator-ibm-clickhouse-operator
      - system:serviceaccount:instana-clickhouse:default
    
  4. Create the SCC resource.

    kubectl apply -f clickhouse-scc.yaml
    
  5. Check whether the instana-clickhouse namespace exists in your cluster. If it does not exist, you can create it now.

    1. Check whether the instana-clickhouse namespace exists.
      kubectl get namespace | grep clickhouse
      
    2. If the instana-clickhouse namespace does not exist, create it now.
      kubectl create namespace instana-clickhouse
      
  6. Optional: Create an image pull secret if your internal image registry needs authentication.

    kubectl create secret docker-registry <secret_name> --namespace instana-clickhouse \
    --docker-username=<registry_username> \
    --docker-password=<registry_password> \
    --docker-server=<internal-image-registry>:<internal-image-registry-port> \
    --docker-email=<registry_email>
    
  7. Install the ClickHouse operator. If you created an image pull secret in the previous step, add --set imagePullSecrets[0].name="<internal-image-registry-pull-secret>" to the following command.

    helm install clickhouse-operator ibm-clickhouse-operator-v0.1.2.tgz \
      -n instana-clickhouse \
      --version=v0.1.2 \
      --set operator.image.repository=<internal-image-registry>/clickhouse-operator \
      --set operator.image.tag=v0.1.2
    
  8. Decide on a password for the ClickHouse user. In a later step, you create a ClickHouseInstallation custom resource (CR) in which you must specify the ClickHouse user password in the spec.configuration.users section. You can use a clear-text password, an SHA256 hash, or a double SHA1 hash.

    • SHA256 hash

      If you want to use an SHA256 hash password, complete the following steps:

      1. Generate a random password and its corresponding SHA256 hash.

        PASSWORD=$(base64 < /dev/urandom | head -c16); echo "Password: $PASSWORD"; HEX=$(echo -n "$PASSWORD" | sha256sum | tr -d ' -'); echo "SHA256: $HEX"
        
      2. In the ClickHouseInstallation CR, replace clickhouse-user/password: "clickhouse-pass" with clickhouse-user/password_sha256_hex: "<SHA256_HEX>". See the following sample code:

        spec:
          configuration:
            users:
              clickhouse-user/password_sha256_hex: "<SHA256_HEX>"
        
    • SHA1 hash

      If you want to use a double SHA1 hash password, complete the following steps:

      1. Generate a random password and its corresponding double SHA1 hash.

        PASSWORD=$(base64 < /dev/urandom | head -c8); echo "$PASSWORD"; echo -n "$PASSWORD" | sha1sum | tr -d '-' | xxd -r -p | sha1sum | tr -d ' -'
        
      2. In the ClickHouseInstallation CR, replace clickhouse-user/password: "clickhouse-pass" with clickhouse-user/password_double_sha1_hex: "<SHA1_HEX>". See the following sample code:

        spec:
          configuration:
            users:
              clickhouse-user/password_double_sha1_hex: "<SHA1_HEX>"
        
  9. Create a YAML file, for example clickhouse_installation.yaml, with the ClickHouseInstallation resource definition. Update the zookeeper.nodes.host field with the hostname of your ZooKeeper cluster. If you are using a hash as a password, update the password as shown in the previous step.

    apiVersion: "clickhouse.altinity.com/v1"
    kind: "ClickHouseInstallation"
    metadata:
      name: "instana"
    spec:
      defaults:
        templates:
          dataVolumeClaimTemplate: instana-clickhouse-data-volume
          logVolumeClaimTemplate: instana-clickhouse-log-volume
          serviceTemplate: service-template
      configuration:
        files:
          config.d/storage.xml: |
            <clickhouse>
              <storage_configuration>
                <disks>
                  <default/>
                  <cold_disk>
                    <path>/var/lib/clickhouse-cold/</path>
                  </cold_disk>
                </disks>
                <policies>
                  <logs_policy>
                    <volumes>
                      <data>
                        <disk>default</disk>
                      </data>
                      <cold>
                        <disk>cold_disk</disk>
                      </cold>
                    </volumes>
                  </logs_policy>
                  <logs_policy_v4>
                    <volumes>
                      <tier1>
                        <disk>default</disk>
                      </tier1>
                      <tier2>
                        <disk>cold_disk</disk>
                      </tier2>
                    </volumes>
                  </logs_policy_v4>
                </policies>
              </storage_configuration>
            </clickhouse>
        clusters:
          - name: local
            templates:
              podTemplate: clickhouse
            layout:
              shardsCount: 1
              replicasCount: 2 # The replication count of 2 is fixed for Instana backend installations
            schemaPolicy:
              replica: None
              shard: None
        zookeeper:
          nodes:
            - host: instana-zookeeper-headless.instana-clickhouse
        profiles:
          default/max_memory_usage: 10000000000 # If memory limits are set, this value must be adjusted according to the limits.
          default/joined_subquery_requires_alias: 0
          default/max_execution_time: 100
          default/max_query_size: 1048576
          default/use_uncompressed_cache: 0
          default/enable_http_compression: 1
          default/load_balancing: random
          default/background_pool_size: 32
          default/background_schedule_pool_size: 32
          default/distributed_directory_monitor_split_batch_on_failure: 1
          default/distributed_directory_monitor_batch_inserts: 1
          default/insert_distributed_sync: 1
          default/log_queries: 1
          default/log_query_views: 1
          default/max_threads: 16
          default/allow_experimental_database_replicated: 1
        quotas:
          default/interval/duration: 3600
          default/interval/queries: 0
          default/interval/errors: 0
          default/interval/result_rows: 0
          default/interval/read_rows: 0
          default/interval/execution_time: 0
        settings:
          remote_servers/all-sharded/secret: clickhouse-default-pass
          remote_servers/all-replicated/secret: clickhouse-default-pass
          remote_servers/local/secret: clickhouse-default-pass
          max_concurrent_queries: 200
          max_table_size_to_drop: 0
          max_partition_size_to_drop: 0
        users:
          default/password: "clickhouse-default-pass"
          clickhouse-user/networks/ip: "::/0"
          clickhouse-user/password: "clickhouse-pass"
          # Or
          # Generate password and the corresponding SHA256 hash with:
          # $ PASSWORD=$(base64 < /dev/urandom | head -c8); echo "$PASSWORD"; echo -n "$PASSWORD" | sha256sum | tr -d '-'
          # 6edvj2+d                                                          <- first line is the password
          # a927723f4a42cccc50053e81bab1fcf579d8d8fb54a3ce559d42eb75a9118d65  <- second line is the corresponding SHA256 hash
          # clickhouse-user/password_sha256_hex: "a927723f4a42cccc50053e81bab1fcf579d8d8fb54a3ce559d42eb75a9118d65"
          # Or
          # Generate password and the corresponding SHA1 hash with:
          # $ PASSWORD=$(base64 < /dev/urandom | head -c8); echo "$PASSWORD"; echo -n "$PASSWORD" | sha1sum | tr -d '-' | xxd -r -p | sha1sum | tr -d '-'
          # LJfoOfxl                                  <- first line is the password, put this in the k8s secret
          # 3435258e803cefaab7db2201d04bf50d439f6c7f  <- the corresponding double SHA1 hash, put this below
          # clickhouse-user/password_double_sha1_hex: "3435258e803cefaab7db2201d04bf50d439f6c7f"
      templates:
        podTemplates:
          - name: clickhouse
            spec:
              containers:
                - name: instana-clickhouse
                  image: <internal-image-registry>/clickhouse-openssl:23.8.10.43-1-lts-ibm
                  command:
                    - clickhouse-server
                    - --config-file=/etc/clickhouse-server/config.xml
                  volumeMounts:
                    - mountPath: /var/lib/clickhouse-cold/
                      name: instana-clickhouse-data-cold-volume
                - name: clickhouse-log
                  image: <internal-image-registry>/clickhouse-openssl:23.8.10.43-1-lts-ibm
                  args:
                    - while true; do sleep 30; done;
                  command:
                    - /bin/sh
                    - -c
                    - --
              # Optional: If you created an image pull secret for your internal registry, uncomment the following lines and update the secret name.
              # imagePullSecrets:
              #   - name: <internal-image-registry-pull-secret>
              securityContext:
                fsGroup: 0
                runAsGroup: 0
                runAsUser: 1001
            # Optional: To define resources for the ClickHouse containers, add a resources section under each container definition, for example under the instana-clickhouse container. The following values are examples only.
            #     resources:
            #       limits:
            #         cpu: "4"
            #         memory: 4Gi
            #       requests:
            #         cpu: "1"
            #         memory: 2Gi
        volumeClaimTemplates:
          - name: instana-clickhouse-data-volume
            spec:
              accessModes:
                - ReadWriteOnce
              resources:
                requests:
                  storage: 100Gi
          - name: instana-clickhouse-log-volume
            spec:
              accessModes:
                - ReadWriteOnce
              resources:
                requests:
                  storage: 1Gi
          - name: instana-clickhouse-data-cold-volume
            spec:
              accessModes:
                - ReadWriteOnce
              resources:
                requests:
                  storage: 100Gi
        serviceTemplates:
          - name: service-template
            generateName: "clickhouse-{chi}"
            spec:
              ports:
                - name: http
                  port: 8123
                - name: tcp
                  port: 9000
              type: ClusterIP
    
  10. Complete the steps in Deploying and verifying ClickHouse (online and offline).

Deploying and verifying ClickHouse

Complete these steps to deploy the ClickHouse instance and create the data store.

  1. To deploy ClickHouse, run the following command:

    kubectl apply -f clickhouse_installation.yaml -n instana-clickhouse
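
    The ClickHouse pods can take a few minutes to start and attach the shards. You can watch their progress:

    kubectl get pods -n instana-clickhouse -w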
    
  2. In the config.yaml file, store the ClickHouse user password.

     datastoreConfigs:
       ...
       clickhouseConfigs:
         - user: clickhouse-user
           password: <USER_GENERATED_PASSWORD>
           adminUser: clickhouse-user
           adminPassword: <USER_GENERATED_PASSWORD>
       ...
    
  3. If you configure the ClickHouse cluster in the core configuration, do not enter the load-balancer service of the entire cluster. Add only the individual node services by using the following scheme: for each shard, in ascending order, list replica 1 and replica 2:

    spec:
    ...
      datastoreConfigs:
        clickhouseConfigs:
           - clusterName: local
             authEnabled: true
             hosts:
               - chi-instana-local-0-0-0.instana-clickhouse
               - chi-instana-local-0-1-0.instana-clickhouse
               - chi-instana-local-1-0-0.instana-clickhouse
               - chi-instana-local-1-1-0.instana-clickhouse
               - chi-instana-local-n-0-0.instana-clickhouse
               - chi-instana-local-n-1-0.instana-clickhouse
    ...
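
    To see which node services exist in your installation, you can list the ClickHouse pods and services; the shard and replica numbers in the host names correspond to the numbering in the pod names.

    kubectl get pods,services -n instana-clickhouse | grep chi-instana-local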
    
  4. Verify the ClickHouse operator deployment.

    kubectl get all -n instana-clickhouse
    

    If the deployment is successful, the command output shows the operator and ClickHouse pods in the Running status, as shown in the following example:

    NAME                                                                  READY   STATUS    RESTARTS   AGE
    pod/chi-instana-local-0-0-0                                           2/2     Running   0          8m46s
    pod/chi-instana-local-0-1-0                                           2/2     Running   0          8m5s
    pod/clickhouse-operator-altinity-clickhouse-operator-86b5f9b57689rr   2/2     Running   0          70m
    pod/instana-zookeeper-0                                               1/1     Running   0          4h3m
    pod/instana-zookeeper-1                                               1/1     Running   0          4h2m
    pod/instana-zookeeper-2                                               1/1     Running   0          4h2m
    
    NAME                                                               TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)                                        AGE
    service/chi-instana-local-0-0                                      ClusterIP      None            <none>          9000/TCP,8123/TCP,9009/TCP                     8m16s
    service/chi-instana-local-0-1                                      ClusterIP      None            <none>          9000/TCP,8123/TCP,9009/TCP                     7m35s
    service/clickhouse-instana                                         LoadBalancer   192.168.1.167   35.246.237.19   8123:32714/TCP,9000:31317/TCP                  8m11s
    service/clickhouse-operator-altinity-clickhouse-operator-metrics   ClusterIP      192.168.1.136   <none>          8888/TCP                                       70m
    service/instana-zookeeper-admin-server                             ClusterIP      192.168.1.126   <none>          8080/TCP                                       4h3m
    service/instana-zookeeper-client                                   ClusterIP      192.168.1.13    <none>          2181/TCP                                       4h3m
    service/instana-zookeeper-headless                                 ClusterIP      None            <none>          2181/TCP,2888/TCP,3888/TCP,7000/TCP,8080/TCP   4h3m
    
    NAME                                                               READY   UP-TO-DATE   AVAILABLE   AGE
    deployment.apps/clickhouse-operator-altinity-clickhouse-operator   1/1     1            1           70m
    
    NAME                                                                          DESIRED   CURRENT   READY   AGE
    replicaset.apps/clickhouse-operator-altinity-clickhouse-operator-86b5f9b579   1         1         1       70m
    
    NAME                                     READY   AGE
    statefulset.apps/chi-instana-local-0-0   1/1     8m50s
    statefulset.apps/chi-instana-local-0-1   1/1     8m9s
    statefulset.apps/instana-zookeeper       3/3     4h3m
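
    To confirm that the ClickHouse user can authenticate and that the storage policies from storage.xml are loaded, you can run a query in one of the ClickHouse pods. This check is a sketch; it assumes the pod and container names that are shown in this topic and the clear-text password that you chose earlier.

    kubectl exec -n instana-clickhouse chi-instana-local-0-0-0 -c instana-clickhouse -- \
      clickhouse-client --user=clickhouse-user --password="<USER_GENERATED_PASSWORD>" \
      --query="SELECT DISTINCT policy_name FROM system.storage_policies"

    The output includes the default policy and the logs_policy and logs_policy_v4 policies that are defined in the ClickHouseInstallation resource.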