Creating a ZooKeeper data store on Linux on IBM Z and LinuxONE

Install the ZooKeeper operator and set up the data store.

Before you begin

Make sure that you prepared your online and offline hosts to pull images from the external repository, and that the correct Helm repository is added.

For more information, see Preparing for data store installation.
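
If the Instana Helm repository is not yet added on your host, the following is a minimal sketch. The repository URL and the _ user name are assumptions based on the standard Instana self-hosted setup; confirm the exact values in Preparing for data store installation.

# Assumption: the repository URL and user name follow the standard Instana self-hosted setup.
helm repo add instana https://artifact-public.instana.io/artifactory/rel-helm-customer-virtual \
  --username _ \
  --password <download_key>
helm repo update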

ZooKeeper operator versions and image tags for deployment

The following images are required for the pinned operator and Helm chart versions.

Table 1. ZooKeeper operator versions and image tags

Platform: Linux® on IBM Z® and LinuxONE
Operator version: 0.2.15
Helm chart version: 1.0.0
Images with tags:
  - artifact-public.instana.io/self-hosted-images/3rd-party/operator/zookeeper:0.2.15_v0.18.0
  - artifact-public.instana.io/self-hosted-images/3rd-party/datastore/zookeeper:3.9.3_v0.15.0
  - artifact-public.instana.io/self-hosted-images/k8s/kubectl:v1.33.0_v0.5.0
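
To confirm that your download key can access these images before you start, you can optionally pull one of them on a workstation that has Docker and network access to artifact-public.instana.io. This check is not part of the installation procedure.

# Optional check: log in with the _ user name and your download key, then pull one image.
docker login artifact-public.instana.io -u _ -p <download_key>
docker pull artifact-public.instana.io/self-hosted-images/3rd-party/operator/zookeeper:0.2.15_v0.18.0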

Installing ZooKeeper online

Complete these steps to install the ZooKeeper data store.

  1. Create the instana-zookeeper namespace:

    kubectl create namespace instana-zookeeper
    
  2. Create image pull secrets for the instana-zookeeper namespace. Update the <download_key> value with your own download key.

    kubectl create secret docker-registry instana-registry --namespace instana-zookeeper \
      --docker-username=_ \
      --docker-password=<download_key> \
      --docker-server=artifact-public.instana.io
    
  3. Create a custom_values.yaml file and specify the toleration and affinity. Skip this step if you already created the file.

    tolerations:
    - key: node.instana.io/monitor
      operator: Equal
      effect: NoSchedule
      value: "true"
    affinity:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
            - matchExpressions:
                - key: node-role.kubernetes.io/monitor
                  operator: In
                  values:
                  - "true"
    
  4. Install the ZooKeeper operator.

    helm install zookeeper-operator instana/zookeeper-operator -n instana-zookeeper --version=1.0.0 --set image.repository=artifact-public.instana.io/self-hosted-images/3rd-party/operator/zookeeper  --set image.tag=0.2.15_v0.18.0 --set global.imagePullSecrets[0]=instana-registry --no-hooks -f custom_values.yaml
    
  5. To install the ZooKeeper instances, you need the instana-clickhouse namespace. Check whether it exists in your cluster, and create it if it does not.

    1. Check whether the instana-clickhouse namespace exists.
      kubectl get namespace | grep clickhouse
      
    2. If the instana-clickhouse namespace does not exist, create it now.
      kubectl create namespace instana-clickhouse
      
  6. Create image pull secrets for the instana-clickhouse namespace. Update the <download_key> value with your own download key.

    kubectl create secret docker-registry instana-registry \
      --namespace=instana-clickhouse \
      --docker-username=_ \
      --docker-password=<download_key> \
      --docker-server=artifact-public.instana.io
    
  7. Create a YAML file, for example zookeeper.yaml, with the ZooKeeper configuration.

     apiVersion: "zookeeper.pravega.io/v1beta1"
     kind: "ZookeeperCluster"
     metadata:
       name: "instana-zookeeper"
     spec:
       # For parameters and default values, see https://github.com/pravega/zookeeper-operator/tree/master/charts/zookeeper#configuration
       replicas: 3
       image:
         repository: artifact-public.instana.io/self-hosted-images/3rd-party/datastore/zookeeper
         tag: 3.9.3_v0.15.0
       pod:
         imagePullSecrets: [name: "instana-registry"]
         serviceAccountName: "zookeeper"
         tolerations:
         - key: node.instana.io/monitor
           operator: Equal
           effect: NoSchedule
           value: "true"
         affinity:
           nodeAffinity:
             requiredDuringSchedulingIgnoredDuringExecution:
               nodeSelectorTerms:
                 - matchExpressions:
                     - key: node-role.kubernetes.io/monitor
                       operator: In
                       values:
                       - "true"
         # Add the following securityContext snippet for Kubernetes offerings other than OCP.
         # securityContext:
         #   runAsUser: 1000
         #   fsGroup: 1000
       config:
         tickTime: 2000
         initLimit: 10
         syncLimit: 5
         maxClientCnxns: 0
         autoPurgeSnapRetainCount: 20
         autoPurgePurgeInterval: 1
       persistence:
         reclaimPolicy: Delete
         spec:
           resources:
             requests:
               storage: "10Gi"
    
  8. Complete the steps in Deploying and verifying ZooKeeper (online and offline).
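
Optionally, before you continue with the deployment steps, you can verify the results of the preceding steps. In the following sketch, the label and taint keys are taken from custom_values.yaml, the deployment name zookeeper-operator is inferred from the sample output later in this topic, and the CRD name follows the zookeeper.pravega.io API group that zookeeper.yaml uses; adjust the names if your environment differs.

# Optional check: nodes must carry the label that the nodeAffinity rule selects
# and the taint that the toleration matches.
kubectl get nodes -l node-role.kubernetes.io/monitor=true
kubectl get nodes -o custom-columns=NAME:.metadata.name,TAINTS:.spec.taints

# Optional check: wait until the operator deployment is rolled out.
kubectl -n instana-zookeeper rollout status deployment/zookeeper-operator

# Optional check: the operator must have registered the ZookeeperCluster CRD.
kubectl get crd zookeeperclusters.zookeeper.pravega.io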

Installing ZooKeeper offline

If you did not yet pull the ZooKeeper images from the external registry when you prepared for installation, pull them now. Run the following commands on your bastion host, and then copy the images to the Instana host in your air-gapped environment.

docker pull artifact-public.instana.io/self-hosted-images/3rd-party/operator/zookeeper:0.2.15_v0.18.0
docker pull artifact-public.instana.io/self-hosted-images/3rd-party/datastore/zookeeper:3.9.3_v0.15.0
docker pull artifact-public.instana.io/self-hosted-images/k8s/kubectl:v1.33.0_v0.5.0
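
To move the images into the air-gapped environment, one option is to export them to an archive on the bastion host and load them on the Instana host. This is a sketch of that approach; the archive file name is only an example.

# On the bastion host: export the pulled images into a tar archive (example file name).
docker save -o zookeeper-images.tar \
  artifact-public.instana.io/self-hosted-images/3rd-party/operator/zookeeper:0.2.15_v0.18.0 \
  artifact-public.instana.io/self-hosted-images/3rd-party/datastore/zookeeper:3.9.3_v0.15.0 \
  artifact-public.instana.io/self-hosted-images/k8s/kubectl:v1.33.0_v0.5.0

# On the Instana host: load the images after you copy the archive over.
docker load -i zookeeper-images.tar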

Complete the following steps on your Instana host.

  1. Retag the images to your internal image registry.

    docker tag artifact-public.instana.io/self-hosted-images/3rd-party/operator/zookeeper:0.2.15_v0.18.0 <internal-image-registry>/operator/zookeeper:0.2.15_v0.18.0
    docker tag artifact-public.instana.io/self-hosted-images/3rd-party/datastore/zookeeper:3.9.3_v0.15.0 <internal-image-registry>/zookeeper:3.9.3_v0.15.0
    docker tag artifact-public.instana.io/self-hosted-images/k8s/kubectl:v1.33.0_v0.5.0 <internal-image-registry>/self-hosted-images/k8s/kubectl:v1.33.0_v0.5.0
    
  2. Push the images to your internal image registry.

    docker push <internal-image-registry>/operator/zookeeper:0.2.15_v0.18.0
    docker push <internal-image-registry>/zookeeper:3.9.3_v0.15.0
    docker push <internal-image-registry>/self-hosted-images/k8s/kubectl:v1.33.0_v0.5.0
    
  3. Create the instana-zookeeper namespace.

    kubectl create namespace instana-zookeeper
    
  4. Optional: Create an image pull secret if your internal image registry needs authentication.

    kubectl create secret docker-registry <secret_name> --namespace instana-zookeeper \
    --docker-username=<registry_username> \
    --docker-password=<registry_password> \
    --docker-server=<internal-image-registry>:<internal-image-registry-port> \
    --docker-email=<registry_email>
    
  5. Install the ZooKeeper operator. If you created an image pull secret in the previous step, add --set imagePullSecrets[0].name="<internal-image-registry-pull-secret>" to the following command. If you do not have the zookeeper-operator-1.0.0.tgz chart archive yet, see the sketch after this procedure for one way to download it.

    helm install zookeeper-operator zookeeper-operator-1.0.0.tgz -n instana-zookeeper --version=1.0.0 --set image.repository=<internal-image-registry>/operator/zookeeper --set image.tag=0.2.15_v0.18.0 --no-hooks -f custom_values.yaml
    
  6. To install the ZooKeeper instances, you need the instana-clickhouse namespace. Check whether it exists in your cluster, and create it if it does not.

    1. Check whether the instana-clickhouse namespace exists.
      kubectl get namespace | grep clickhouse
      
    2. If the instana-clickhouse namespace does not exist, create it now.
      kubectl create namespace instana-clickhouse
      
  7. Create a YAML file, for example zookeeper.yaml, with the ZooKeeper configuration:

     apiVersion: "zookeeper.pravega.io/v1beta1"
     kind: "ZookeeperCluster"
     metadata:
       name: "instana-zookeeper"
     spec:
       # For parameters and default values, see https://github.com/pravega/zookeeper-operator/tree/master/charts/zookeeper#configuration
       replicas: 3
       image:
         repository: <internal-image-registry>/zookeeper
         tag: 3.9.3_v0.15.0
       pod:
         imagePullSecrets: [name: "<internal-registry-secret>"]
         serviceAccountName: "zookeeper"
         tolerations:
         - key: node.instana.io/monitor
           operator: Equal
           effect: NoSchedule
           value: "true"
         affinity:
           nodeAffinity:
             requiredDuringSchedulingIgnoredDuringExecution:
               nodeSelectorTerms:
                 - matchExpressions:
                     - key: node-role.kubernetes.io/monitor
                       operator: In
                       values:
                       - "true"
         # Add the following securityContext snippet for Kubernetes offerings other than OCP.
         # securityContext:
         #   runAsUser: 1000
         #   fsGroup: 1000
       config:
         tickTime: 2000
         initLimit: 10
         syncLimit: 5
         maxClientCnxns: 0
         autoPurgeSnapRetainCount: 20
         autoPurgePurgeInterval: 1
       persistence:
         reclaimPolicy: Delete
         spec:
           resources:
             requests:
               storage: "10Gi"
    
  8. Complete the steps in Deploying and verifying ZooKeeper (online and offline).
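
Step 5 in this procedure installs the operator from a local chart archive, zookeeper-operator-1.0.0.tgz. If you do not have that archive yet, the following sketch shows one way to download it on a host that has access to the Instana Helm repository; it assumes the same instana repository and chart version that the online installation uses. Copy the archive to your Instana host together with the images.

# On a host with access to the Instana Helm repository: download the pinned chart
# version as a .tgz archive.
helm pull instana/zookeeper-operator --version 1.0.0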

Deploying and verifying ZooKeeper (online and offline)

Complete these steps to deploy the ZooKeeper instance and create the data store.

  1. Deploy ZooKeeper.

    kubectl apply -f zookeeper.yaml -n instana-clickhouse
    

    ClickHouse needs a dedicated ZooKeeper cluster, so ZooKeeper is installed in the instana-clickhouse namespace.

  2. Check whether the ZooKeeper operator pod is running and is scheduled on the desired nodes.

kubectl get pods -n instana-zookeeper -o wide

A sample output is shown in the following example:

NAME                                  READY   STATUS    RESTARTS   AGE   IP              NODE                                           NOMINATED NODE   READINESS GATES
zookeeper-operator-6dcd4bdd74-7jhpn   1/1     Running   1          24d   10.254.12.198   worker2.instana-odf5.cp.fyre.ibm.com   <none>           <none>

  3. Verify whether ZooKeeper is working.

kubectl get all -n instana-clickhouse

See the following sample output.

NAME                      READY   STATUS    RESTARTS   AGE
pod/instana-zookeeper-0   1/1     Running   0          119s
pod/instana-zookeeper-1   1/1     Running   0          82s
pod/instana-zookeeper-2   1/1     Running   0          47s

NAME                                     TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                                        AGE
service/instana-zookeeper-admin-server   ClusterIP   192.168.1.126   <none>        8080/TCP                                       2m
service/instana-zookeeper-client         ClusterIP   192.168.1.13    <none>        2181/TCP                                       2m
service/instana-zookeeper-headless       ClusterIP   None            <none>        2181/TCP,2888/TCP,3888/TCP,7000/TCP,8080/TCP   2m

NAME                                 READY   AGE
statefulset.apps/instana-zookeeper   3/3     2m3s
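
Beyond the pod and service listing, you can optionally check the status that the operator reports on the custom resource and probe the ZooKeeper AdminServer. The resource name and the admin-server service and port are taken from this topic; the AdminServer /commands/ruok endpoint is a standard ZooKeeper feature, and the columns that kubectl prints depend on the operator version.

# Optional check: the operator reports replica readiness on the ZookeeperCluster resource.
kubectl get zookeepercluster instana-zookeeper -n instana-clickhouse

# Optional check: probe the ZooKeeper AdminServer through a local port-forward
# (run the port-forward in one terminal and the curl in another).
kubectl port-forward -n instana-clickhouse svc/instana-zookeeper-admin-server 8080:8080
curl http://localhost:8080/commands/ruok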