Creating a ZooKeeper data store on Linux x86_64

Install the ZooKeeper operator and set up the data store.

Before you begin

Make sure that your online or offline (air-gapped) host is prepared to pull images from the external repository, and that the correct Helm repository is added.

For more information, see Preparing for data store installation.
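
To confirm that the repository is available before you continue, you can list the configured Helm repositories. This is an optional check; it assumes the repository alias instana that the install command later in this topic uses:

helm repo list | grep instana
helm search repo instana/zookeeper-operator --versions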

ZooKeeper operator versions and image tags for deployment

The following images are required for the pinned Helm chart and operator versions.

Table 1. ZooKeeper operator versions and image tags

Platform:            Linux® x86_64
Operator version:    0.2.15
Helm chart version:  1.0.0
Images with tags:    artifact-public.instana.io/self-hosted-images/3rd-party/operator/zookeeper:0.2.15_v0.12.0
                     artifact-public.instana.io/self-hosted-images/3rd-party/datastore/zookeeper:3.8.4_v0.13.0
                     artifact-public.instana.io/self-hosted-images/k8s/kubectl:v1.31.0_v0.1.0

Installing ZooKeeper online

Complete these steps to install the ZooKeeper data store.

  1. Create the instana-zookeeper namespace.

    kubectl create namespace instana-zookeeper
    
  2. Create image pull secrets for the instana-zookeeper namespace. Update the <download_key> value with your own download key.

    kubectl create secret docker-registry instana-registry --namespace instana-zookeeper \
      --docker-username=_ \
      --docker-password=<download_key> \
      --docker-server=artifact-public.instana.io
    
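    To confirm that the secret exists before you continue, you can run an optional check:

    kubectl get secret instana-registry -n instana-zookeeper
    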
  3. Install the ZooKeeper operator.

    helm install zookeeper-operator instana/zookeeper-operator \
      -n instana-zookeeper --create-namespace \
      --version=1.0.0 \
      --set image.repository=artifact-public.instana.io/self-hosted-images/3rd-party/operator/zookeeper \
      --set image.tag=0.2.15_v0.12.0 \
      --set global.imagePullSecrets='{instana-registry}'
    
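    To verify that the operator release was installed and its pod is starting, you can run these optional checks:

    helm list -n instana-zookeeper
    kubectl get pods -n instana-zookeeper
    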
  4. To install the ZooKeeper instances, you need the instana-clickhouse namespace. Check whether it exists in your cluster, and create it if it does not. (A one-line alternative is shown after the substeps.)

    1. Check whether the instana-clickhouse namespace exists.
      kubectl get namespace | grep clickhouse
      
    2. If the instana-clickhouse namespace does not exist, create it now.
      kubectl create namespace instana-clickhouse
      
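    Alternatively, both substeps can be collapsed into a single idempotent command that creates the namespace only if it is missing (a standard kubectl pattern):

    kubectl create namespace instana-clickhouse --dry-run=client -o yaml | kubectl apply -f -
    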
  5. Create image pull secrets for the instana-clickhouse namespace. Before you create the secret, update the <download_key> value with your own download key.

    kubectl create secret docker-registry instana-registry \
      --namespace=instana-clickhouse \
      --docker-username=_ \
      --docker-password=<download_key> \
      --docker-server=artifact-public.instana.io
    
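    To confirm that the secret points at the expected registry, you can decode it (an optional check; the backslash escapes the dot in the key name for kubectl's jsonpath):

    kubectl get secret instana-registry -n instana-clickhouse \
      -o jsonpath='{.data.\.dockerconfigjson}' | base64 --decode
    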
  6. Create a YAML file, for example zookeeper.yaml, with the ZooKeeper configuration:

    apiVersion: "zookeeper.pravega.io/v1beta1"
    kind: "ZookeeperCluster"
    metadata:
      name: "instana-zookeeper"
    spec:
      # For parameters and default values, see https://github.com/pravega/zookeeper-operator/tree/master/charts/zookeeper#configuration
      replicas: 3
      image:
        repository: artifact-public.instana.io/self-hosted-images/3rd-party/datastore/zookeeper
        tag: 3.8.4_v0.13.0
      pod:
        imagePullSecrets: [name: "instana-registry"]
        serviceAccountName: "zookeeper"
        # Add the following securityContext snippet for Kubernetes offerings other than OCP.
        # securityContext:
        #   runAsUser: 1000
        #   fsGroup: 1000
      config:
        tickTime: 2000
        initLimit: 10
        syncLimit: 5
        maxClientCnxns: 0
        autoPurgeSnapRetainCount: 20
        autoPurgePurgeInterval: 1
      persistence:
        reclaimPolicy: Delete
        spec:
          resources:
            requests:
              storage: "10Gi"
    
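    Before you continue, you can confirm that the operator registered the ZookeeperCluster CRD and validate the manifest against the API server. These optional checks assume the Pravega operator's API group, which matches the apiVersion in the manifest:

    kubectl get crd zookeeperclusters.zookeeper.pravega.io
    kubectl apply -f zookeeper.yaml -n instana-clickhouse --dry-run=server
    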
  7. Complete the steps in Deploying and verifying ZooKeeper (online and offline).

Installing ZooKeeper offline

If you did not yet pull the ZooKeeper images from the external registry when you prepared for installation, pull them now. Run the following commands on your bastion host, and then copy the images to the Instana host in your air-gapped environment.

docker pull artifact-public.instana.io/self-hosted-images/3rd-party/operator/zookeeper:0.2.15_v0.12.0
docker pull artifact-public.instana.io/self-hosted-images/3rd-party/datastore/zookeeper:3.8.4_v0.13.0
docker pull artifact-public.instana.io/self-hosted-images/k8s/kubectl:v1.31.0_v0.1.0
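
One way to transfer the images to the air-gapped host is a docker save and docker load round trip; the archive name here is illustrative:

# On the bastion host: bundle the three images into one archive.
docker save \
  artifact-public.instana.io/self-hosted-images/3rd-party/operator/zookeeper:0.2.15_v0.12.0 \
  artifact-public.instana.io/self-hosted-images/3rd-party/datastore/zookeeper:3.8.4_v0.13.0 \
  artifact-public.instana.io/self-hosted-images/k8s/kubectl:v1.31.0_v0.1.0 \
  | gzip > zookeeper-images.tar.gz

# On the Instana host, after you copy the archive:
docker load < zookeeper-images.tar.gz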

Complete the following steps on your Instana host.

  1. Retag the images for your internal image registry.

    docker tag artifact-public.instana.io/self-hosted-images/3rd-party/operator/zookeeper:0.2.15_v0.12.0 <internal-image-registry>/operator/zookeeper:0.2.15_v0.12.0
    docker tag artifact-public.instana.io/self-hosted-images/3rd-party/datastore/zookeeper:3.8.4_v0.13.0 <internal-image-registry>/zookeeper:3.8.4_v0.13.0
    docker tag artifact-public.instana.io/self-hosted-images/k8s/kubectl:v1.31.0_v0.1.0 <internal-image-registry>/self-hosted-images/k8s/kubectl:v1.31.0_v0.1.0
    
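    To confirm that the retagged images exist locally before you push them, you can run an optional check:

    docker images | grep zookeeper
    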
  2. Push the images to your internal image registry.

    docker push <internal-image-registry>/operator/zookeeper:0.2.15_v0.12.0
    docker push <internal-image-registry>/zookeeper:3.8.4_v0.13.0
    docker push <internal-image-registry>/self-hosted-images/k8s/kubectl:v1.31.0_v0.1.0
    
  3. Create the instana-zookeeper namespace.

    kubectl create namespace instana-zookeeper
    
  4. Optional: Create an image pull secret if your internal image registry needs authentication.

    kubectl create secret docker-registry <secret_name> --namespace instana-zookeeper \
      --docker-username=<registry_username> \
      --docker-password=<registry_password> \
      --docker-server=<internal-image-registry>:<internal-image-registry-port> \
      --docker-email=<registry_email>
    
  5. Install the ZooKeeper operator. If you created an image pull secret in the previous step, add --set imagePullSecrets[0].name="<internal-image-registry-pull-secret>" to the following command (a complete example is shown after the command).

    helm install zookeeper-operator zookeeper-operator-1.0.0.tgz \
      -n instana-zookeeper --version=1.0.0 \
      --set image.repository=<internal-image-registry>/operator/zookeeper \
      --set image.tag=0.2.15_v0.12.0
    
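    For example, if you created the pull secret in step 4, the complete command looks like the following; the secret name is a placeholder:

    helm install zookeeper-operator zookeeper-operator-1.0.0.tgz \
      -n instana-zookeeper --version=1.0.0 \
      --set image.repository=<internal-image-registry>/operator/zookeeper \
      --set image.tag=0.2.15_v0.12.0 \
      --set 'imagePullSecrets[0].name=<internal-image-registry-pull-secret>'
    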
  6. To install the ZooKeeper instances, you need the instana-clickhouse namespace. Check whether it exists in your cluster, and create it if it does not.

    1. Check whether the instana-clickhouse namespace exists.
      kubectl get namespace | grep clickhouse
      
    2. If the instana-clickhouse namespace does not exist, create it now.
      kubectl create namespace instana-clickhouse
      
  7. Create a YAML file, for example zookeeper.yaml, with the ZooKeeper configuration:

    apiVersion: "zookeeper.pravega.io/v1beta1"
    kind: "ZookeeperCluster"
    metadata:
      name: "instana-zookeeper"
    spec:
      # For parameters and default values, see https://github.com/pravega/zookeeper-operator/tree/master/charts/zookeeper#configuration
      replicas: 3
      image:
        repository: <internal-image-registry>/zookeeper
        tag: 3.8.4_v0.13.0
      pod:
        imagePullSecrets: [name: "<internal-registry-secret>"]
        serviceAccountName: "zookeeper"
        # Add the following securityContext snippet for Kubernetes offerings other than OCP.
        # securityContext:
        #   runAsUser: 1000
        #   fsGroup: 1000
      config:
        tickTime: 2000
        initLimit: 10
        syncLimit: 5
        maxClientCnxns: 0
        autoPurgeSnapRetainCount: 20
        autoPurgePurgeInterval: 1
      persistence:
        reclaimPolicy: Delete
        spec:
          resources:
            requests:
              storage: "10Gi"
    
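    If your cluster has no default StorageClass, you can pin one explicitly in the persistence block. The <your-storage-class> value is a placeholder; persistence.spec follows the standard PersistentVolumeClaim spec schema:

    persistence:
      reclaimPolicy: Delete
      spec:
        storageClassName: "<your-storage-class>"
        resources:
          requests:
            storage: "10Gi"
    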
  8. Complete the steps in Deploying and verifying ZooKeeper (online and offline).

Deploying and verifying ZooKeeper (online and offline)

Complete these steps to deploy the ZooKeeper instance and create the data store.

  1. Deploy ZooKeeper.

    kubectl apply -f zookeeper.yaml -n instana-clickhouse
    

    ClickHouse needs a dedicated ZooKeeper cluster, so ZooKeeper is installed in the instana-clickhouse namespace.

  2. Verify whether the ZooKeeper operator pod is running.

    kubectl get pods -n instana-zookeeper
    

    Sample output:

    NAME                                          READY   STATUS    RESTARTS   AGE
    instana-zookeeper-operator-7fccd8fd77-9lww5   1/1     Running   0          74s
    

  3. Verify whether ZooKeeper is working.

    kubectl get all -n instana-clickhouse
    

    Sample output:

    NAME                      READY   STATUS    RESTARTS   AGE
    pod/instana-zookeeper-0   1/1     Running   0          119s
    pod/instana-zookeeper-1   1/1     Running   0          82s
    pod/instana-zookeeper-2   1/1     Running   0          47s
    
    NAME                                     TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                                        AGE
    service/instana-zookeeper-admin-server   ClusterIP   192.168.1.126   <none>        8080/TCP                                       2m
    service/instana-zookeeper-client         ClusterIP   192.168.1.13    <none>        2181/TCP                                       2m
    service/instana-zookeeper-headless       ClusterIP   None            <none>        2181/TCP,2888/TCP,3888/TCP,7000/TCP,8080/TCP   2m
    
    NAME                                 READY   AGE
    statefulset.apps/instana-zookeeper   3/3     2m3s
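
    For a functional health check, you can query the ZooKeeper AdminServer through the admin service that is shown in the output. A healthy server typically responds with {"command":"ruok","error":null}:

    kubectl port-forward -n instana-clickhouse svc/instana-zookeeper-admin-server 8080:8080 &
    curl http://localhost:8080/commands/ruok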