Creating a ZooKeeper data store on Linux on IBM Z and LinuxONE
Install the ZooKeeper operator and set up the data store.
- Before you begin
- ZooKeeper operator versions and image tags for deployment
- Installing ZooKeeper online
- Installing ZooKeeper offline
- Deploying and verifying ZooKeeper (online and offline)
Before you begin
Make sure that you prepared your online or offline host to pull images from the external repository. Also, ensure that the correct Helm repository is added.
For more information, see Preparing for data store installation.
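If the Helm repository is not added yet, you can add it now. The following is a minimal sketch; the repository alias instana matches the install commands in this topic, and <instana-helm-repo-url> is a placeholder for the repository URL from the preparation topic.

  # Add and refresh the Instana Helm chart repository.
  helm repo add instana <instana-helm-repo-url>
  helm repo update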
Installing ZooKeeper online
Complete these steps to install the ZooKeeper data store.
- Create the instana-zookeeper namespace:

  kubectl create namespace instana-zookeeper
- Create image pull secrets for the instana-zookeeper namespace. Update the <download_key> value with your own download key.

  kubectl create secret docker-registry instana-registry --namespace instana-zookeeper \
    --docker-username=_ \
    --docker-password=<download_key> \
    --docker-server=artifact-public.instana.io
- Install the ZooKeeper operator. An optional check that the operator is ready is sketched after these steps.

  helm install zookeeper-operator instana/zookeeper-operator -n instana-zookeeper --version=1.0.0 \
    --set image.repository=artifact-public.instana.io/self-hosted-images/z/ds-operator-images/zookeeper/zookeeper-operator \
    --set image.tag=0.2.15_v0.1.0 \
    --set global.imagePullSecrets[0]=instana-registry \
    --no-hooks
- To install the ZooKeeper instances, you need the instana-clickhouse namespace. Check whether the instana-clickhouse namespace exists in your cluster. If it does not exist, create it now.

  - Check whether the instana-clickhouse namespace exists:

    kubectl get namespace | grep clickhouse

  - If the instana-clickhouse namespace does not exist, create it now:

    kubectl create namespace instana-clickhouse
- Create image pull secrets for the instana-clickhouse namespace. Update the <download_key> value with your own download key.

  kubectl create secret docker-registry instana-registry \
    --namespace=instana-clickhouse \
    --docker-username=_ \
    --docker-password=<download_key> \
    --docker-server=artifact-public.instana.io
- Create a YAML file, for example zookeeper.yaml, with the ZooKeeper configuration.

  apiVersion: "zookeeper.pravega.io/v1beta1"
  kind: "ZookeeperCluster"
  metadata:
    name: "instana-zookeeper"
  spec:
    # For all params and defaults, see https://github.com/pravega/zookeeper-operator/tree/master/charts/zookeeper#configuration
    replicas: 3
    image:
      repository: "artifact-public.instana.io/self-hosted-images/z/ds-operator-images/zookeeper/zookeeper"
      tag: "3.8.3_v0.3.0"
      pullPolicy: "Always"
    pod:
      imagePullSecrets:
        - name: instana-registry
      serviceAccountName: zookeeper-operator
    config:
      tickTime: 2000
      initLimit: 10
      syncLimit: 5
      maxClientCnxns: 0
      autoPurgeSnapRetainCount: 20
      autoPurgePurgeInterval: 1
    persistence:
      reclaimPolicy: Delete
      spec:
        resources:
          requests:
            storage: "10Gi"
- Complete the steps in Deploying and verifying ZooKeeper (online and offline).
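Before you deploy the cluster, you can optionally confirm that the operator started and that the ZookeeperCluster custom resource type is available. This is a minimal sketch; the grep filter is only illustrative.

  # The operator pod in the instana-zookeeper namespace should be Running.
  kubectl get pods -n instana-zookeeper

  # The ZookeeperCluster custom resource definition that the operator uses should be listed.
  kubectl get crd | grep -i zookeeper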
Installing ZooKeeper offline
If you didn't yet pull the ZooKeeper images from the external registry when you prepared for installation, you can pull them now. Run the following commands on your bastion host. Then, copy the images to your Instana host that is in your air-gapped environment.
docker pull artifact-public.instana.io/self-hosted-images/z/ds-operator-images/zookeeper/zookeeper-operator:0.2.15_v0.1.0
docker pull artifact-public.instana.io/self-hosted-images/z/ds-operator-images/zookeeper/zookeeper:3.8.3_v0.3.0
docker pull artifact-public.instana.io/self-hosted-images/k8s/kubectl:v1.31.0_v0.1.0
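One way to move the images into the air-gapped environment is to export them to an archive, transfer the archive, and load it on the Instana host. The following is a sketch that assumes Docker is used on both hosts; the archive name zookeeper-images.tar is only an example.

  # On the bastion host: save the pulled images into one archive.
  docker save -o zookeeper-images.tar \
    artifact-public.instana.io/self-hosted-images/z/ds-operator-images/zookeeper/zookeeper-operator:0.2.15_v0.1.0 \
    artifact-public.instana.io/self-hosted-images/z/ds-operator-images/zookeeper/zookeeper:3.8.3_v0.3.0 \
    artifact-public.instana.io/self-hosted-images/k8s/kubectl:v1.31.0_v0.1.0

  # On the Instana host: load the archive before you retag and push the images.
  docker load -i zookeeper-images.tar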
Complete the following steps on your Instana host.
- Retag the images to your internal image registry.

  docker tag artifact-public.instana.io/self-hosted-images/z/ds-operator-images/zookeeper/zookeeper-operator:0.2.15_v0.1.0 <internal-image-registry>/zookeeper/zookeeper-operator:0.2.15_v0.1.0
  docker tag artifact-public.instana.io/self-hosted-images/z/ds-operator-images/zookeeper/zookeeper:3.8.3_v0.3.0 <internal-image-registry>/zookeeper/zookeeper:3.8.3_v0.3.0
  docker tag artifact-public.instana.io/self-hosted-images/k8s/kubectl:v1.31.0_v0.1.0 <internal-image-registry>/self-hosted-images/k8s/kubectl:v1.31.0_v0.1.0
- Push the images to your internal image registry.

  docker push <internal-image-registry>/zookeeper/zookeeper-operator:0.2.15_v0.1.0
  docker push <internal-image-registry>/zookeeper/zookeeper:3.8.3_v0.3.0
  docker push <internal-image-registry>/self-hosted-images/k8s/kubectl:v1.31.0_v0.1.0
- Create the instana-zookeeper namespace:

  kubectl create namespace instana-zookeeper
- Optional: Create an image pull secret if your internal image registry needs authentication.

  kubectl create secret docker-registry <secret_name> --namespace instana-zookeeper \
    --docker-username=<registry_username> \
    --docker-password=<registry_password> \
    --docker-server=<internal-image-registry>:<internal-image-registry-port> \
    --docker-email=<registry_email>
- Install the ZooKeeper operator. If you created an image pull secret in the previous step, add --set imagePullSecrets[0].name="<internal-image-registry-pull-secret>" to the following command.

  helm install zookeeper-operator zookeeper-operator-1.0.0.tgz -n instana-zookeeper --version=1.0.0 \
    --set image.repository=<internal-image-registry>/zookeeper/zookeeper-operator \
    --set image.tag=0.2.15_v0.1.0 \
    --no-hooks
- To install the ZooKeeper instances, you need the instana-clickhouse namespace. Check whether the instana-clickhouse namespace exists in your cluster. If it does not exist, create it now.

  - Check whether the instana-clickhouse namespace exists:

    kubectl get namespace | grep clickhouse

  - If the instana-clickhouse namespace does not exist, create it now:

    kubectl create namespace instana-clickhouse
- Create a YAML file, for example zookeeper.yaml, with the ZooKeeper configuration. If your internal image registry requires authentication, also reference the image pull secret in the pod section; a sketch follows these steps.

  apiVersion: "zookeeper.pravega.io/v1beta1"
  kind: "ZookeeperCluster"
  metadata:
    name: "instana-zookeeper"
  spec:
    # For all params and defaults, see https://github.com/pravega/zookeeper-operator/tree/master/charts/zookeeper#configuration
    replicas: 3
    image:
      repository: "<internal-image-registry>/zookeeper/zookeeper"
      tag: "3.8.3_v0.3.0"
      pullPolicy: "Always"
    pod:
      serviceAccountName: zookeeper-operator
    config:
      tickTime: 2000
      initLimit: 10
      syncLimit: 5
      maxClientCnxns: 0
      autoPurgeSnapRetainCount: 20
      autoPurgePurgeInterval: 1
    persistence:
      reclaimPolicy: Delete
      spec:
        resources:
          requests:
            storage: "10Gi"
- Complete the steps in Deploying and verifying ZooKeeper (online and offline).
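If you created an image pull secret for your internal registry earlier, reference it under spec.pod in zookeeper.yaml before you deploy. The following fragment is a sketch that mirrors the online configuration; replace <secret_name> with the name of the secret that you created.

    pod:
      imagePullSecrets:
        - name: <secret_name>   # the optional secret created for your internal registry
      serviceAccountName: zookeeper-operator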
Deploying and verifying ZooKeeper (online and offline)
Complete these steps to deploy the ZooKeeper instance and create the data store.
- Deploy ZooKeeper:

  kubectl apply -f zookeeper.yaml -n instana-clickhouse

  ClickHouse needs a dedicated ZooKeeper cluster, so ZooKeeper is installed in the instana-clickhouse namespace.

- Verify whether the ZooKeeper operator pod is running:

  kubectl get pods -n instana-zookeeper

  The following output is a sample output.

  NAME                                          READY   STATUS    RESTARTS   AGE
  instana-zookeeper-operator-7fccd8fd77-9lww5   1/1     Running   0          74s
- Verify whether ZooKeeper is working:

  kubectl get all -n instana-clickhouse

  See the following sample output.

  NAME                      READY   STATUS    RESTARTS   AGE
  pod/instana-zookeeper-0   1/1     Running   0          119s
  pod/instana-zookeeper-1   1/1     Running   0          82s
  pod/instana-zookeeper-2   1/1     Running   0          47s

  NAME                                     TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                                        AGE
  service/instana-zookeeper-admin-server   ClusterIP   192.168.1.126   <none>        8080/TCP                                       2m
  service/instana-zookeeper-client         ClusterIP   192.168.1.13    <none>        2181/TCP                                       2m
  service/instana-zookeeper-headless       ClusterIP   None            <none>        2181/TCP,2888/TCP,3888/TCP,7000/TCP,8080/TCP   2m

  NAME                                 READY   AGE
  statefulset.apps/instana-zookeeper   3/3     2m3s
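For a deeper health check, you can query the ZooKeeper AdminServer through the instana-zookeeper-admin-server service that is shown in the sample output. This is a minimal sketch run from a workstation with cluster access; it assumes that curl is available locally and uses the AdminServer stat command on port 8080.

  # Forward the AdminServer port to your workstation (leave this command running).
  kubectl port-forward -n instana-clickhouse service/instana-zookeeper-admin-server 8080:8080

  # In a second terminal, query the server status; a healthy member reports its mode (leader or follower) and connection details.
  curl http://localhost:8080/commands/stat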