Preparing to install data store operators on Linux on IBM Z and LinuxONE
To install a third-party data store, your cluster must have access to the supported version of the data store image.
To see the compatible data store versions for your Instana deployment, install the kubectl plug-in and run the kubectl instana --version command. For more information, see the Instana kubectl plug-in topic.
Preparing the cluster for installation
Before you install an Instana component, make sure that you taint and label the nodes that are intended for the installation. This step dedicates the nodes exclusively to Instana components, which ensures proper resource allocation and isolation.
Your setup must include a minimum of four worker nodes, each with 16 vCPUs and 64 GB of memory.
- Label the first four worker nodes, or nodes of your choice, by using the following oc commands:

  CLUSTER_NAME=$(hostname | cut -d . -f 2-)
  for NODE in 0 1 2 3
  do
    oc label node worker${NODE}.${CLUSTER_NAME} node-role.kubernetes.io/monitor="true"
    oc adm taint node worker${NODE}.${CLUSTER_NAME} node.instana.io/monitor="true":NoSchedule
  done
- Make sure that the nodes are tainted and labeled.
  Verify that the nodes are labeled correctly:
oc get nodes -l node-role.kubernetes.io/monitor=true
  The output looks like the following example:
  NAME                                   STATUS   ROLES            AGE    VERSION
  worker0.instana-odf5.cp.fyre.ibm.com   Ready    monitor,worker   5d5h   v1.29.6+aba1e8d
  worker1.instana-odf5.cp.fyre.ibm.com   Ready    monitor,worker   5d5h   v1.29.6+aba1e8d
  worker2.instana-odf5.cp.fyre.ibm.com   Ready    monitor,worker   5d5h   v1.29.6+aba1e8d
  worker3.instana-odf5.cp.fyre.ibm.com   Ready    monitor,worker   5d5h   v1.29.6+aba1e8d
  Verify that the nodes are tainted correctly:

  kubectl get nodes -o=jsonpath='{range .items[*]}{.metadata.name}{"\t"}{range .spec.taints[*]}{.key}{"="}{.value}{"\t"}{end}{"\n"}{end}' | grep monitor
  The expected output looks like the following example:

  worker0.instana-odf5.cp.fyre.ibm.com	node.instana.io/monitor=true
  worker1.instana-odf5.cp.fyre.ibm.com	node.instana.io/monitor=true
  worker2.instana-odf5.cp.fyre.ibm.com	node.instana.io/monitor=true
  worker3.instana-odf5.cp.fyre.ibm.com	node.instana.io/monitor=true
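Rather than eyeballing the output, you can count the tainted nodes. The following sketch runs the check against sample output captured from the taint query; in a live cluster, pipe the kubectl command itself into the same grep.

```shell
# Sample output captured from the taint query; in a live cluster,
# pipe the kubectl command into the grep instead of this variable.
SAMPLE_OUTPUT='worker0.instana-odf5.cp.fyre.ibm.com	node.instana.io/monitor=true
worker1.instana-odf5.cp.fyre.ibm.com	node.instana.io/monitor=true
worker2.instana-odf5.cp.fyre.ibm.com	node.instana.io/monitor=true
worker3.instana-odf5.cp.fyre.ibm.com	node.instana.io/monitor=true'

# Count how many nodes carry the Instana monitor taint.
TAINTED_COUNT=$(printf '%s\n' "$SAMPLE_OUTPUT" | grep -c 'node.instana.io/monitor=true')

if [ "$TAINTED_COUNT" -eq 4 ]; then
  echo "OK: all 4 nodes are tainted"
else
  echo "WARNING: only $TAINTED_COUNT of 4 nodes are tainted"
fi
```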
Preparing for online installation
Prepare for the installation of the data store operators in an online environment.
- Install Helm:

  curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3
  chmod 700 get_helm.sh
  ./get_helm.sh
- Make sure that cert-manager, which is used to automatically provision the secret by default, is installed in your cluster. Skip this step if cert-manager is already installed in your cluster. To install cert-manager, run the following command:

  kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.7.1/cert-manager.yaml
- Make sure that you set a default storage class on the cluster on which you are installing the data stores. The storage class must provide ReadWriteMany (RWX) or ReadWriteOnce (RWO) access mode. To verify whether a default storage class is set in your cluster, run the following command:

  kubectl get storageclass -o=jsonpath='{.items[?(@.metadata.annotations.storageclass\.kubernetes\.io\/is-default-class=="true")].metadata.name}'

  If the command does not return a value, set a default storage class by running the following command:

  kubectl patch storageclass <storageclass_name> -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
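The check and the patch can be combined into one guarded step. This is a sketch only: `DEFAULT_SC` stands in for the result of the jsonpath query above, the storage class name `managed-nfs-storage` is a hypothetical placeholder, and the patch command is printed as a dry run rather than executed.

```shell
# Result of the default-storage-class query; an empty value means no
# default is set. In a live cluster, populate this from the kubectl
# jsonpath command shown above.
DEFAULT_SC=""

if [ -z "$DEFAULT_SC" ]; then
  # Hypothetical storage class name; replace with one from `kubectl get storageclass`.
  SC_NAME="managed-nfs-storage"
  # Dry run: print the patch command instead of executing it.
  echo "kubectl patch storageclass ${SC_NAME} -p '{\"metadata\": {\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"}}}'"
else
  echo "Default storage class already set: ${DEFAULT_SC}"
fi
```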
- Add and update the Helm repo for third-party operators. Use your Instana download key as the password value.

  helm repo add instana https://artifact-public.instana.io/artifactory/rel-helm-customer-virtual --username=_ --password=<download_key>
  helm repo update
Preparing for offline installation
Prepare for an offline (air-gapped) installation.
- Prepare a bastion host that can access both the internet and your internal image registry.
- Install Helm on the bastion host:

  curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3
  chmod 700 get_helm.sh
  ./get_helm.sh
- Add the operator Helm chart repo. Use your Instana download key as the password value.

  helm repo add instana https://artifact-public.instana.io/artifactory/rel-helm-customer-virtual --username=_ --password=<download_key>
  helm repo update
- Download the Helm charts:

  helm pull instana/ibm-clickhouse-operator --version=v0.1.2
  helm pull instana/zookeeper-operator --version=1.0.0
  helm pull instana/strimzi-kafka-operator --version=0.46.0
  helm pull instana/eck-operator --version=3.0.0
  helm pull instana/cloudnative-pg --version=0.24.0
  helm pull instana/cass-operator --version=0.57.4
  helm pull instana/cert-manager --version=1.13.2
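The seven chart pulls can also be driven from a single loop. The following sketch prints each helm pull command as a dry run; remove the echo to execute the pulls.

```shell
# Chart:version pairs, as listed in the download step above.
CHARTS="
ibm-clickhouse-operator:v0.1.2
zookeeper-operator:1.0.0
strimzi-kafka-operator:0.46.0
eck-operator:3.0.0
cloudnative-pg:0.24.0
cass-operator:0.57.4
cert-manager:1.13.2
"

# Dry run: print each helm pull command; drop the echo to execute.
for ENTRY in $CHARTS; do
  CHART=${ENTRY%%:*}     # part before the first colon
  VERSION=${ENTRY#*:}    # part after the first colon
  echo helm pull "instana/${CHART}" --version="${VERSION}"
done
```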
- Pull the operator images.
  - Cassandra

    docker pull artifact-public.instana.io/self-hosted-images/3rd-party/operator/cass-operator:1.24.0_v0.25.0
    docker pull artifact-public.instana.io/self-hosted-images/3rd-party/datastore/system-logger:1.24.0_v0.12.0
    docker pull artifact-public.instana.io/self-hosted-images/3rd-party/datastore/k8ssandra-client:0.7.0_v0.16.0
    docker pull artifact-public.instana.io/self-hosted-images/3rd-party/datastore/cassandra:4.1.8_v0.23.0

  - ClickHouse

    docker pull artifact-public.instana.io/clickhouse-operator:v0.1.2
    docker pull artifact-public.instana.io/clickhouse-openssl:24.8.12.28-5-lts-ibm

  - Elasticsearch

    docker pull artifact-public.instana.io/self-hosted-images/3rd-party/operator/elasticsearch:3.0.0_v0.20.0
    docker pull artifact-public.instana.io/self-hosted-images/3rd-party/datastore/elasticsearch:8.17.2_v0.18.0

  - Kafka

    docker pull artifact-public.instana.io/self-hosted-images/3rd-party/operator/strimzi:0.46.0
    docker pull artifact-public.instana.io/self-hosted-images/3rd-party/datastore/kafka:0.46.0_v0.18.0

  - PostgreSQL by using CloudNativePG

    docker pull artifact-public.instana.io/self-hosted-images/3rd-party/operator/cloudnative-pg:v1.26.0_v0.17.0
    docker pull artifact-public.instana.io/self-hosted-images/3rd-party/datastore/cnpg-containers:15_v0.19.0

  - ZooKeeper

    docker pull artifact-public.instana.io/self-hosted-images/3rd-party/operator/zookeeper:0.2.15_v0.18.0
    docker pull artifact-public.instana.io/self-hosted-images/3rd-party/datastore/zookeeper:3.9.3_v0.15.0
    docker pull artifact-public.instana.io/self-hosted-images/k8s/kubectl:v1.33.0_v0.5.0
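After pulling, the images must be retagged for your internal registry before you push them there. A minimal sketch, assuming a hypothetical registry `registry.example.internal:5000` (replace with your own); it prints the tag-and-push commands as a dry run and lists only two of the images above as an illustration.

```shell
# Hypothetical internal registry; replace with your own.
INTERNAL_REGISTRY="registry.example.internal:5000"

# Two of the images pulled above; extend the list to cover all of them.
IMAGES="
artifact-public.instana.io/self-hosted-images/3rd-party/operator/cass-operator:1.24.0_v0.25.0
artifact-public.instana.io/self-hosted-images/3rd-party/datastore/cassandra:4.1.8_v0.23.0
"

# Dry run: print the retag and push commands; drop the echo to execute.
for IMAGE in $IMAGES; do
  # Keep the repository path, but swap the registry host.
  TARGET="${INTERNAL_REGISTRY}/${IMAGE#artifact-public.instana.io/}"
  echo docker tag "$IMAGE" "$TARGET"
  echo docker push "$TARGET"
done
```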
- If you are using your bastion host as the Instana host in your air-gapped environment, you do not need to complete the following steps. However, if your bastion host and the air-gapped host are different, complete these steps:
  - On your bastion host, download the Helm binary for the operating system of your air-gapped host. For the available binary files, see Installation and Upgrading. See the following example command:

    wget https://get.helm.sh/helm-v3.15.2-linux-s390x.tar.gz
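The download URL embeds the target architecture (s390x in the example above). If you script the download, you can derive the architecture suffix from the air-gapped host's uname output; this sketch assumes the Helm version shown above and covers only a few common architecture mappings.

```shell
# Helm version from the example above; check get.helm.sh for newer releases.
HELM_VERSION="v3.15.2"

# Map the host's architecture to Helm's release naming.
ARCH=$(uname -m)
case "$ARCH" in
  s390x)   HELM_ARCH="s390x" ;;
  x86_64)  HELM_ARCH="amd64" ;;
  aarch64) HELM_ARCH="arm64" ;;
  *)       echo "Unhandled architecture: $ARCH" >&2; HELM_ARCH="$ARCH" ;;
esac

HELM_URL="https://get.helm.sh/helm-${HELM_VERSION}-linux-${HELM_ARCH}.tar.gz"
echo "$HELM_URL"
```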
  - Copy the Helm binary file, operator images, and Helm charts from your bastion host to the host that is in your air-gapped environment.
  - Install Helm on the air-gapped host. Run the following commands from the location of the Helm binary file:

    tar -xvzf helm-v3.15.2-linux-s390x.tar.gz
    mv linux-s390x/helm /usr/local/bin/helm
- Create the data stores. For the commands, see Installing the data stores.