Installing Instana on AWS GovCloud in an air-gapped environment
You can install Instana on an OpenShift Container Platform cluster that is running in GovCloud (US) in an air-gapped environment.
Prerequisites
Make sure that the following prerequisites are met:
Procedure
To install Instana on an OpenShift Container Platform cluster that is running in GovCloud in an air-gapped environment, complete the following steps:
-
Install the Instana kubectl plug-in. For more information, see Installing the Instana kubectl plug-in.
-
Install Helm:
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh
-
Verify the Instana and Operator versions:
kubectl instana -v
-
Retrieve the list of images for the Instana backend:
kubectl instana images > images.txt
-
To access the Instana Artifactory, run one of the following commands, depending on whether you use Skopeo or Docker:
-
Skopeo:
skopeo login -u _ -p <downloadKey> artifact-public.instana.io
-
Docker:
docker login -u _ -p <downloadKey> artifact-public.instana.io
-
-
Copy the images to the internal Red Hat OpenShift image registry:
-
List the Instana images required for installation:
kubectl instana images > images.txt
-
Access the internal Red Hat OpenShift image registry:
oc port-forward svc/image-registry -n openshift-image-registry --address=0.0.0.0 5000:5000
-
Log on to the internal image repository. Run one of the following commands depending on whether you use Skopeo or Docker:
-
With Skopeo:
skopeo login -u openshift -p $(oc whoami -t) localhost:5000
-
With Docker:
docker login -u _ -p <downloadKey> artifact-public.instana.io
docker login -u openshift -p $(oc whoami -t) localhost:5000
-
-
Create the namespaces for data stores:
oc create ns instana-zookeeper
oc create ns instana-kafka
oc create ns instana-clickhouse
oc create ns instana-postgres
oc create ns instana-cassandra
oc create ns instana-elastic
oc create ns instana-core
oc create ns instana-units
oc create ns instana-operator
oc create ns cert-manager
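Equivalently, the namespaces can be created in a short loop (a minimal sketch that uses the same list):
# Create all namespaces that the data stores and Instana components need.
for ns in instana-zookeeper instana-kafka instana-clickhouse instana-postgres instana-cassandra instana-elastic instana-core instana-units instana-operator cert-manager; do
  oc create ns "$ns"
done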
-
Copy the images to your own internal image registry from your bastion host. Complete one of the following steps depending on whether you use Skopeo or Docker:
-
Skopeo:
-
Copy the Zookeeper images:
skopeo copy --dest-tls-verify=false docker://artifact-public.instana.io/self-hosted-images/3rd-party/zookeeper-operator:0.2.15_v0.2.0 docker://localhost:5000/instana-zookeeper/zookeeper-operator:0.2.15_v0.2.0
skopeo copy --dest-tls-verify=false docker://artifact-public.instana.io/self-hosted-images/3rd-party/zookeeper:3.8.3_v0.2.0 docker://localhost:5000/instana-zookeeper/zookeeper:3.8.3_v0.2.0
skopeo copy --dest-tls-verify=false docker://lachlanevenson/k8s-kubectl:v1.23.2 docker://localhost:5000/instana-zookeeper/k8s-kubectl:v1.23.2
-
Copy the Strimzi images:
skopeo copy --dest-tls-verify=false docker://artifact-public.instana.io/self-hosted-images/3rd-party/strimzi/operator:0.38.0_v0.3.0 docker://localhost:5000/instana-kafka/operator:0.38.0_v0.3.0
skopeo copy --dest-tls-verify=false docker://artifact-public.instana.io/self-hosted-images/3rd-party/strimzi/kafka:3.6.0_v0.3.0 docker://localhost:5000/instana-kafka/kafka:3.6.0_v0.3.0
-
Copy the Elasticsearch images:
skopeo copy --dest-tls-verify=false docker://artifact-public.instana.io/self-hosted-images/3rd-party/elasticsearch-operator:2.9.0_v0.3.0 docker://localhost:5000/instana-elastic/elasticsearch-operator:2.9.0_v0.3.0
skopeo copy --dest-tls-verify=false docker://artifact-public.instana.io/self-hosted-images/3rd-party/elasticsearch:7.17.14_v0.2.0 docker://localhost:5000/instana-elastic/elasticsearch:7.17.14_v0.2.0
-
Copy the CloudNativePG (Postgres) images:
skopeo copy --dest-tls-verify=false docker://artifact-public.instana.io/self-hosted-images/3rd-party/cloudnative-pg-operator:1.21.1_v0.1.0 docker://localhost:5000/instana-postgres/cloudnative-pg-operator:1.21.1_v0.1.0
skopeo copy --dest-tls-verify=false docker://artifact-public.instana.io/self-hosted-images/3rd-party/cnpg-containers:15_v0.1.0 docker://localhost:5000/instana-postgres/cnpg-containers:15_v0.1.0
-
Copy Cassandra images:
skopeo copy --dest-tls-verify=false docker://artifact-public.instana.io/self-hosted-images/3rd-party/cass-operator:1.18.2_v0.1.0 docker://localhost:5000/instana-cassandra/cass-operator:1.18.2_v0.1.0
skopeo copy --dest-tls-verify=false docker://artifact-public.instana.io/self-hosted-images/3rd-party/k8ssandra-management-api-for-apache-cassandra:4.1.2_v0.2.0 docker://localhost:5000/instana-cassandra/k8ssandra-management-api-for-apache-cassandra:4.1.2_v0.2.0
skopeo copy --dest-tls-verify=false docker://artifact-public.instana.io/self-hosted-images/3rd-party/system-logger:1.18.2_v0.1.0 docker://localhost:5000/instana-cassandra/system-logger:1.18.2_v0.1.0
skopeo copy --dest-tls-verify=false docker://artifact-public.instana.io/self-hosted-images/3rd-party/k8ssandra-k8ssandra-client:0.2.2_v0.1.0 docker://localhost:5000/instana-cassandra/k8ssandra-k8ssandra-client:0.2.2_v0.1.0
-
Copy Clickhouse images:
skopeo copy --dest-tls-verify=false docker://artifact-public.instana.io/clickhouse-operator:v0.1.2 docker://localhost:5000/instana-clickhouse/clickhouse-operator:v0.1.2
skopeo copy --dest-tls-verify=false docker://artifact-public.instana.io/clickhouse-openssl:23.8.9.54-1-lts-ibm docker://localhost:5000/instana-clickhouse/clickhouse-openssl:23.8.9.54-1-lts-ibm
-
Copy the cert-manager images:
skopeo copy --dest-tls-verify=false docker://quay.io/jetstack/cert-manager-controller:v1.13.2 docker://localhost:5000/cert-manager/cert-manager-controller:v1.13.2
skopeo copy --dest-tls-verify=false docker://quay.io/jetstack/cert-manager-webhook:v1.13.2 docker://localhost:5000/cert-manager/cert-manager-webhook:v1.13.2
skopeo copy --dest-tls-verify=false docker://quay.io/jetstack/cert-manager-cainjector:v1.13.2 docker://localhost:5000/cert-manager/cert-manager-cainjector:v1.13.2
skopeo copy --dest-tls-verify=false docker://quay.io/jetstack/cert-manager-acmesolver:v1.13.2 docker://localhost:5000/cert-manager/cert-manager-acmesolver:v1.13.2
skopeo copy --dest-tls-verify=false docker://quay.io/jetstack/cert-manager-ctl:v1.13.2 docker://localhost:5000/cert-manager/cert-manager-ctl:v1.13.2
-
Copy the Instana backend and Operator images:
# Copy each backend image listed in images.txt; SOURCE is the last path segment (image name and tag).
while read p; do
  SOURCE=`echo "$p" | tac -s'/' | head -1`
  echo $SOURCE
  skopeo copy --dest-tls-verify=false docker://artifact-public.instana.io/backend/$SOURCE docker://localhost:5000/instana-core/$SOURCE
done<images.txt

# The operator image (assumed to be the last line of images.txt) goes to the instana-operator project.
OPERATOR_IMAGE=`echo "$(cat images.txt | tail -1)" | tac -s'/' | head -1`
skopeo copy --dest-tls-verify=false docker://artifact-public.instana.io/infrastructure/$OPERATOR_IMAGE docker://localhost:5000/instana-operator/$OPERATOR_IMAGE
-
-
Docker:
-
Copy Zookeeper images:
-
Pull images:
docker pull artifact-public.instana.io/self-hosted-images/3rd-party/zookeeper-operator:0.2.15_v0.2.0
docker pull artifact-public.instana.io/self-hosted-images/3rd-party/zookeeper:3.8.3_v0.2.0
docker pull lachlanevenson/k8s-kubectl:v1.23.2
-
Rename images:
docker tag artifact-public.instana.io/self-hosted-images/3rd-party/zookeeper-operator:0.2.15_v0.2.0 localhost:5000/instana-zookeeper/zookeeper-operator:0.2.15_v0.2.0
docker tag artifact-public.instana.io/self-hosted-images/3rd-party/zookeeper:3.8.3_v0.2.0 localhost:5000/instana-zookeeper/zookeeper:3.8.3_v0.2.0
docker tag lachlanevenson/k8s-kubectl:v1.23.2 localhost:5000/instana-zookeeper/k8s-kubectl:v1.23.2
-
Push images:
docker push localhost:5000/instana-zookeeper/zookeeper-operator:0.2.15_v0.2.0
docker push localhost:5000/instana-zookeeper/zookeeper:3.8.3_v0.2.0
docker push localhost:5000/instana-zookeeper/k8s-kubectl:v1.23.2
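The pull, tag, and push pattern is the same for the remaining images that are listed in the Skopeo steps (Strimzi, Elasticsearch, CloudNativePG, Cassandra, ClickHouse, cert-manager, and the Instana backend and operator images). A minimal sketch of a helper that repeats the pattern for any source and target pair; the two Strimzi image references are taken from the Skopeo steps above:
# Sketch: pull from the source registry, retag for the internal registry, and push.
copy_image() {
  SRC="$1"   # source image reference
  DST="$2"   # target reference in the internal registry
  docker pull "$SRC"
  docker tag "$SRC" "$DST"
  docker push "$DST"
}

copy_image artifact-public.instana.io/self-hosted-images/3rd-party/strimzi/operator:0.38.0_v0.3.0 localhost:5000/instana-kafka/operator:0.38.0_v0.3.0
copy_image artifact-public.instana.io/self-hosted-images/3rd-party/strimzi/kafka:3.6.0_v0.3.0 localhost:5000/instana-kafka/kafka:3.6.0_v0.3.0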
-
-
-
-
-
Install Helm Charts:
-
Add the Instana Helm repository:
helm repo add instana https://artifact-public.instana.io/artifactory/rel-helm-customer-virtual --username=_ --password=<AGENT_KEY>
helm repo update
-
Download Helm charts:
helm pull instana/ibm-clickhouse-operator --version=v0.1.2
helm pull instana/zookeeper-operator --version=0.2.15
helm pull instana/strimzi-kafka-operator --version 0.38.0
helm pull instana/eck-operator --version=2.9.0
helm pull instana/cloudnative-pg --version 0.20.0
helm pull instana/cass-operator --version=0.45.2
helm pull instana/cert-manager --version v1.13.2
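The later helm install steps reference these downloaded chart archives by file name, so check that they are all present in the working directory (a minimal check):
# List the downloaded chart archives.
ls -1 *.tgz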
-
-
Install the data store operators and create the data stores:
-
Create a Zookeeper data store.
-
Install Zookeeper Operator:
helm install instana -n instana-zookeeper --create-namespace zookeeper-operator-0.2.15.tgz --version=0.2.15 \
  --set image.repository=image-registry.openshift-image-registry.svc:5000/instana-zookeeper/zookeeper-operator \
  --set image.tag=0.2.15_v0.2.0 \
  --set hooks.image.repository=image-registry.openshift-image-registry.svc:5000/instana-zookeeper/k8s-kubectl \
  --set hooks.image.tag=v1.23.2
-
Create a YAML file, such as zookeeper-ds.yaml, for the data store:
apiVersion: "zookeeper.pravega.io/v1beta1" kind: "ZookeeperCluster" metadata: name: "instana-zookeeper" spec: # For all params and defaults, see https://github.com/pravega/zookeeper-operator/tree/master/charts/zookeeper#configuration replicas: 3 image: repository: image-registry.openshift-image-registry.svc:5000/instana-zookeeper/zookeeper tag: 3.8.3_v0.2.0 config: tickTime: 2000 initLimit: 10 syncLimit: 5 maxClientCnxns: 0 autoPurgeSnapRetainCount: 20 autoPurgePurgeInterval: 1 persistence: reclaimPolicy: Delete spec: resources: requests: storage: "10Gi"
-
Grant image-puller access to the service accounts in the instana-clickhouse namespace:
oc policy add-role-to-group system:image-puller system:serviceaccounts:instana-clickhouse --namespace=instana-zookeeper
-
Install Zookeeper data store:
oc apply -f zookeeper-ds.yaml -n instana-clickhouse
-
-
Create a Kafka data store by using Strimzi:
-
Install Strimzi Kafka Operator:
helm install strimzi ./$(ls | grep 'strimzi*' | head -1) --version 0.38.0 -n instana-kafka --create-namespace \
  --set image.registry=image-registry.openshift-image-registry.svc:5000 \
  --set image.repository=instana-kafka \
  --set image.name=operator \
  --set image.tag=0.38.0_v0.3.0 \
  --set kafka.image.registry=image-registry.openshift-image-registry.svc:5000 \
  --set kafka.image.repository=instana-kafka \
  --set kafka.image.name=kafka \
  --set kafka.image.tag=3.6.0_v0.3.0 \
  --set topicOperator.image.registry=image-registry.openshift-image-registry.svc:5000 \
  --set topicOperator.image.repository=instana-kafka \
  --set topicOperator.image.tag=0.38.0_v0.3.0 \
  --set userOperator.image.registry=image-registry.openshift-image-registry.svc:5000 \
  --set userOperator.image.repository=instana-kafka \
  --set userOperator.image.tag=0.38.0_v0.3.0 \
  --set tlsSidecarEntityOperator.image.registry=image-registry.openshift-image-registry.svc:5000 \
  --set tlsSidecarEntityOperator.image.repository=instana-kafka \
  --set tlsSidecarEntityOperator.image.tag=3.6.0_v0.3.0
-
Create a YAML file, such as kafka.yaml, for the data store:
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: instana
  labels:
    strimzi.io/cluster: instana
spec:
  kafka:
    version: 3.6.0
    replicas: 3
    listeners:
      - name: scram
        port: 9092
        type: internal
        tls: false
        authentication:
          type: scram-sha-512
        configuration:
          useServiceDnsDomain: true
    authorization:
      type: simple
      superUsers:
        - strimzi-kafka-user
    storage:
      type: jbod
      volumes:
        - id: 0
          type: persistent-claim
          size: 50Gi
          deleteClaim: true
  zookeeper:
    replicas: 3
    storage:
      type: persistent-claim
      size: 5Gi
      deleteClaim: true
  entityOperator:
    template:
      pod:
        tmpDirSizeLimit: 100Mi
    userOperator:
      image: image-registry.openshift-image-registry.svc:5000/instana-kafka/operator:0.38.0_v0.3.0
-
Deploy Kafka:
kubectl apply -f kafka.yaml -n instana-kafka
kubectl wait kafka/instana --for=condition=Ready --timeout=300s -n instana-kafka
-
Create a SASL/SCRAM user in Kafka. Create a YAML file, such as strimzi-kafka-user.yaml:
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaUser
metadata:
  name: strimzi-kafka-user
  labels:
    strimzi.io/cluster: instana
spec:
  authentication:
    type: scram-sha-512
  authorization:
    type: simple
    acls:
      - resource:
          type: topic
          name: '*'
          patternType: literal
        operation: All
        host: "*"
      - resource:
          type: group
          name: '*'
          patternType: literal
        operation: All
        host: "*"
-
Apply the Kafka user:
kubectl apply -f strimzi-kafka-user.yaml -n instana-kafka
kubectl wait kafka/instana --for=condition=Ready --timeout=300s -n instana-kafka
-
Retrieve the password of the Strimzi-Kafka user for the next configuration:
kubectl get secret strimzi-kafka-user -n instana-kafka --template='{{index .data.password | base64decode}}' && echo
-
Store the retrieved password in the config.yaml file:
datastoreConfigs:
  ...
  kafkaConfig:
    adminUser: strimzi-kafka-user
    adminPassword: <RETRIEVED_FROM_SECRET>
    consumerUser: strimzi-kafka-user
    consumerPassword: <RETRIEVED_FROM_SECRET>
    producerUser: strimzi-kafka-user
    producerPassword: <RETRIEVED_FROM_SECRET>
-
-
Create an Elasticsearch data store.
-
Install Elasticsearch Operator:
helm install elastic-operator ./$(ls | grep 'eck*' | head -1) -n instana-elastic --create-namespace --version=2.9.0 \
  --set image.repository=image-registry.openshift-image-registry.svc:5000/instana-elastic/elasticsearch-operator \
  --set image.tag=2.9.0_v0.3.0
-
Create a YAML file, such as elastic.yaml, for the data store:
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: instana
spec:
  version: 7.17.14
  image: image-registry.openshift-image-registry.svc:5000/instana-elastic/elasticsearch:7.17.14_v0.2.0
  nodeSets:
    - name: default
      count: 3
      config:
        node.master: true
        node.data: true
        node.ingest: true
        node.store.allow_mmap: false
      volumeClaimTemplates:
        - metadata:
            name: elasticsearch-data # Do not change this name unless you set up a volume mount for the data path.
          spec:
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 20Gi
  http:
    tls:
      selfSignedCertificate:
        disabled: true
-
Deploy Elasticsearch:
kubectl apply -f elastic.yaml -n instana-elastic
kubectl wait elasticsearch/instana --for=condition=ReconciliationComplete --timeout=300s -n instana-elastic
By default, a user with the name "elastic" is created with a password that is randomly generated.
-
Retrieve the password of the user for the next configuration:
kubectl get secret instana-es-elastic-user -n instana-elastic -o go-template='{{.data.elastic | base64decode}}' && echo
-
Replace <RETRIEVED_FROM_SECRET> in the config.yaml file with the password that is retrieved:
datastoreConfigs:
  ...
  elasticsearchConfig:
    adminUser: elastic
    adminPassword: <RETRIEVED_FROM_SECRET>
    user: elastic
    password: <RETRIEVED_FROM_SECRET>
  ...
-
-
Create a Postgres data store.
-
Determine the file system group ID on Red Hat OpenShift. Red Hat OpenShift requires that file system groups are within a range of values specific to the namespace. On the cluster where the CNPG Kubernetes Operator is deployed, run:
kubectl get namespace instana-postgres -o yaml
An output similar to the following example is shown:
apiVersion: v1
kind: Namespace
metadata:
  annotations:
    .......
    openshift.io/sa.scc.uid-range: 1000750000/10000
  labels:
    kubernetes.io/metadata.name: instana-postgres
  .......
  name: instana-postgres
The openshift.io/sa.scc.supplemental-groups annotation contains the range of allowed IDs. The range 1000750000/10000 indicates 10,000 values that start with ID 1000750000, that is, IDs 1000750000 through 1000759999. In this example, the value 1000750000 is used as the file system group ID.
-
Create a "security constraint" file, such as the
postgres-scc.yaml
file:apiVersion: security.openshift.io/v1 kind: SecurityContextConstraints metadata: name: postgres-scc runAsUser: type: MustRunAs uid: 101 seLinuxContext: type: RunAsAny fsGroup: type: RunAsAny allowHostDirVolumePlugin: false allowHostNetwork: true allowHostPorts: true allowPrivilegedContainer: false allowHostIPC: true allowHostPID: true readOnlyRootFilesystem: false users: - system:serviceaccount:instana-postgres:postgres-operator - system:serviceaccount:instana-postgres:postgres-pod - system:serviceaccount:instana-postgres:default
-
Apply the file:
oc apply -f postgres-scc.yaml
-
Install Postgres Operator:
helm install cnpg cloudnative-pg-0.20.0.tgz \
  --set image.repository=image-registry.openshift-image-registry.svc:5000/instana-postgres/cloudnative-pg-operator \
  --set image.tag=1.21.1_v0.1.0 \
  --version=0.20.0 \
  --set containerSecurityContext.runAsUser=<UID from namespace> \
  --set containerSecurityContext.runAsGroup=<UID from namespace> \
  -n instana-postgres
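The <UID from namespace> placeholder is the first value of the ID range from the namespace annotation. A minimal sketch for extracting it, assuming the openshift.io/sa.scc.supplemental-groups annotation is set on the namespace (use openshift.io/sa.scc.uid-range if that is the annotation shown in your output):
# Print the start of the allowed ID range, for example 1000750000.
oc get namespace instana-postgres -o jsonpath='{.metadata.annotations.openshift\.io/sa\.scc\.supplemental-groups}' | cut -d/ -f1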
-
Generate a password for the Postgres data store.
-
Generate a random password:
openssl rand -base64 24 | tr -cd 'a-zA-Z0-9' | head -c32; echo
-
Create a file, such as postgres-secret.yaml:
kind: Secret
apiVersion: v1
metadata:
  name: instanaadmin
type: Opaque
stringData:
  username: instanaadmin
  password: <user-generated-password> # Generated password
-
Apply the postgres-secret.yaml file:
kubectl apply -f postgres-secret.yaml -n instana-postgres
-
-
Create a YAML file, such as postgres.yaml, for the data store configuration:
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: postgres
spec:
  instances: 3
  imageName: image-registry.openshift-image-registry.svc:5000/instana-postgres/cnpg-containers:15_v0.1.0
  imagePullPolicy: IfNotPresent
  postgresql:
    parameters:
      shared_buffers: 32MB
      pg_stat_statements.track: all
      auto_explain.log_min_duration: '10s'
    pg_hba:
      - local all all trust
      - host all all 0.0.0.0/0 md5
      - local replication standby trust
      - hostssl replication standby all md5
      - hostnossl all all all reject
      - hostssl all all all md5
  managed:
    roles:
      - name: instanaadmin
        login: true
        superuser: true
        createdb: true
        createrole: true
        passwordSecret:
          name: instanaadmin
  bootstrap:
    initdb:
      database: instanaadmin
      owner: instanaadmin
      secret:
        name: instanaadmin
  superuserSecret:
    name: instanaadmin
  storage:
    size: 1Gi
-
Deploy Postgres:
kubectl apply -f postgres.yaml -n instana-postgres
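To verify that the Postgres cluster becomes healthy before you continue, a minimal check (assuming the clusters.postgresql.cnpg.io resource name and the cnpg.io/cluster pod label that CloudNativePG applies):
# Show the CNPG cluster status and the instance pods.
kubectl get clusters.postgresql.cnpg.io postgres -n instana-postgres
kubectl get pods -n instana-postgres -l cnpg.io/cluster=postgres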
-
Store the generated password in the config.yaml file:
datastoreConfigs:
  ...
  postgresConfigs:
    - user: instanaadmin
      password: <USER_GENERATED_PASSWORD>
      adminUser: instanaadmin
      adminPassword: <USER_GENERATED_PASSWORD>
  ...
-
-
Install Cassandra
-
Install Cert-Manager:
helm install cert-manager cert-manager-v1.13.2.tgz --namespace cert-manager --create-namespace --version v1.13.2 \
  --set installCRDs=true \
  --set prometheus.enabled=false \
  --set image.repository=image-registry.openshift-image-registry.svc:5000/cert-manager/cert-manager-controller \
  --set webhook.image.repository=image-registry.openshift-image-registry.svc:5000/cert-manager/cert-manager-webhook \
  --set cainjector.image.repository=image-registry.openshift-image-registry.svc:5000/cert-manager/cert-manager-cainjector \
  --set acmesolver.image.repository=image-registry.openshift-image-registry.svc:5000/cert-manager/cert-manager-acmesolver \
  --set startupapicheck.image.repository=image-registry.openshift-image-registry.svc:5000/cert-manager/cert-manager-ctl
-
Create a file, such as cassandra-scc.yaml:
apiVersion: security.openshift.io/v1
kind: SecurityContextConstraints
metadata:
  name: cassandra-scc
runAsUser:
  type: MustRunAs
  uid: 999
seLinuxContext:
  type: RunAsAny
fsGroup:
  type: RunAsAny
allowHostDirVolumePlugin: false
allowHostNetwork: true
allowHostPorts: true
allowPrivilegedContainer: false
allowHostIPC: true
allowHostPID: true
readOnlyRootFilesystem: false
users:
  - system:serviceaccount:instana-cassandra:cass-operator
  - system:serviceaccount:instana-cassandra:default
-
Apply the file:
oc apply -f cassandra-scc.yaml
-
Install Cass Operator:
helm install cass-operator cass-operator-0.45.2.tgz -n instana-cassandra --create-namespace --version=0.45.2 \
  --set securityContext.runAsGroup=999 \
  --set securityContext.runAsUser=999 \
  --set image.registry=image-registry.openshift-image-registry.svc:5000 \
  --set image.repository=instana-cassandra/cass-operator \
  --set image.tag=1.18.2_v0.1.0
-
Create a file, such as cassandra.yaml:
apiVersion: cassandra.datastax.com/v1beta1
kind: CassandraDatacenter
metadata:
  name: cassandra
spec:
  clusterName: instana
  serverType: cassandra
  serverImage: image-registry.openshift-image-registry.svc:5000/instana-cassandra/k8ssandra-management-api-for-apache-cassandra:4.1.2_v0.2.0
  k8ssandraClientImage: image-registry.openshift-image-registry.svc:5000/instana-cassandra/k8ssandra-k8ssandra-client:0.2.2_v0.1.0
  systemLoggerImage: image-registry.openshift-image-registry.svc:5000/instana-cassandra/system-logger:1.18.2_v0.1.0
  serverVersion: "4.1.2"
  podTemplateSpec:
    spec:
      containers:
        - name: cassandra
  managementApiAuth:
    insecure: {}
  size: 3
  allowMultipleNodesPerWorker: false
  resources:
    requests:
      cpu: 2000m
      memory: 8Gi
    limits:
      cpu: 4000m
      memory: 16Gi
  storageConfig:
    cassandraDataVolumeClaimSpec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 30Gi
  config:
    jvm-server-options:
      initial_heap_size: "4G"
      max_heap_size: "8G"
      additional-jvm-opts:
        - -Dcassandra.allow_unsafe_aggressive_sstable_expiration=true
    cassandra-yaml:
      authenticator: org.apache.cassandra.auth.PasswordAuthenticator
      authorizer: org.apache.cassandra.auth.CassandraAuthorizer
      role_manager: org.apache.cassandra.auth.CassandraRoleManager
      memtable_flush_writers: 8
      auto_snapshot: false
      gc_warn_threshold_in_ms: 10000
      otc_coalescing_strategy: DISABLED
      memtable_allocation_type: offheap_objects
      num_tokens: 256
      drop_compact_storage_enabled: true
-
Deploy Cassandra:
oc apply -f cassandra.yaml -n instana-cassandra
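To check that the datacenter finishes rolling out, a minimal sketch (assuming the cassandraOperatorProgress status field that cass-operator reports):
# Wait until the operator reports the datacenter as Ready, then list the pods.
kubectl get cassandradatacenter cassandra -n instana-cassandra -o jsonpath='{.status.cassandraOperatorProgress}' && echo
kubectl get pods -n instana-cassandra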
By default, CassandraDatacenter creates a superuser named <clusterName>-superuser, where <clusterName> is the value that is specified in .spec.clusterName in the cassandra.yaml file. In this example, the secret name is instana-superuser.
-
Retrieve the password of instana-superuser:
kubectl get secret instana-superuser -n instana-cassandra --template='{{index .data.password | base64decode}}' && echo
-
In the config.yaml file, replace <RETRIEVED_FROM_SECRET> with the password that is retrieved:
datastoreConfigs:
  ...
  cassandraConfigs:
    - user: instana-superuser
      password: <RETRIEVED_FROM_SECRET>
      adminUser: instana-superuser
      adminPassword: <RETRIEVED_FROM_SECRET>
  ...
-
-
Install Clickhouse:
-
Create a file, such as clickhouse-scc.yaml:
apiVersion: security.openshift.io/v1
kind: SecurityContextConstraints
metadata:
  name: clickhouse-scc
runAsUser:
  type: RunAsAny
seLinuxContext:
  type: RunAsAny
fsGroup:
  type: RunAsAny
supplementalGroups:
  type: RunAsAny
allowHostDirVolumePlugin: false
allowHostNetwork: true
allowHostPorts: true
allowPrivilegedContainer: false
allowHostIPC: true
allowHostPID: true
readOnlyRootFilesystem: false
users:
  - system:serviceaccount:instana-clickhouse:clickhouse-operator
  - system:serviceaccount:instana-clickhouse:clickhouse-operator-ibm-clickhouse-operator
  - system:serviceaccount:instana-clickhouse:default
-
Apply the file:
oc apply -f clickhouse-scc.yaml
-
Install ClickHouse Operator:
helm install clickhouse-operator ibm-clickhouse-operator-v0.1.2.tgz -n instana-clickhouse --version=v0.1.2 \
  --set operator.image.repository=image-registry.openshift-image-registry.svc:5000/instana-clickhouse/clickhouse-operator \
  --set operator.image.tag=v0.1.2
-
Create a file, such as clickhouse.yaml:
apiVersion: "clickhouse.altinity.com/v1"
kind: "ClickHouseInstallation"
metadata:
  name: "instana"
spec:
  defaults:
    templates:
      dataVolumeClaimTemplate: instana-clickhouse-data-volume
      logVolumeClaimTemplate: instana-clickhouse-log-volume
      serviceTemplate: service-template
  configuration:
    files:
      config.d/storage.xml: |
        <clickhouse>
          <storage_configuration>
            <disks>
              <default/>
              <cold_disk>
                <path>/var/lib/clickhouse-cold/</path>
              </cold_disk>
            </disks>
            <policies>
              <logs_policy>
                <volumes>
                  <data>
                    <disk>default</disk>
                  </data>
                  <cold>
                    <disk>cold_disk</disk>
                  </cold>
                </volumes>
              </logs_policy>
              <logs_policy_v4>
                <volumes>
                  <tier1>
                    <disk>default</disk>
                  </tier1>
                  <tier2>
                    <disk>cold_disk</disk>
                  </tier2>
                </volumes>
              </logs_policy_v4>
            </policies>
          </storage_configuration>
        </clickhouse>
    clusters:
      - name: local
        templates:
          podTemplate: clickhouse
        layout:
          shardsCount: 1
          replicasCount: 2 # The replication count of 2 is fixed for Instana backend installations
        schemaPolicy:
          replica: None
          shard: None
    zookeeper:
      nodes:
        - host: instana-zookeeper-headless.instana-clickhouse
    profiles:
      default/max_memory_usage: 10000000000 # If memory limits are set, this value must be adjusted according to the limits.
      default/joined_subquery_requires_alias: 0
      default/max_execution_time: 100
      default/max_query_size: 1048576
      default/use_uncompressed_cache: 0
      default/enable_http_compression: 1
      default/load_balancing: random
      default/background_pool_size: 32
      default/background_schedule_pool_size: 32
      default/distributed_directory_monitor_split_batch_on_failure: 1
      default/distributed_directory_monitor_batch_inserts: 1
      default/insert_distributed_sync: 1
      default/log_queries: 1
      default/log_query_views: 1
      default/max_threads: 16
      default/allow_experimental_database_replicated: 1
    quotas:
      default/interval/duration: 3600
      default/interval/queries: 0
      default/interval/errors: 0
      default/interval/result_rows: 0
      default/interval/read_rows: 0
      default/interval/execution_time: 0
    settings:
      remote_servers/all-sharded/secret: clickhouse-default-pass
      remote_servers/all-replicated/secret: clickhouse-default-pass
      remote_servers/local/secret: clickhouse-default-pass
      max_concurrent_queries: 200
      max_table_size_to_drop: 0
      max_partition_size_to_drop: 0
    users:
      default/password: "sXOe8Kk4"
      clickhouse-user/networks/ip: "::/0"
      clickhouse-user/password_sha256_hex: "4417ddef8349cce2351f5f3e9e377b53390c09c8c133d25b02ce66b5e3ab090c"
      # Or
      # Generate password and the corresponding SHA256 hash with:
      # $ PASSWORD=$(base64 < /dev/urandom | head -c8); echo "$PASSWORD"; echo -n "$PASSWORD" | sha256sum | tr -d '-'
      # 6edvj2+d <- first line is the password
      # a927723f4a42cccc50053e81bab1fcf579d8d8fb54a3ce559d42eb75a9118d65 <- second line is the corresponding SHA256 hash
      # clickhouse-user/password_sha256_hex: "a927723f4a42cccc50053e81bab1fcf579d8d8fb54a3ce559d42eb75a9118d65"
      # Or
      # Generate password and the corresponding SHA1 hash with:
      # $ PASSWORD=$(base64 < /dev/urandom | head -c8); echo "$PASSWORD"; echo -n "$PASSWORD" | sha1sum | tr -d '-' | xxd -r -p | sha1sum | tr -d '-'
      # LJfoOfxl <- first line is the password, put this in the k8s secret
      # 3435258e803cefaab7db2201d04bf50d439f6c7f <- the corresponding double SHA1 hash, put this below
      # clickhouse-user/password_double_sha1_hex: "3435258e803cefaab7db2201d04bf50d439f6c7f"
  templates:
    podTemplates:
      - name: clickhouse
        spec:
          containers:
            - name: instana-clickhouse
              image: image-registry.openshift-image-registry.svc:5000/instana-clickhouse/clickhouse-openssl:23.8.9.54-1-lts-ibm
              command:
                - clickhouse-server
                - --config-file=/etc/clickhouse-server/config.xml
              volumeMounts:
                - mountPath: /var/lib/clickhouse-cold/
                  name: instana-clickhouse-data-cold-volume
            - name: clickhouse-log
              image: image-registry.openshift-image-registry.svc:5000/instana-clickhouse/clickhouse-openssl:23.8.9.54-1-lts-ibm
              args:
                - while true; do sleep 30; done;
              command:
                - /bin/sh
                - -c
                - --
          securityContext:
            fsGroup: 0
            runAsGroup: 0
            runAsUser: 1001
          # Optional - uncomment the lines below if resources need to be specifically defined for the clickhouse pods. The values below are for example only.
          # resources:
          #   limits:
          #     cpu: "4"
          #     memory: 4Gi
          #   requests:
          #     cpu: "1"
          #     memory: 2Gi
    volumeClaimTemplates:
      - name: instana-clickhouse-data-volume
        spec:
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 100Gi
      - name: instana-clickhouse-data-cold-volume
        spec:
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 100Gi
      - name: instana-clickhouse-log-volume
        spec:
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 1Gi
    serviceTemplates:
      - name: service-template
        generateName: "clickhouse-{chi}"
        spec:
          ports:
            - name: http
              port: 8123
            - name: tcp
              port: 9000
          type: ClusterIP
By default, the user clickhouse-user and the password clickhouse-pass are added to ClickHouse.
-
Update the zookeeper.nodes.host field with the hostname of your ZooKeeper cluster.
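Then deploy ClickHouse by applying the file (a minimal sketch, assuming the clickhouse.yaml file name that is used in this step):
kubectl apply -f clickhouse.yaml -n instana-clickhouse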
-
-
-
Install the Instana Enterprise operator. Create and configure a values file to use for the operator in the instana-operator namespace:
-
Generate a values file:
kubectl instana operator template --output-dir $(pwd)/opr -n instana-operator
-
Add the following lines to the values file:
image:
  registry: image-registry.openshift-image-registry.svc:5000
  repository: instana-operator/instana-operator
imagePullSecrets: []
-
Apply the changes:
kubectl instana operator apply -n instana-operator --values <pathtoFile>/values.yaml
-
Verify Instana Enterprise operator deployment:
kubectl get all -n instana-operator
-
Download the Instana license:
kubectl instana license download --sales-key <redacted>
-
Define storage:
Instana requires ReadWriteMany storage, such as NFS or Ceph, to store raw spans and monitoring data. In AWS GovCloud or public cloud environments, Instana can use S3 buckets or GCS instead.
-
Create an S3 bucket in the same region where the cluster is deployed, create an IAM user or IAM role, and grant the required IAM permissions by using an IAM policy. Complete one of the following steps:
-
Provision the IAM role and IAM policy by using the AWS Console:
-
Create an IAM policy with the required privileges.
-
Create an IAM role and associate it with the IAM policy:
-
To obtain the OIDC identity provider ARN, go to https://us-east-1.console.aws.amazon.com/iam/home?region=us-west-1#/identity_providers.
-
Update to the gov-cloud region.
-
-
In the IAM role trust relationship, update the policy document as shown in the following example:
{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Federated": "<ReplaceWithActualValue>" }, "Action": "sts:AssumeRoleWithWebIdentity", "Condition": { "StringEquals": { "oidc.op1.openshiftapps.com/<ReplaceWithActual>:aud": "sts.amazonaws.com", "oidc.op1.openshiftapps.com/<ReplaceWithActual>:sub": "system:serviceaccount:instana-core:instana-core" } } } ] }
-
-
Provision IAM Role and IAM Policy by using Red Hat OpenShift Cloud Credential Operator:
-
Create a CredentialsRequest in the OpenShift Container Platform cluster:
apiVersion: cloudcredential.openshift.io/v1
kind: CredentialsRequest
metadata:
  name: instana-backend-storage
  namespace: openshift-cloud-credential-operator
spec:
  secretRef:
    name: instana-backend-storage
    namespace: instana-core
  providerSpec:
    apiVersion: cloudcredential.openshift.io/v1
    kind: AWSProviderSpec
    statementEntries:
      - effect: Allow
        action:
          - s3:CreateBucket
          - s3:DeleteBucket
          - s3:PutBucketTagging
          - s3:GetBucketTagging
          - s3:PutBucketPublicAccessBlock
          - s3:GetBucketPublicAccessBlock
          - s3:PutEncryptionConfiguration
          - s3:GetEncryptionConfiguration
          - s3:PutLifecycleConfiguration
          - s3:GetLifecycleConfiguration
          - s3:GetBucketLocation
          - s3:ListBucket
          - s3:GetObject
          - s3:PutObject
          - s3:DeleteObject
          - s3:ListBucketMultipartUploads
          - s3:AbortMultipartUpload
          - s3:ListMultipartUploadParts
        resource:
          - arn:aws:s3:::<ReplaceWithActualBucketName>
          - arn:aws:s3:::<ReplaceWithActualBucketName>/*
-
Copy the AWS access key and secret key from the instana-backend-storage secret in the instana-core namespace:
kubectl get secrets instana-backend-storage -n instana-core -o yaml
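To read the values directly, a minimal sketch (the aws_access_key_id and aws_secret_access_key key names are the ones that the Cloud Credential Operator typically writes; verify them in the secret output above):
# Decode the credentials from the secret; key names assumed from typical Cloud Credential Operator output.
kubectl get secret instana-backend-storage -n instana-core -o jsonpath='{.data.aws_access_key_id}' | base64 -d && echo
kubectl get secret instana-backend-storage -n instana-core -o jsonpath='{.data.aws_secret_access_key}' | base64 -d && echo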
-
-
-
-
Add an image-puller role in project instana-units to pull images from project instana-core:
oc policy add-role-to-group system:image-puller system:serviceaccounts:instana-units --namespace=instana-core
-
Install the Instana backend. For more information, see Installing the Instana backend.