Restarting the environment (IBM Cloud Pak for AIOps on Linux)
Learn how to shut down and restart the Linux cluster where IBM Cloud Pak for AIOps is deployed.
Overview
Use this procedure before a known maintenance window or outage to shut down the Linux cluster where IBM Cloud Pak for AIOps is installed, and to restart the cluster and workloads afterward.
Warning: If you need to shut down the cluster where IBM Cloud Pak for AIOps is installed, then you must use the following procedure. Failure to do so can result in data loss or corruption.
Procedure
1. Validate the installation
Run the describe command:
kubectl describe installations.orchestrator.aiops.ibm.com -n aiops
Review the ComponentStatus fields to confirm that all components are marked as Ready and the phase is Running.
Example output:
Name: ibm-cp-aiops
Namespace: aiops
API Version: orchestrator.aiops.ibm.com/v1alpha1
Kind: Installation
Spec:
...
Status:
Componentstatus:
Aimanager: Ready
Aiopsanalyticsorchestrator: Ready
Aiopsedge: Ready
Aiopsui: Ready
Asm: Ready
Baseui: Ready
Cluster: Ready
Commonservice: Ready
Elasticsearch: Ready
Flinkdeployment: Ready
Issueresolutioncore: Ready
Kafka: Ready
Lifecycleservice: Ready
Lifecycletrigger: Ready
Rediscp: Ready
Tunnel: Ready
Zenservice: Ready
Phase: Running
2. Check the certificates
Ensure that none of the certificates have problems or are expired.
Run the following command:
# For each TLS secret, replace the EXPIRY placeholder column with the
# certificate's actual notAfter date.
while read l; do
  echo "$l" | grep '^NAME' || (
    n=$(echo $l | sed 's/ .*//')
    s=$(echo $l | sed 's/^[^ ]* *\([^ ]*\).*/\1/')
    x=$(kubectl get secret -n $n $s -o jsonpath='{.data.tls\.crt}' | base64 -d | openssl x509 -noout -enddate 2>/dev/null | sed 's!notAfter=!!')
    echo "$l" | sed 's![^ ][^ ]*$!'"$x"'!'
  )
done < <(kubectl get secret -A --field-selector=type==kubernetes.io/tls -o custom-columns=NAMESPACE:.metadata.namespace,NAME:.metadata.name,EXPIRY:.metadata.name)
Example output excerpt:
ibm-licensing ibm-license-service-cert Jan 8 13:32:07 2025 GMT
ibm-licensing ibm-license-service-cert-internal Jan 7 13:31:12 2026 GMT
ibm-licensing ibm-licensing-service-prometheus-cert Jan 7 13:31:25 2026 GMT
aiops aimanager-aio-log-anomaly-feedback-learning-cert Jan 7 14:01:43 2026 GMT
aiops aimanager-aio-log-anomaly-golden-signals-cert Jan 7 14:01:43 2026 GMT
aiops aimanager-aio-oob-recommended-actions-cert Jan 7 14:01:43 2026 GMT
<...>
Renew or re-create any certificates that have problems, are expired, or will expire before the cluster is restarted.
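For a quicker pass/fail view, a minimal sketch such as the following flags any TLS secret whose certificate is expired or expires within the next 30 days. The 30-day window, and the assumption that openssl is available where you run it, are illustrative choices:
#!/bin/bash
# Flag TLS secrets whose certificates are expired or expire within
# 30 days (2592000 seconds); widen the window to cover your outage.
kubectl get secret -A --field-selector=type==kubernetes.io/tls \
  -o custom-columns=NAMESPACE:.metadata.namespace,NAME:.metadata.name --no-headers |
while read -r ns name; do
  if ! kubectl get secret -n "$ns" "$name" -o jsonpath='{.data.tls\.crt}' \
      | base64 -d | openssl x509 -noout -checkend 2592000 >/dev/null 2>&1; then
    echo "CHECK: $ns/$name is expired or expires within 30 days"
  fi
done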
3. Prepare to scale down
1. Cordon all of the worker and control plane nodes.
From a control plane node, run the following command for each of the worker and control plane nodes:
kubectl cordon <node>
Where <node> is the name of the node to cordon.
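If you prefer to cordon every node in one pass rather than node by node, a minimal loop such as the following works (kubectl cordon accepts the node/<name> form that -o name emits):
# Cordon all nodes in the cluster in one pass.
for node in $(kubectl get nodes -o name); do
  kubectl cordon "$node"
done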
2. Make a note of the number of replicas.
a. Make a note of the number of replicas for each StatefulSet.
kubectl get statefulsets -n aiops
Example output:
NAME                                    READY   AGE
aimanager-ibm-minio                     1/1     42m
aiops-ibm-elasticsearch-es-server-all   1/1     86m
aiops-installation-redis-server         3/3     84m
aiops-ir-analytics-spark-worker         2/2     63m
aiops-ir-core-ncobackup                 0/0     75m
aiops-ir-core-ncoprimary                0/0     76m
aiops-topology-cassandra                1/1     83m
c-example-couchdbcluster-m              1/1     77m
zen-minio                               3/3     76m
Note:
- If you do not have an IBM® Netcool® Operations Insight® probe integration, then aiops-ir-core-ncobackup and aiops-ir-core-ncoprimary have zero replicas.
- If you upgraded from an earlier version of IBM Cloud Pak for AIOps, you also have an icp-mongodb StatefulSet.
b. Make a note of the number of replicas for each StrimziPodSet.
kubectl get strimzipodset -n aiops
Example output:
NAME                   PODS   READY PODS   CURRENT PODS   AGE
iaf-system-kafka       3      3            3              13d
iaf-system-zookeeper   3      3            3              13d
c. Make a note of the number of replicas for each Flink deployment by using the following command:
kubectl get deployment -n aiops | grep flink | grep -v "operator"
Example output:
NAME                                   READY   UP-TO-DATE   AVAILABLE   AGE
aiops-ir-lifecycle-flink               1/1     1            1           137m
aiops-ir-lifecycle-flink-taskmanager   1/1     1            1           137m
aiops-lad-flink                        1/1     1            1           139m
aiops-lad-flink-taskmanager            2/2     2            2           139m
Notes:
- If you have a base deployment, then aiops-lad-flink and aiops-lad-flink-taskmanager do not show in the preceding output.
- If you upgraded from an earlier version of IBM Cloud Pak for AIOps, you also have an icp-mongodb StatefulSet.
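Because step 7 restores these counts, it can help to capture them in a file rather than on paper. A minimal sketch; the file name aiops-replica-counts.txt is an arbitrary choice:
#!/bin/bash
# Save the current replica counts for use when scaling back up in step 7.
{
  kubectl get statefulsets -n aiops
  kubectl get strimzipodset -n aiops
  kubectl get deployment -n aiops | grep flink | grep -v "operator"
} > aiops-replica-counts.txt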
4. Scale down the workloads and drain the nodes
1. Scale down the operator deployments in the IBM Cloud Pak for AIOps namespace.
kubectl scale deployment -l olm.owner.kind=ClusterServiceVersion -n aiops --replicas=0
Run the following command to check that the number of replicas for each of the operator deployments is now 0.
kubectl get deployment -n aiops -l olm.owner.kind=ClusterServiceVersion
Example output:
NAME                                              READY   UP-TO-DATE   AVAILABLE   AGE
aimanager-operator-controller-manager             0/0     0            0           47m
aiopsedge-operator-controller-manager             0/0     0            0           47m
asm-operator                                      0/0     0            0           47m
iaf-flink-operator-controller-manager             0/0     0            0           54m
ibm-aiops-orchestrator-controller-manager         0/0     0            0           58m
ibm-common-service-operator                       0/0     0            0           56m
ibm-commonui-operator                             0/0     0            0           53m
ibm-elastic-operator-controller-manager           0/0     0            0           54m
ibm-events-operator-v5.0.1                        0/0     0            0           54m
ibm-iam-operator                                  0/0     0            0           54m
ibm-ir-ai-operator-controller-manager             0/0     0            0           47m
ibm-redis-cp-operator                             0/0     0            0           49m
ibm-secure-tunnel-operator                        0/0     0            0           48m
ibm-watson-aiops-ui-operator-controller-manager   0/0     0            0           48m
ibm-zen-operator                                  0/0     0            0           54m
ir-core-operator-controller-manager               0/0     0            0           47m
ir-lifecycle-operator-controller-manager          0/0     0            0           47m
operand-deployment-lifecycle-manager              0/0     0            0           55m
postgresql-operator-controller-manager-1-18-12    0/0     0            0           54m
Note: If you upgraded from an earlier version of IBM Cloud Pak for AIOps, you also have an icp-mongodb-operator deployment.
2. Scale down the StatefulSets and Flink deployments that you noted in step 3.2.
You can use the Cloud Pak for AIOps console, or create a shell script to do this.
If you have a base deployment, then remove the following lines from the example shell script:
kubectl scale deployment aiops-lad-flink --replicas=0 -n aiops
kubectl scale deployment aiops-lad-flink-taskmanager --replicas=0 -n aiops
If you upgraded from an earlier version of IBM Cloud Pak for AIOps, then add the following line to the example shell script:
kubectl scale statefulsets icp-mongodb --replicas=0 -n aiops
Example shell script:
#!/bin/bash
kubectl scale statefulsets aimanager-ibm-minio --replicas=0 -n aiops
sleep 2
kubectl scale statefulsets aiops-installation-redis-server --replicas=0 -n aiops
sleep 2
kubectl scale statefulsets aiops-ir-analytics-spark-worker --replicas=0 -n aiops
sleep 2
kubectl scale statefulsets aiops-ir-core-ncobackup --replicas=0 -n aiops
sleep 2
kubectl scale statefulsets aiops-ir-core-ncoprimary --replicas=0 -n aiops
sleep 2
kubectl scale deployment aiops-ir-lifecycle-flink --replicas=0 -n aiops
sleep 2
kubectl scale deployment aiops-ir-lifecycle-flink-taskmanager --replicas=0 -n aiops
sleep 2
kubectl scale statefulsets aiops-topology-cassandra --replicas=0 -n aiops
sleep 2
kubectl scale statefulsets c-example-couchdbcluster-m --replicas=0 -n aiops
sleep 2
kubectl scale deployment aiops-lad-flink --replicas=0 -n aiops
sleep 2
kubectl scale deployment aiops-lad-flink-taskmanager --replicas=0 -n aiops
sleep 2
kubectl scale statefulsets aiops-ibm-elasticsearch-es-server-all --replicas=0 -n aiops
sleep 2
kubectl scale statefulsets zen-minio --replicas=0 -n aiops
sleep 2
Run the following command to check that the number of replicas for each of the StatefulSets is now 0.
kubectl get statefulsets -n aiops
Example output:
NAME                                    READY   AGE
aimanager-ibm-minio                     0/0     42m
aiops-ibm-elasticsearch-es-server-all   0/0     86m
aiops-installation-redis-server         0/0     84m
aiops-ir-analytics-spark-worker         0/0     63m
aiops-ir-core-ncobackup                 0/0     75m
aiops-ir-core-ncoprimary                0/0     76m
aiops-topology-cassandra                0/0     83m
c-example-couchdbcluster-m              0/0     77m
zen-minio                               0/0     76m
3. Run the following command to check that the number of replicas for each of the Flink deployments is now 0.
kubectl get deployments -n aiops | grep flink | grep -v "operator"
Example output:
NAME                                   READY   UP-TO-DATE   AVAILABLE   AGE
aiops-ir-lifecycle-flink               0/0     0            0           137m
aiops-ir-lifecycle-flink-taskmanager   0/0     0            0           137m
aiops-lad-flink                        0/0     0            0           139m
aiops-lad-flink-taskmanager            0/0     0            0           139m
4. Shut down the Kafka and ZooKeeper pods.
kubectl delete pod -l ibmevents.ibm.com/name=iaf-system-kafka -n aiops
kubectl delete pod -l ibmevents.ibm.com/name=iaf-system-zookeeper -n aiops
Run the following command to check that the Kafka and ZooKeeper pods have successfully shut down. If the shutdown is complete, no pods are returned.
kubectl get pod -l ibmevents.ibm.com/controller=strimzipodset -n aiops
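If you want to block until those pods are gone instead of rechecking by hand, a minimal sketch (the five-minute timeout is an arbitrary choice):
# Wait for every Kafka and ZooKeeper pod to be deleted.
kubectl wait pod -l ibmevents.ibm.com/controller=strimzipodset -n aiops \
  --for=delete --timeout=300s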
5. Scale down the PostgreSQL pods.
When shutting down a PostgreSQL cluster, it is best to remove the primary replica last. The following script removes each database replica in the cluster, with the primary removed last.
#!/bin/bash
AIOPS_NAMESPACE=aiops

# Get array of Postgres clusters
CLUSTERS=($(kubectl get clusters.postgresql.k8s.enterprisedb.io -n "${AIOPS_NAMESPACE}" -o go-template='{{range .items}}{{.metadata.name}}{{" "}}{{end}}'))

# For each Postgres cluster, shutdown primary last
for cluster_name in "${CLUSTERS[@]}"; do
  primary=$(kubectl get clusters.postgresql.k8s.enterprisedb.io -n "${AIOPS_NAMESPACE}" "${cluster_name}" -o go-template='{{.status.currentPrimary}}')
  instances=($(kubectl get clusters.postgresql.k8s.enterprisedb.io -n "${AIOPS_NAMESPACE}" "${cluster_name}" -o go-template='{{range .status.instanceNames}}{{print . " "}}{{end}}'))
  for instance_name in "${instances[@]}"; do
    # Shutdown non-primary replicas
    if [ "${instance_name}" != "${primary}" ]; then
      kubectl delete pod -n "${AIOPS_NAMESPACE}" "${instance_name}" --ignore-not-found
    fi
  done
  # Shutdown the primary once all other replicas are down
  kubectl delete pod -n "${AIOPS_NAMESPACE}" "${primary}" --ignore-not-found
done
Wait for all the Postgres pods to be deleted. All pods are deleted when the following command returns no pods:
kubectl get pod -l k8s.enterprisedb.io/podRole=instance -n aiops
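A minimal polling sketch for the same check; the 10-second interval is an arbitrary choice:
# Poll until no PostgreSQL instance pods remain.
while [ -n "$(kubectl get pod -l k8s.enterprisedb.io/podRole=instance -n aiops --no-headers 2>/dev/null)" ]; do
  echo "Waiting for PostgreSQL pods to terminate..."
  sleep 10
done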
6. (Optional) After the StatefulSets and StrimziPodSets are scaled down, drain all of the worker and control plane nodes. This step is not necessary if you are running a backup.
From a control plane node, run the following command for each of the worker and control plane nodes:
kubectl drain <node>
Where <node> is the name of the node to drain.
Note: Some pods, such as storage pods, do not stop because stopping them would violate their disruption budget. If this problem occurs, run the command on each node until only the storage pods are left, then stop the command and drain the next node.
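To drain every node in one pass, a loop such as the following can help. The --ignore-daemonsets and --delete-emptydir-data flags are common additions rather than part of this procedure, so confirm that they are appropriate for your environment first:
# Drain each node in turn; adjust or drop the flags to suit your cluster.
for node in $(kubectl get nodes -o name); do
  kubectl drain "$node" --ignore-daemonsets --delete-emptydir-data
done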
5. Shut down the cluster
1. Shut down all the worker nodes on the cluster.
2. Shut down all the control plane nodes on the cluster.
6. Restart the cluster
1. Re-export the environment variables that you saved in step 1.1.
2. Restart the cluster nodes in the following order:
a. Restart the control plane nodes. Check whether all the control plane nodes are in Ready status by running the following command:
kubectl get nodes
b. Restart the worker nodes. Check whether all the worker nodes are in Ready status by running the following command:
kubectl get nodes
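If you prefer to wait programmatically instead of rerunning kubectl get nodes, a minimal sketch (the ten-minute timeout is an arbitrary choice):
# Block until every node reports the Ready condition.
kubectl wait node --all --for=condition=Ready --timeout=600s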
3. After all the nodes are up, uncordon the control plane and worker nodes.
From a control plane node, run the following command for each of the worker and control plane nodes:
kubectl uncordon <node>
Where <node> is the name of the node to uncordon.
7. Scale up the workloads
Scaling up the workloads in the following order helps to minimize startup time and resource contention issues.
1. Scale the events operator back up.
kubectl scale deployment --replicas=1 $(kubectl get deployment -o custom-columns=NAME:.metadata.name --no-headers -n aiops | grep '^ibm-events-operator-') -n aiops
2. Check whether the Kafka and ZooKeeper pods are running again. This can take a few minutes.
kubectl get pod -l ibmevents.ibm.com/controller=strimzipodset -n aiops
Example output when the Kafka and ZooKeeper pods are running:
NAME                     READY   STATUS    RESTARTS   AGE
iaf-system-kafka-0       1/1     Running   0          13d
iaf-system-kafka-1       1/1     Running   0          13d
iaf-system-kafka-2       1/1     Running   0          13d
iaf-system-zookeeper-0   1/1     Running   0          13d
iaf-system-zookeeper-1   1/1     Running   0          13d
iaf-system-zookeeper-2   1/1     Running   0          13d
3. Scale up each of the StatefulSets and Flink deployments to the number of replicas that you noted in step 3.2.
a. Scale up the Cassandra, Elasticsearch, and Spark StatefulSets in the following order:
- aiops-topology-cassandra
- aiops-ibm-elasticsearch-es-server-all
- aiops-ir-analytics-spark-worker
Run the following command to scale up each StatefulSet:
kubectl scale statefulsets <statefulset> --replicas=<number_of_replicas> -n aiops
Where:
<statefulset> is the StatefulSet to be scaled up.
<number_of_replicas> is the number of replicas that the StatefulSet is to be scaled up to.
For example:
kubectl scale statefulsets aiops-topology-cassandra --replicas=1 -n aiops
b. Scale up the Flink deployments in the following order:
- aiops-ir-lifecycle-flink
- aiops-ir-lifecycle-flink-taskmanager
- aiops-lad-flink
- aiops-lad-flink-taskmanager
Note: If you have a base deployment, then do not scale up aiops-lad-flink and aiops-lad-flink-taskmanager.
Run the following command to scale up each Flink deployment:
kubectl scale deployment <flink_deployment> --replicas=<number_of_replicas> -n aiops
Where <flink_deployment> is the name of the Flink deployment and <number_of_replicas> is the number of replicas that the deployment is to be scaled up to.
c. Scale up the following StatefulSets in the specified order:
- aiops-ir-core-ncoprimary
- aiops-ir-core-ncobackup
- c-example-couchdbcluster-m
- aiops-installation-redis-server
- aimanager-ibm-minio
- zen-minio
Run the following command to scale up each StatefulSet:
kubectl scale statefulsets <statefulset> --replicas=<number_of_replicas> -n aiops
Where:
<statefulset> is the StatefulSet to be scaled up.
<number_of_replicas> is the number of replicas that the StatefulSet is to be scaled up to.
4. Scale up the operator deployments.
kubectl scale deployment -l olm.owner.kind=ClusterServiceVersion -n aiops --replicas=1
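To confirm that the operators come back, you can rerun the check from step 4.1; each deployment should now report READY 1/1:
kubectl get deployment -n aiops -l olm.owner.kind=ClusterServiceVersion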
8. Validate the installation
Note: After a complete cluster restart, it might take approximately an hour for the installation to start running again.
Run the describe command:
kubectl describe installations.orchestrator.aiops.ibm.com -n aiops
Review the ComponentStatus fields to confirm that all components are marked as Ready and the phase is Running.
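Because the installation can take up to an hour to return to a running state, you might prefer to poll rather than rerun the describe command by hand. A minimal sketch; the 60-second interval, and the assumption of a single Installation resource in the namespace, are illustrative:
#!/bin/bash
# Poll the Installation resource until its phase reports Running.
while true; do
  phase=$(kubectl get installations.orchestrator.aiops.ibm.com -n aiops \
    -o jsonpath='{.items[0].status.phase}')
  echo "$(date) installation phase: ${phase:-unknown}"
  [ "${phase}" = "Running" ] && break
  sleep 60
done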