Manually shutting down Guardium Data Security Center
Shut down the Guardium Data Security Center pods to stop the Guardium Data Security Center service.
Before you begin
Ensure that you are working in the Guardium Data Security Center OpenShift namespace. If you are not, switch to it by issuing the oc project command. For example:
oc project <guardium_data_security_center_namespace>
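To confirm which project is currently active, you can run oc project with no arguments (the exact output wording can vary by oc client version):
oc project
The command reports the active project, for example: Using project "<guardium_data_security_center_namespace>" on server "<cluster_API_URL>".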
This task requires the use of Guardium Data Security Center support scripts. To learn how to access these scripts, see this topic.
Procedure
- Halt all incoming data streams.
- Suspend all cron jobs by issuing this command:
oc patch `oc get cronjobs -n <guardium_data_security_center_namespace> -oname` -p '{"spec" : {"suspend" : true }}'
where <guardium_data_security_center_namespace> is the Guardium Data Security Center OpenShift namespace that you created when preparing your environment.
The output is similar to the following example:
cronjob.batch/staging-ibm-insights-sequencer-datamart-cleanup patched
cronjob.batch/staging-ibm-insights-sequencer-reports-cleanup patched
cronjob.batch/staging-tenant-fetcher-ae patched
cronjob.batch/staging-tenant-fetcher-ap patched
cronjob.batch/staging-tenant-fetcher-dr patched
cronjob.batch/staging-tenant-fetcher-emaild patched
cronjob.batch/staging-tenant-fetcher-emailp patched
cronjob.batch/staging-tenant-fetcher-groupb patched
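To double-check that all of the cron jobs are now suspended, you can list them together with their suspend flag (a convenience check; the custom-columns option is standard oc behavior and the column headings here are illustrative):
oc get cronjobs -n <guardium_data_security_center_namespace> -o custom-columns=NAME:.metadata.name,SUSPENDED:.spec.suspend
Every cron job in the list should show true in the SUSPENDED column.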
- Verify that the Kafka system is drained by following the steps in this topic.
- To confirm that Guardium Data Security Center is not currently writing to the database, use the startorstopGuardiumDSCs.sh support script to stop the Guardium Data Security Center services:
./scripts/startorstopGuardiumDSCs.sh stop
The output is similar to the following example:
deployment.extensions/staging-analytics-events scaled
deployment.extensions/staging-analytics-extract scaled
deployment.extensions/staging-apigateway scaled
deployment.extensions/staging-assets scaled
deployment.extensions/staging-audit scaled
deployment.extensions/staging-configuration scaled
deployment.extensions/staging-data-retention scaled
deployment.extensions/staging-datamart-processor scaled
deployment.extensions/staging-db2-store scaled
deployment.extensions/staging-fetch scaled
deployment.extensions/staging-group-builder scaled
deployment.extensions/staging-guardium-agent-cert-generator scaled
deployment.extensions/staging-guardium-connector scaled
deployment.extensions/staging-health-collector scaled
deployment.extensions/staging-insights scaled
deployment.extensions/staging-jumpbox scaled
deployment.extensions/staging-mini-snif scaled
deployment.extensions/staging-notifications scaled
deployment.extensions/staging-outlier2 scaled
deployment.extensions/staging-outlier2-aggregation scaled
deployment.extensions/staging-pa-alert scaled
deployment.extensions/staging-pa-core scaled
deployment.extensions/staging-parquet-store scaled
deployment.extensions/staging-pipeline-config scaled
deployment.extensions/staging-recommendation scaled
deployment.extensions/staging-reports scaled
deployment.extensions/staging-reports-runner scaled
deployment.extensions/staging-risk-engine scaled
deployment.extensions/staging-risk-register scaled
deployment.extensions/staging-risk-threats scaled
deployment.extensions/staging-scheduler scaled
deployment.extensions/staging-ssh-service scaled
deployment.extensions/staging-streams scaled
deployment.extensions/staging-tenant-user scaled
deployment.extensions/staging-ticketing scaled
Successfully Stopped Guardium Insights microservices
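You can also confirm that the microservices are stopped by checking that their deployments report zero ready replicas (a quick sanity check; deployment names depend on your release prefix):
oc get deployments -n <guardium_data_security_center_namespace>
Each deployment that the script stopped should show 0/0 in the READY column.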
- Shut down the Db2® instance.
- Exec into the Db2 Warehouse pod by running the following command:
oc exec -ti staging-ibm-db2u-db2u-0 -- /bin/bash
- Temporarily disable the built-in High Availability (HA):
sudo wvcli system disable -m "Disable HA before Db2 maintenance"
- Become the Db2 instance owner:
su - ${DB2INSTANCE}
- Stop the database with the db2stop command.
- To verify that all Db2 interprocess communications are cleaned, run the following command:
ipclean
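If db2stop reports that applications are still connected, you can check for and force off the remaining connections first. This is a hedged sketch that uses standard Db2 commands, run inside the pod as the Db2 instance owner; adapt it to your environment:
db2 list applications
db2 force application all
db2stop
ipclean
Note that db2 force application all is asynchronous, so you might need to retry db2stop after a short wait.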
- Pause the db2u pod by temporarily relabeling its worker node and then deleting the pod:
- To determine the ICP4Data (database-db2wh) worker node, run the following command:
oc get nodes -Licp4data
This command returns a list of nodes similar to this example, where you locate the node with the database-db2wh label:
NAME                    STATUS   ROLES                                     AGE   VERSION   ICP4DATA
master0.myenv.ibm.com   Ready    master                                    26d   v1.16.2
master1.myenv.ibm.com   Ready    master                                    26d   v1.16.2
master2.myenv.ibm.com   Ready    master                                    26d   v1.16.2
worker0.myenv.ibm.com   Ready    cp-management,cp-master,cp-proxy,worker   26d   v1.16.2
worker1.myenv.ibm.com   Ready    worker                                    26d   v1.16.2
worker2.myenv.ibm.com   Ready    worker                                    25d   v1.16.2   database-db2wh
In this example, worker2.myenv.ibm.com is the database-db2wh node.
Note: Make note of the node that you change (for example, worker2.myenv.ibm.com) because you need to change the label back when you restart Guardium Data Security Center.
- Temporarily change the label by running the following command:
oc edit node <database-db2wh_node>
where <database-db2wh_node> is the ICP4Data node (for example, worker2.myenv.ibm.com).
Scroll to labels and then to icp4data, and change the value to any temporary value.
- Delete the db2u pod by issuing these commands:
oc delete pod staging-ibm-db2u-db2u-0
oc delete pod staging-ibm-db2u-etcd-0 (repeat for etcd pods 0 through 2)
oc delete pod staging-ibm-db2u-db2u-ldap-6bd6b58ccd-p2957
oc delete pod staging-ibm-db2u-db2u-tools-7d7dcfdd8f-lfxrg
After you run these commands, if you run the oc get pods command, the db2u pod is listed in Pending state.
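As an alternative to changing the label interactively with oc edit, you can overwrite it from the command line and then confirm that the db2u pod stays in Pending state. This sketch assumes the example node name from above and an arbitrary temporary label value:
oc label node worker2.myenv.ibm.com icp4data=db2wh-paused --overwrite
oc get pods | grep staging-ibm-db2u-db2u-0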
- To scale down the Kafka
and Zookeeper pods, scale down the replicas in the events
operator:
- Switch to the ibm-common-services namespace:
oc project ibm-common-services
- To find the events operator deployment, issue this command:
oc get deployments
In the returned output, locate the operator, for example:
ibm-events-operator-v3.7.1 1/1 1 1 112d
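If many deployments are listed, you can narrow the output with a filter (a convenience sketch; the version suffix in the operator name varies by installation):
oc get deployments -n ibm-common-services | grep ibm-events-operator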
- Edit the deployment so that the operator is no longer running by issuing this command:
oc edit deployment ibm-events-operator-v3.7.1
- Scroll to spec.replicas and set it to 0 (zero).
Locate the following part of the file:
uid: 0481226f-52ea-4d39-aee0-bea37ab3fab6
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 1
  selector:
and set replicas: 1 to replicas: 0.
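As an alternative to editing the deployment interactively, you can scale it down with a single command (oc scale works on deployments as well as stateful sets):
oc scale deployment ibm-events-operator-v3.7.1 --replicas=0 -n ibm-common-services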
. - Switch back to the Guardium Data Security Center
OpenShift namespace:
oc project <guardium_data_security_center_namespace>
- Retrieve the stateful set (STS) list and then delete the Zookeeper and Kafka stateful sets:
oc get sts
oc delete sts staging-zookeeper
oc delete sts staging-kafka
- To verify that the Kafka pods are shut down, run the oc get sts command and verify that the Zookeeper and Kafka stateful sets are gone. In addition, issue the oc get pods command and verify that the staging-kafka-0 and staging-zookeeper-0 pods are gone.
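A quick way to run both checks is to filter the output for the Kafka and Zookeeper names (a convenience sketch; both commands should return nothing once the shutdown is complete):
oc get sts | grep -E 'kafka|zookeeper'
oc get pods | grep -E 'kafka|zookeeper'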
- Determine all current replicas by issuing this command:
oc get sts -lapp.kubernetes.io/instance=<guardium_data_security_center_namespace>
The output is similar to the following example:
c-staging-db2-db2u                        0/1   7h49m
c-staging-db2-etcd                        0/1   7h49m
c-staging-redis-m                         3/3   7h54m
c-staging-redis-s                         3/3   7h53m
staging-mongodb                           1/1   7h54m
stagingvva6nkxhz6jgylyqydhxkv-mini-snif   0/0   7h28m
- Determine the stateful sets:
oc get sts
The output is similar to the following example:
NAME                                      READY   AGE
c-staging-db2-db2u                        0/1     7h49m
c-staging-db2-etcd                        0/1     7h49m
c-staging-redis-m                         3/3     7h54m
c-staging-redis-s                         3/3     7h53m
staging-mongodb                           1/1     7h54m
stagingvva6nkxhz6jgylyqydhxkv-mini-snif   0/0     7h28m
- Scale down all replicas except the db2u stateful set:
- To see all current replicas in your Guardium Data Security Center OpenShift namespace, run the following command:
oc get sts -lrelease=<guardium_data_security_center_namespace>
The output is similar to the following example:
bitnami-zookeeper            3/3   122m
staging-ibm-db2u-db2u        0/1   122m
staging-ibm-db2u-etcd        2/3   122m
staging-ibm-redis-sentinel   3/3   122m
staging-ibm-redis-server     2/2   122m
staging-kafka                3/3   122m
staging-mongodb-arbiter      1/1   122m
staging-mongodb-primary      1/1   122m
staging-mongodb-secondary    1/1   122m
For example, staging-ibm-db2u-db2u has 1 replica, and staging-ibm-db2u-etcd has 3 replicas.
Important: Make note of the results of the command. Even though you scale down the number of replicas to 0 (zero), when you restart Guardium Data Security Center, you must scale the replicas back to the original numbers that you see in this output.
- Scale down each stateful set (except staging-ibm-db2u-db2u and staging-ibm-db2u-etcd) individually by running this command:
oc scale sts <statefulset> --replicas=0
where <statefulset> is each of the preceding stateful sets. Using the preceding example, run the following commands:
oc scale sts bitnami-zookeeper --replicas=0
oc scale sts staging-ibm-redis-sentinel --replicas=0
oc scale sts staging-ibm-redis-server --replicas=0
oc scale sts staging-kafka --replicas=0
oc scale sts staging-mongodb-arbiter --replicas=0
oc scale sts staging-mongodb-primary --replicas=0
oc scale sts staging-mongodb-secondary --replicas=0
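Because you must restore the original replica counts when you restart Guardium Data Security Center, it can help to save them to a file and to script the scale-down. This is a hedged convenience sketch rather than part of the documented procedure; it assumes the same release label as above and excludes the db2u stateful sets by name:
# Record the current replica counts for the restart procedure.
oc get sts -lrelease=<guardium_data_security_center_namespace> -o custom-columns=NAME:.metadata.name,REPLICAS:.spec.replicas > sts-replicas-before-shutdown.txt
# Scale down every stateful set except the db2u and etcd ones.
for sts in $(oc get sts -lrelease=<guardium_data_security_center_namespace> -o name | grep -Ev 'db2u-db2u|db2u-etcd'); do
  oc scale "$sts" --replicas=0
done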
Results
Guardium Data Security Center is shut down. To restart, follow the instructions in Manually restarting Guardium Data Security Center.
What to do next
To verify that no Guardium Data Security Center pods are running, run the following command again:
oc get sts -lrelease=<guardium_data_security_center_namespace>
A successful output shows no pods in a Running state, and the db2u pods are listed in Pending state.
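For example, to list any pods that are still in Running state, you can use a field selector (a hedged convenience check; pods that are expected to stay up, such as operator pods, might still appear):
oc get pods -n <guardium_data_security_center_namespace> --field-selector=status.phase=Running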