You can enable Db2-level auditing for Db2 Big SQL on IBM® Software Hub.
About this task
Best practice: You can run the commands in this task exactly as written if you set up environment variables. For instructions, see Setting up installation environment variables. Ensure that you source the environment variables before you run the commands in this task.
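For example, if you saved the variables in a script when you set up your environment (the file name cpd_vars.sh is illustrative; use whatever name you chose), source it in each new shell session:
# Load the installation environment variables into the current shell
source ./cpd_vars.sh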
Procedure
- Log in to Red Hat® OpenShift® Container Platform as an instance administrator:
${OC_LOGIN}
Remember: OC_LOGIN is an alias for the oc login command.
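If you have not defined the alias, the underlying command takes the following general shape (the server URL and credentials are placeholders):
oc login https://<cluster-api-url>:6443 -u <username> -p <password>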
- Change to the project where the IBM Software Hub control plane is installed:
oc project ${PROJECT_CPD_INST_OPERANDS}
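You can confirm which project is now in use; running oc project with no arguments prints the current project:
oc project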
- Identify the Db2 Big SQL instance ID:
oc get cm -l component=db2bigsql -o custom-columns="Instance Id:{.data.instance_id},Instance Name:{.data.instance_name},Created:{.metadata.creationTimestamp}"
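If you have a single Db2 Big SQL instance, you can also capture the ID directly in a shell variable for use in the next step (the variable name instance_id is illustrative):
instance_id=$(oc get cm -l component=db2bigsql -o jsonpath='{.items[0].data.instance_id}')
echo $instance_id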
- Get the name of the Db2 Big SQL head pod, replacing <instance_id> with the instance ID that you identified in the previous step:
head_pod=$(oc get pod -l app=bigsql-<instance_id>,name=dashmpp-head-0 --no-headers=true -o=custom-columns=NAME:.metadata.name)
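If you captured the ID in the instance_id variable shown earlier, you can substitute it into the label selector; either way, confirm that the variable resolved to a pod name:
head_pod=$(oc get pod -l app=bigsql-${instance_id},name=dashmpp-head-0 --no-headers=true -o=custom-columns=NAME:.metadata.name)
echo $head_pod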
- Enable auditing by running the following command:
oc exec $head_pod -- sudo su - db2inst1 -c "mkdir -p /mnt/blumeta0/bigsql/audit/archive && db2audit configure datapath /mnt/blumeta0/bigsql/audit archivepath /mnt/blumeta0/bigsql/audit/archive"
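To verify that the audit data and archive paths were applied, you can print the current configuration of the audit facility:
oc exec $head_pod -- sudo su - db2inst1 -c "db2audit describe"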
- To complete the configuration for auditing, see Configuring and enabling auditing using the Db2 audit facility and complete the procedure from step 1c onward.
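That procedure sets the audit scope and starts the facility. As a sketch only (the scope and status values here are examples; follow the linked topic for the exact steps), the commands take this general form:
# Example only: audit all event categories, recording both successes and failures
oc exec $head_pod -- sudo su - db2inst1 -c "db2audit configure scope all status both"
# Start the audit facility
oc exec $head_pod -- sudo su - db2inst1 -c "db2audit start"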
- Optional: To keep the PV directory from filling up, move the auditing files from the PV directory to a connected object store. Run the following commands, replacing bucketname with your actual bucket name:
oc exec $head_pod -- sudo su - db2inst1 -c 'hdfs dfs -mkdir -p s3a://bucketname/audit/archive'
oc exec $head_pod -- sudo su - db2inst1 -c 'hdfs dfs -cp file:///mnt/blumeta0/bigsql/audit/archive/* s3a://bucketname/audit/archive'
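After the copy completes, you can confirm that the files are in the bucket and then, as an optional cleanup that is an assumption rather than part of the documented procedure, remove the local copies to free space on the PV:
oc exec $head_pod -- sudo su - db2inst1 -c 'hdfs dfs -ls s3a://bucketname/audit/archive'
# Assumption: remove local archives only after verifying that they exist in the bucket
oc exec $head_pod -- sudo su - db2inst1 -c 'rm -f /mnt/blumeta0/bigsql/audit/archive/*'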