Enabling database auditing for Db2 Big SQL

You can enable Db2®-level auditing for Db2 Big SQL on Cloud Pak for Data.

Procedure

To enable database auditing, complete the following steps:
  1. Log in to your OpenShift® cluster as a project administrator:
    oc login OpenShift_URL:port
  2. Change to the project where the Cloud Pak for Data control plane is installed:
    oc project Project
  3. Identify the Db2 Big SQL instance ID:
    oc get cm -l component=db2bigsql -o custom-columns="Instance Id:{.data.instance_id},Instance Name:{.data.instance_name},Created:{.metadata.creationTimestamp}"
  4. Get the name of the Db2 Big SQL head pod:
    head_pod=$(oc get pod -l app=bigsql-<instance_id>,name=dashmpp-head-0 --no-headers=true -o=custom-columns=NAME:.metadata.name)
  5. Run the following command to create the audit directories and configure the audit data and archive paths:
    oc exec $head_pod -- sudo su - db2inst1 -c 'mkdir -p /mnt/audit/archive; db2audit configure datapath /mnt/audit archivepath /mnt/audit/archive'
  6. To complete the audit configuration, see Configuring and enabling auditing using the Db2 audit facility and complete that procedure from step 1c onwards.
  7. (Optional) To keep the persistent volume (PV) directory from filling up, you can copy the audit files from the PV directory to a connected object store. To do this, run the following commands, replacing bucketname with your actual bucket name:
    oc exec $head_pod -- sudo su - db2inst1 -c 'hdfs dfs -mkdir -p s3a://bucketname/audit/archive'
    oc exec $head_pod -- sudo su - db2inst1 -c 'hdfs dfs -cp /mnt/audit/archive/* s3a://bucketname/audit/archive'
    Note that hdfs dfs -cp copies the files rather than moving them; after you verify the copy, you can delete the copied files from the PV archive directory to reclaim space.
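Steps 2 through 5 above can be collected into a single script. The following is a minimal sketch that assumes you are already logged in to the cluster; the PROJECT and INSTANCE_ID values are illustrative placeholders, so replace them with your own project name and the instance ID returned in step 3:

```shell
#!/bin/bash
set -euo pipefail

# Illustrative values -- replace with your project and the instance ID from step 3.
PROJECT="zen"
INSTANCE_ID="1234567890123456"

# Change to the project where the Cloud Pak for Data control plane is installed.
oc project "$PROJECT"

# Get the name of the Db2 Big SQL head pod for this instance.
head_pod=$(oc get pod -l app=bigsql-"$INSTANCE_ID",name=dashmpp-head-0 \
  --no-headers=true -o=custom-columns=NAME:.metadata.name)

# Create the audit data and archive directories, then point db2audit at them.
oc exec "$head_pod" -- sudo su - db2inst1 -c \
  'mkdir -p /mnt/audit/archive; db2audit configure datapath /mnt/audit archivepath /mnt/audit/archive'
```

Because the script stops on the first failure (set -e), a typo in the project name or instance ID surfaces immediately rather than producing an empty $head_pod later.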
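For context on step 6, the referenced topic completes the configuration by defining what gets audited. One common database-level approach in Db2 is an audit policy; the following sketch shows the general shape of such a policy, run as db2inst1 inside the head pod. The policy name AUDITALL and the choice of all categories are illustrative assumptions, not the exact steps of the linked topic (BIGSQL is the default Db2 Big SQL database name):

```shell
# Connect to the Big SQL database.
db2 CONNECT TO BIGSQL

# Create a policy that audits all categories, recording both successes and failures.
db2 "CREATE AUDIT POLICY AUDITALL CATEGORIES ALL STATUS BOTH ERROR TYPE NORMAL"
db2 COMMIT

# Attach the policy to the database so activity is recorded.
db2 "AUDIT DATABASE USING POLICY AUDITALL"
db2 COMMIT
```

Follow the linked topic for the exact policy settings supported in your environment; narrower CATEGORIES clauses reduce the volume of audit records written to the data path configured in step 5.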