Investigating problems encountered in Db2 Big SQL during interactions with an object store

You might encounter unexpected behavior when Db2 Big SQL is interacting with an object store.

Symptoms

Db2 Big SQL does not interact with the object store as expected; for example, attempting to create a table fails with a timeout from the object store.

Resolving the problem

To resolve such problems, complete the following steps:

  1. Review any Db2 Big SQL errors by using the SYSHADOOP.LOG_ENTRY table function.
  2. Perform some basic investigations by using the HDFS S3A connector, through which Db2 Big SQL interacts with the object store.
    1. Log in to your OpenShift® cluster as a project administrator:
      oc login <OpenShift_URL>:<port>
    2. Change to the project where the Cloud Pak for Data control plane is installed:
      oc project ${PROJECT_CPD_INST_OPERANDS}
    3. Identify the Db2 Big SQL instance ID:
      oc get cm -l component=db2bigsql -o custom-columns="Instance Id:{.data.instance_id},Instance Name:{.data.instance_name},Created:{.metadata.creationTimestamp}"
    4. Get the name of the Db2 Big SQL head pod:
      head_pod=$(oc get pod -l app=bigsql-<instance_id>,name=dashmpp-head-0 --no-headers=true -o=custom-columns=NAME:.metadata.name)
    5. To determine whether the connection to the object store is working, run the following command to list the files in the object store bucket (named testceph in these examples):
      oc exec -i $head_pod -- sudo su - db2inst1 -c 'hdfs dfs -ls s3a://testceph/*'
    6. To investigate the SSL handshake with the object store, run the following command. This is particularly useful when you are investigating a connection to an SSL-enabled on-premises object store.
      oc exec -i $head_pod -- sudo su - db2inst1 -c 'export HADOOP_OPTS="-Djavax.net.debug=ssl:handshake:verbose"; hdfs dfs -ls s3a://testceph/*'
    7. To investigate the entire network connection with the object store, run the following command:
      oc exec -i $head_pod -- sudo su - db2inst1 -c 'export HADOOP_OPTS="-Djavax.net.debug=all"; hdfs dfs -ls s3a://testceph/*'
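For reviewing errors with the SYSHADOOP.LOG_ENTRY table function (step 1 above), the following is a hypothetical sketch run from a db2inst1 shell on the head pod. The database name (BIGSQL) and the empty argument list passed to SYSHADOOP.LOG_ENTRY are assumptions; check the SYSHADOOP.LOG_ENTRY reference for your release for the exact signature and any filtering options.

```shell
# Illustrative sketch of step 1; run from a db2inst1 shell on the head pod.
review_bigsql_log() {
    # The database name (BIGSQL) and the empty argument list are
    # assumptions; the table function's signature is release-dependent.
    db2 "CONNECT TO BIGSQL"
    db2 "SELECT * FROM TABLE(SYSHADOOP.LOG_ENTRY()) AS LOGS"
}
```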
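Taken together, steps 4 through 6 above can be sketched as one helper function. This assumes you are already logged in to the OpenShift cluster and have changed to the correct project; the instance ID and bucket name are passed as arguments (testceph in these examples).

```shell
#!/bin/sh
# Sketch of steps 4-6 above. Assumes an active oc login in the correct
# project; the instance ID and bucket name are supplied by the caller.

check_bigsql_object_store() {
    instance_id="$1"    # from step 3 (oc get cm -l component=db2bigsql ...)
    bucket="$2"         # object store bucket name, for example testceph

    # Step 4: resolve the name of the Db2 Big SQL head pod.
    head_pod=$(oc get pod -l app=bigsql-"$instance_id",name=dashmpp-head-0 \
        --no-headers=true -o=custom-columns=NAME:.metadata.name)

    # Step 5: basic connectivity check -- list the bucket contents.
    oc exec -i "$head_pod" -- sudo su - db2inst1 -c \
        "hdfs dfs -ls s3a://$bucket/*"

    # Step 6: the same listing with SSL handshake tracing enabled.
    oc exec -i "$head_pod" -- sudo su - db2inst1 -c \
        "export HADOOP_OPTS='-Djavax.net.debug=ssl:handshake:verbose'; hdfs dfs -ls s3a://$bucket/*"
}

# Example invocation:
# check_bigsql_object_store <instance_id> testceph
```

If the plain listing in step 5 succeeds but operations such as table creation still time out, the traced rerun from step 6 (or step 7 with -Djavax.net.debug=all) can help narrow down where the connection stalls.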

For more information, see Enabling low-level logging.