Investigating problems that are encountered during interactions with an object store (Db2 Big SQL)
You might encounter unexpected behavior when Db2® Big SQL is interacting with an object store.
Symptoms
Db2 Big SQL does not interact with the object store as expected; for example, attempting to create a table fails with a timeout from the object store.
Resolving the problem
To resolve such problems, complete the following steps:
- Review any Db2 Big SQL errors by using the SYSHADOOP.LOG_ENTRY table function.
- Perform some basic investigations by using the HDFS S3a Connector, through which Db2 Big SQL interacts with the object store.
- Log in to your OpenShift® cluster as a project administrator:
oc login <OpenShift_URL>:<port>
- Change to the project where the Cloud Pak for Data control plane is installed:
oc project ${PROJECT_CPD_INST_OPERANDS}
- Identify the Db2 Big SQL instance ID:
oc get cm -l component=db2bigsql -o custom-columns="Instance Id:{.data.instance_id},Instance Name:{.data.instance_name},Created:{.metadata.creationTimestamp}"
- Get the name of the Db2 Big SQL head pod:
head_pod=$(oc get pod -l app=bigsql-<instance_id>,name=dashmpp-head-0 --no-headers=true -o=custom-columns=NAME:.metadata.name)
- To determine whether the connection to the object store is working, run the following command to list the files in the object store bucket (in these examples, testceph):
oc exec -i $head_pod -- sudo su - db2inst1 -c 'hdfs dfs -ls s3a://testceph/*'
- To investigate the SSL handshake with the object store, run the following command. This can be particularly useful when you are investigating connections to an SSL-enabled on-premises object store.
oc exec -i $head_pod -- sudo su - db2inst1 -c 'export HADOOP_OPTS="-Djavax.net.debug=ssl:handshake:verbose"; hdfs dfs -ls s3a://testceph/*'
- To investigate the entire network connection with the object store, run the following command:
oc exec -i $head_pod -- sudo su - db2inst1 -c 'export HADOOP_OPTS="-Djavax.net.debug=all"; hdfs dfs -ls s3a://testceph/*'
For more information, see Enabling low-level logging.
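The diagnostic steps above can be strung together as a short shell sketch. This is an illustration, not part of the product: the guard for a missing oc CLI, the choice of the first instance ID from the config maps (head -n 1), and the testceph bucket name are all assumptions carried over from the examples; adapt them to your environment.

```shell
#!/bin/sh
# Sketch: basic S3A connectivity checks for a Db2 Big SQL instance.
# Assumes you are already logged in to the OpenShift cluster and that
# PROJECT_CPD_INST_OPERANDS names the Cloud Pak for Data project.

if command -v oc >/dev/null 2>&1; then
  # Change to the project where the Cloud Pak for Data control plane is installed.
  oc project "${PROJECT_CPD_INST_OPERANDS}"

  # Take the first Db2 Big SQL instance ID from the db2bigsql config maps
  # (assumption: a single instance; otherwise pick the ID you need).
  instance_id=$(oc get cm -l component=db2bigsql --no-headers=true \
    -o custom-columns="ID:{.data.instance_id}" | head -n 1)

  # Locate the head pod for that instance.
  head_pod=$(oc get pod -l "app=bigsql-${instance_id},name=dashmpp-head-0" \
    --no-headers=true -o custom-columns=NAME:.metadata.name)

  # Basic connectivity check: list the bucket through the S3A connector.
  oc exec -i "$head_pod" -- sudo su - db2inst1 -c 'hdfs dfs -ls s3a://testceph/*'

  # If the listing fails, rerun with SSL handshake tracing enabled.
  oc exec -i "$head_pod" -- sudo su - db2inst1 -c \
    'export HADOOP_OPTS="-Djavax.net.debug=ssl:handshake:verbose"; hdfs dfs -ls s3a://testceph/*'
else
  echo "oc CLI not found; log in to the cluster from a workstation with oc and rerun" >&2
fi
```

Running the listing and the handshake trace back to back makes it easy to compare a clean run against one with javax.net.debug output when you suspect a certificate or TLS problem.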