Investigating problems that are encountered during interactions with an object store (Db2 Big SQL)
You might encounter unexpected behavior when Db2® Big SQL is interacting with an object store.
Symptoms
Db2 Big SQL does not interact with the object store as expected; for example, attempting to create a table fails with a timeout from the object store.
Resolving the problem
To resolve such problems, complete the following steps:
- Review any Db2 Big SQL errors by using the SYSHADOOP.LOG_ENTRY table function.
- Perform some basic investigations by using the HDFS S3a Connector, through which Db2 Big SQL interacts with the object store.
- Log in to your OpenShift® cluster as a project administrator:
oc login OpenShift_URL:port
- Change to the project where the Cloud Pak for Data control plane is installed:
oc project Project
- Create a cpd-cli profile. To manage Db2 Big SQL instances (by running the cpd-cli service-instance command), you must create a cpd-cli profile. The profile must be set up with the identity of a user that was granted access to the Db2 Big SQL instance. For more information about creating a cpd-cli profile, see Creating a profile to use the cpd-cli management commands.
- Identify the Db2 Big SQL instance ID:
./cpd-cli service-instance list --service-type bigsql --profile <profile-name>
- Get the name of the Db2 Big SQL head pod:
head_pod=$(oc get pod -l instance=<instanceid>,app=db2-bigsql,bigsql-node-role=head --no-headers=true -o=custom-columns=NAME:.metadata.name)
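The label selector can match no pod at all (for example, if the instance ID is wrong), which would make the later oc exec commands fail with a confusing error. A small guard, sketched here as our own addition (the helper name is hypothetical, not part of the product), makes that failure explicit:

```shell
# Hypothetical guard (our own sketch, not from the product docs):
# verify that the previous oc get pod command actually found a head pod.
require_head_pod() {
  if [ -z "$1" ]; then
    echo "ERROR: no Db2 Big SQL head pod found; check <instanceid> and the labels" >&2
    return 1
  fi
  echo "$1"
}
# Usage: head_pod=$(require_head_pod "$head_pod") || exit 1
```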
- To determine whether the connection to the object store is working, run the following command to list the files in the object store bucket (in these examples, testceph):
oc exec -i $head_pod -- bash -c 'source $HOME/.profile; hdfs dfs -ls s3a://testceph/*'
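If you want the outcome of that listing reported explicitly (a timeout or authentication failure otherwise surfaces only as Hadoop stack-trace output), the call can be wrapped in a small helper. This is our own sketch; the function name is hypothetical:

```shell
# Sketch (our own helper): run a listing command and report whether the
# object store connection worked, based on the command's exit status.
check_listing() {
  if "$@"; then
    echo "object store connection OK"
  else
    echo "object store listing failed with exit code $?" >&2
    return 1
  fi
}
# Usage:
# check_listing oc exec -i $head_pod -- bash -c 'source $HOME/.profile; hdfs dfs -ls s3a://testceph/*'
```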
- To investigate the SSL handshake with the object store, run the following command. This can be particularly useful when you are investigating connections to an SSL-enabled on-premises object store.
oc exec -i $head_pod -- bash -c 'source $HOME/.profile; export HADOOP_OPTS="-Djavax.net.debug=ssl:handshake:verbose"; hdfs dfs -ls s3a://testceph/*'
- To investigate the entire network connection with the object store, run the following command:
oc exec -i $head_pod -- bash -c 'source $HOME/.profile; export HADOOP_OPTS="-Djavax.net.debug=all"; hdfs dfs -ls s3a://testceph/*'
- To investigate general problems with interactions through the Apache s3a connector, run the following command:
oc exec -i $head_pod -- bash -c 'source $HOME/.profile; cp /etc/hadoop/conf/log4j.properties /tmp/log4j.s3a; echo "log4j.logger.org.apache.hadoop.fs.s3a=DEBUG" >> /tmp/log4j.s3a; export HADOOP_OPTS="-Dlog4j.configuration=file:/tmp/log4j.s3a"; hdfs dfs -ls s3a://testceph/*'
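That command builds a temporary log4j configuration inline (copy the pod's log4j.properties, then append a DEBUG logger for the s3a package). If you would rather inspect the generated file before pointing HADOOP_OPTS at it, the copy-and-append step can be factored into a helper. The function is our own sketch; the paths shown match the command above:

```shell
# Sketch: reproduce the temporary log4j config built by the command above,
# so it can be reviewed before HADOOP_OPTS references it.
make_s3a_log4j() {
  base="$1"   # e.g. /etc/hadoop/conf/log4j.properties (inside the pod)
  out="$2"    # e.g. /tmp/log4j.s3a
  cp "$base" "$out"
  echo "log4j.logger.org.apache.hadoop.fs.s3a=DEBUG" >> "$out"
}
```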
For more information, see Enabling low-level logging.
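The three debug variants above differ only in the HADOOP_OPTS value they export before running the same hdfs listing. One way to sketch that as a reusable helper (the mode names ssl, net, and s3a are our own shorthand, not product options):

```shell
# Map a debug mode to the HADOOP_OPTS value used in the steps above.
# Mode names (ssl, net, s3a) are our own shorthand, not product flags.
hadoop_debug_opts() {
  case "$1" in
    ssl) echo '-Djavax.net.debug=ssl:handshake:verbose' ;;
    net) echo '-Djavax.net.debug=all' ;;
    s3a) echo '-Dlog4j.configuration=file:/tmp/log4j.s3a' ;;
    *)   echo '' ;;
  esac
}
# Usage (assumes $head_pod and the testceph bucket from the steps above):
# opts=$(hadoop_debug_opts ssl)
# oc exec -i $head_pod -- bash -c "source \$HOME/.profile; export HADOOP_OPTS='$opts'; hdfs dfs -ls s3a://testceph/*"
```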