Limitations and known issues for Db2 Big SQL
The following limitations and known issues apply to Db2 Big SQL.
Unable to provision a Db2 Big SQL instance because the c-bigsql-<instanceid>-db2u-restore-morph job fails
Applies to: 4.6.4
When you provision a Db2 Big SQL service instance, the c-bigsql-<instanceid>-db2u-restore-morph job might fail repeatedly, and the Db2 Big SQL instance is not created.
Symptoms
- The c-bigsql-<instanceid>-db2u-restore-morph job keeps failing during Db2 Big SQL provisioning and no Db2 Big SQL instance is created.
- The standard output logs for the c-bigsql-<instanceid>-db2u-restore-morph pod show an error message similar to the following example:
  + echo 'Running db2start'
  Running db2start
  + [[ 0 -le 1 ]]
  + RC=0
  + db2start
  <timestamp> 0 0 SQL0901N The SQL statement or command failed because of a database system error. (Reason "".)
  SQL1032N No start database manager command was issued. SQLSTATE=57019
  + sleep 5
  ++ db2gcf -s
  ++ awk -F : '{print $2}'
  ++ sed 's/^ *//; s/ *$//; /^$/d'
  + db2state=Operable
  + [[ Operable == \A\v\a\i\l\a\b\l\e ]]
  + echo '* ERROR: Failed to start Db2 successfully'
  * ERROR: Failed to start Db2 successfully
  + db2_kill -all
  db2nkill: DB2 member 0 with PID 4294967295 does not exist.
  + rah ipclean -a
  Application ipclean: Removing all IPC resources for db2inst1(500)
  c-bigsql-<instanceid>-db2u-0.c-bigsql-<instanceid>-db2u-internal: ipclean -a completed ok
  + RC=1
  + (( n++ ))
  + [[ 1 -le 1 ]]
  + RC=0
  + db2start
  <timestamp> 0 0 SQL0901N The SQL statement or command failed because of a database system error. (Reason "".)
  SQL1032N No start database manager command was issued. SQLSTATE=57019
  + sleep 5
  ++ db2gcf -s
  ++ awk -F : '{print $2}'
  ++ sed 's/^ *//; s/ *$//; /^$/d'
  + db2state=Operable
  + [[ Operable == \A\v\a\i\l\a\b\l\e ]]
  + echo '* ERROR: Failed to start Db2 successfully'
  * ERROR: Failed to start Db2 successfully
  + db2_kill -all
  db2nkill: DB2 member 0 with PID 4294967295 does not exist.
  + rah ipclean -a
  Application ipclean: Removing all IPC resources for db2inst1(500)
  c-bigsql-<instanceid>-db2u-0.c-bigsql-<instanceid>-db2u-internal: ipclean -a completed ok
  + RC=1
  + (( n++ ))
  + [[ 2 -le 1 ]]
  + return 1
  + echo 'Failed to start Db2'
  Failed to start Db2
  + return 1
  + echo '* ERROR (Pre Restore Stage): Failed to successfully set registry variables. Failed at pre restore stage.'
  * ERROR (Pre Restore Stage): Failed to successfully set registry variables. Failed at pre restore stage.
  + exit 1
  + return 1
  + rc=1
  + log '' 'Completed /db2u/db2u_restore_morph.sh with RC code 1. Total Elapsed Time: 41'
  + fun=
  + msg='Completed /db2u/db2u_restore_morph.sh with RC code 1. Total Elapsed Time: 41'
  + LINE=53
  ++ date +%F.%T.%N
  + printf '%.23s @ Line(%-5.5d) @ Fun %32.32s: ' 2023-04-03.21:48:24.303556217 53 ''
  + echo Completed /db2u/db2u_restore_morph.sh with RC code 1. Total Elapsed Time: 41
  2023-04-03.21:48:24.303 @ Line(00053) @ Fun :
  + exit 1
  Completed /db2u/db2u_restore_morph.sh with RC code 1. Total Elapsed Time: 41
  command terminated with exit code 1
  + exit 1
- When you run the following command, the output shows Global instance memory (% or 4KB) (INSTANCE_MEMORY) = 80:
  oc exec -it c-bigsql-<instanceid>-db2u-0 -- bash -c "su - db2inst1 -c 'db2 get dbm cfg' | grep -i INSTANCE_MEMORY"
  Note: To get the Db2 Big SQL instance ID, run the following command (or see the sketch after this list for one way to capture the ID in a shell variable):
  oc get cm -l component=db2bigsql -o custom-columns="Instance Id:{.data.instance_id},Instance Name:{.data.instance_name},Created:{.metadata.creationTimestamp}"
- When you run the following command, only the Db2 Big SQL head and worker pods (c-bigsql-<instanceid>-db2u-x) and the failed c-bigsql-<instanceid>-restore-morph job pods are shown:
  oc get pods | grep -i c-bigsql-<instanceid>-db2u
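The <instanceid> placeholder appears in every command in this topic. As a convenience, you can capture the instance ID once in a shell variable before you continue. The following is a minimal sketch that assumes a single Db2 Big SQL instance in the current project:
# Capture the first (and, by assumption, only) Db2 Big SQL instance ID
INSTANCE_ID=$(oc get cm -l component=db2bigsql -o jsonpath='{.items[0].data.instance_id}')
# Reuse it in later commands, for example:
oc get pods | grep -i "c-bigsql-${INSTANCE_ID}-db2u"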
Workaround
To resolve the issue, decrease the Db2 instance memory and then re-create the restore-morph job:
1. Decrease the Db2 instance memory setting in the Db2 Big SQL head pod.
   oc rsh c-bigsql-<instanceid>-db2u-0 bash
   su - db2inst1
   db2 update dbm cfg using instance_memory 2000000
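   To confirm that the new setting was accepted, you can rerun the configuration check from the symptoms section (a sketch; depending on the Db2 state, the new value might not take effect until the next db2start):
   # From outside the pod, check the instance memory setting again
   oc exec -it c-bigsql-<instanceid>-db2u-0 -- bash -c "su - db2inst1 -c 'db2 get dbm cfg' | grep -i INSTANCE_MEMORY"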
2. Save a copy of the c-bigsql-<instanceid>-restore-morph job YAML file.
   oc get job c-bigsql-<instanceid>-restore-morph -o yaml | grep -v controller-uid > c-bigsql-<instanceid>-restore-morph.yaml
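   Optionally, you can check which project (namespace) the saved file references, because step 4 requires it to be ${PROJECT_CPD_INSTANCE}. A minimal check, assuming standard grep:
   # Show the namespace that is recorded in the saved job definition
   grep -m1 'namespace:' c-bigsql-<instanceid>-restore-morph.yaml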
3. Delete the existing c-bigsql-<instanceid>-restore-morph job from the OpenShift® cluster.
   oc delete job c-bigsql-<instanceid>-restore-morph
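   If you want to confirm that the job is gone before you re-create it, you can query it again; the command is expected to return a NotFound error (a quick sanity check, not required by the procedure):
   oc get job c-bigsql-<instanceid>-restore-morph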
4. Recreate the c-bigsql-<instanceid>-restore-morph job based on the YAML file from step 2.
   oc project ${PROJECT_CPD_INSTANCE}
   oc apply -f c-bigsql-<instanceid>-restore-morph.yaml
   Important: Make sure that the project (namespace) that is stored in the c-bigsql-<instanceid>-restore-morph.yaml file is $PROJECT_CPD_INSTANCE.
After step 4, the c-bigsql-<instanceid>-restore-morph job pod is created.
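You can watch the re-created job pod with a pod listing similar to the one in the symptoms section (a sketch; the broader grep pattern also matches the restore-morph pod):
# The restore-morph pod should appear and, if the fix worked, run to completion
oc get pods | grep -i c-bigsql-<instanceid>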
If the issue is resolved, the c-bigsql-<instanceid>-restore-morph job pod finishes successfully, and more Db2 Big SQL pods are created.
Unable to query some tables after upgrading Cloud Pak for Data 4.0.x to 4.5.x or 4.6.0
Applies to: 4.6.0
Fixed in: 4.6.2
Queries on tables that contain certain data types can fail. The bigsql.log file shows an error message similar to the following example:
com.thirdparty.cimp.catalog.TableLoadingException: The table definition statement failed because some
functionality was specified in the table definition that is not supported with the table type. Unsupported
functionality: "\"timestamp(6)\" DATATYPE".
This problem occurs with tables that contain the following data types (for one way to check a table's column types, see the sketch after this list):
- Columns where the data type is timestamp with a fixed precision. For example, timestamp(3).
- Columns where the data type is decimal, but the precision or the scale is not specified. For example, decimal or decimal(10).
- Columns that are a complex data type. For example, map, array, or struct.
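If you are not sure whether a particular table uses one of these data types, one way to check its column definitions is from the Db2 Big SQL head pod. The following is a sketch only; the schema and table names are placeholders, and it assumes that the instance database is named BIGSQL:
# Open a shell in the head pod and switch to the instance owner
oc rsh c-bigsql-<instanceid>-db2u-0 bash
su - db2inst1
# Connect and list the column names and data types for a table
db2 connect to bigsql
db2 "describe table myschema.mytable"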
This problem applies only to Db2 Big SQL instances in Cloud Pak for Data 4.0.x that are upgraded to Cloud Pak for Data 4.5.x or 4.6.0. Instances that are created in Cloud Pak for Data 4.5.x or upgraded to Cloud Pak for Data 4.6.2 or later are not affected.
For more information about the problem and how to resolve it, see the technote Unable to query some tables after upgrading Db2 Big SQL on Cloud Pak for Data 4.0.x to 4.5.x or 4.6.0.
INSERT operations on partitioned tables fail with an SQL5105N error
Applies to: 4.6.0
Fixed in: 4.6.2
INSERT operations on partitioned tables that contain a null value for a partitioning column might fail with an SQL5105N error. The detailed log entry shows an exception that is raised by the writer process.
To work around this problem, avoid inserting rows with null values in PARTITIONED BY columns.
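For example, if you populate a partitioned table from a staging table, one possible way to avoid null values in a PARTITIONED BY column is to substitute a placeholder value at insert time. The table and column names in the following sketch are hypothetical, and COALESCE is only one possible approach:
# sale_region is assumed to be the PARTITIONED BY column of sales_part
db2 "INSERT INTO sales_part (id, amount, sale_region)
     SELECT id, amount, COALESCE(sale_region, 'UNKNOWN')
     FROM staging_sales"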
Limitations
- HBase tables are not supported.
- Transactional tables are not supported.
- Tables that are created in Db2 Big SQL are not propagated to Apache Atlas if Atlas is installed on the remote Cloudera Data Platform (CDP) Private Cloud Base cluster.
- The LOAD command is not supported.