Limitations and known issues for Db2 Big SQL
The following limitations and known issues apply to Db2 Big SQL.
- Db2 Big SQL pods are unschedulable due to insufficient resources.
  - You can specify large values for the number of cores and memory in the Db2 Big SQL provisioning UI and save these values. The Db2 Big SQL UI does not show the unschedulable pods, but the OpenShift (OC) console does. To work around the problem, use `oc` or `k8s` commands to set acceptable resource values. For example:

    ```
    oc set resources deployment bigsql-<instance-id>-head --limits=cpu=4,memory=16Gi --requests=cpu=4,memory=16Gi
    oc set resources statefulset bigsql-<instance-id>-worker --limits=cpu=4,memory=16Gi --requests=cpu=4,memory=16Gi
    ```

  - Applies to: 7.1.1 and later
- Connecting your JSqsh client to the Db2 Big SQL server with JSON Web Token (JWT) authentication that uses `securityMechanism=15` and `pluginName=IBMIAMauth` is currently not supported.
  - A workaround is to configure JWT authentication by using `securityMechanism=19` and `accessTokenType=jwt`.
  - Applies to: 7.1.1 and later
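  - For illustration, the workaround properties can be combined into a Db2 JDBC connection URL along the following lines. This is a minimal sketch: the host, port, database name, token placeholder, and the `sslConnection=true` setting are assumptions for the example, not values from this documentation.

    ```
    jdbc:db2://<host>:<port>/<database>:securityMechanism=19;accessTokenType=jwt;accessToken=<your-jwt-token>;sslConnection=true;
    ```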
- HBase tables are not supported.
  - Applies to: 7.1.1 and later
- Transactional tables are not supported.
  - Applies to: 7.1.1 and later
- Tables that are created in Db2 Big SQL are not propagated to Apache Atlas if Atlas is installed on the remote Hortonworks Data Platform (HDP) or Cloudera Data Platform - Data Center (CDP-DC) cluster.
  - Applies to: 7.1.1 and later
- Some data types (such as TIMESTAMP) in some file formats (such as ORC, Parquet, and Avro) that are used by Db2 Big SQL might not be compatible with older versions of Hive.
  - Db2 Big SQL uses more recent versions of those file formats. Consequently, data that is written to HDP 2.6 clusters through Db2 Big SQL INSERT statements can be read by Db2 Big SQL, but might not be readable by Hive engines on target clusters.
  - Applies to: 7.1.1 and later
- The LOAD command is not supported.
  - Applies to: 7.1.1 and later