APAR status
Closed as fixed if next.
Error description
Several operations in Big SQL trigger the generation of scratch Hive directories under the HDFS path /tmp/hive/bigsql, but the directories are not deleted at the end of the operations. Some examples of operations that cause this are:
- Catalog synchronization using HCAT_SYNC_OBJECTS
- ALTER TABLE ... ADD PARTITION on a Hadoop table
Normally this has no effect other than leaving the directories in HDFS. However, if the number of directories reaches the limit set in HDFS for dfs.namenode.fs-limits.max-directory-items (1048576 by default), any action requiring a scratch directory will fail with the error:
The directory item limit of /tmp/hive/bigsql is exceeded: limit=1048576 items=1048576
At that point the directories under /tmp/hive/bigsql must be deleted manually to restore the functionality of Big SQL. This problem may be seen starting from version 4.2.5.0 of Big SQL.
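Whether the limit is close to being reached can be checked with standard HDFS shell commands. The following is a minimal sketch, assuming the default scratch path /tmp/hive/bigsql:

  # Show how many immediate children exist under the scratch path;
  # the first line of the -ls output reads "Found N items".
  hdfs dfs -ls /tmp/hive/bigsql | head -1

  # Show the configured per-directory item limit (1048576 by default).
  hdfs getconf -confKey dfs.namenode.fs-limits.max-directory-items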
Local fix
Schedule a cron job to remove old scratch directories under /tmp/hive/bigsql, as in the sketch below.
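A minimal sketch of such a cron job follows. The script name, schedule, and 7-day retention period are illustrative assumptions; verify that the retention period is longer than any in-flight Big SQL operation before scheduling it.

  #!/bin/sh
  # cleanup_bigsql_scratch.sh (hypothetical helper): remove Big SQL scratch
  # directories under /tmp/hive/bigsql not modified in RETENTION_DAYS days.
  # Run as a user with permission to delete under /tmp/hive/bigsql.
  RETENTION_DAYS=7
  CUTOFF=$(date -d "${RETENTION_DAYS} days ago" '+%Y-%m-%d')

  # In 'hdfs dfs -ls' output, column 6 is the modification date (YYYY-MM-DD)
  # and column 8 is the path; YYYY-MM-DD dates compare correctly as strings.
  # The awk filter also skips the "Found N items" header line (empty $6).
  hdfs dfs -ls /tmp/hive/bigsql 2>/dev/null \
    | awk -v cutoff="$CUTOFF" '$6 != "" && $6 < cutoff {print $8}' \
    | xargs -r -n 20 hdfs dfs -rm -r -skipTrash

The script could then be scheduled with a crontab entry such as:
  0 2 * * * /usr/local/bin/cleanup_bigsql_scratch.sh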
Problem summary
The problem is fixed in IBM Db2 Big SQL version 6.0.0.0.
Problem conclusion
Temporary fix
Comments
APAR Information
APAR number
PH03850
Reported component name
IBM BIG SQL
Reported component ID
5737E7400
Reported release
503
Status
CLOSED FIN
PE
NoPE
HIPER
NoHIPER
Special Attention
NoSpecatt / Xsystem
Submitted date
2018-10-09
Closed date
2019-03-27
Last modified date
2019-03-27
APAR is sysrouted FROM one or more of the following:
APAR is sysrouted TO one or more of the following:
Fix information
Applicable component levels
[{"Business Unit":{"code":"BU059","label":"IBM Software w\/o TPS"},"Product":{"code":"SSCRJT","label":"IBM Db2 Big SQL"},"Component":"","ARM Category":[],"Platform":[{"code":"PF025","label":"Platform Independent"}],"Version":"503","Edition":"","Line of Business":{"code":"LOB10","label":"Data and AI"}}]
Document Information
Modified date:
08 April 2021