SELECT or INSERT fails when run concurrently with DROP PARTITION

A SELECT or INSERT statement might fail with SQLCODE=-5105, SQLSTATE=58040 when it runs concurrently with a DROP PARTITION statement on the same table.

For INSERT, the scheduler might fail to load metadata for files that the concurrent DDL is deleting.

For SELECT, the scheduler might fail to load metadata for those files, or the readers might fail to scan them.

Symptoms

You might see the following type of error in the bigsql-sched.log on the scheduler host:
The statement failed because a Big SQL component encountered an error. 
Component receiving the error: "SCHEDULER". Component returning the error: "FRONT-END". 
Log entry identifier: "[SCL-0-zqml3421a]".. SQLCODE=-5105, SQLSTATE=58040
along with a failure in the metadata load operation:
ERROR 
com.ibm.biginsights.bigsql.scheduler.server.cache.DescriptorTableCache [pool-1-thread-8793] : 
IncompleteTable at cause level1 for schemaName=<SCHEMA_NAME> tableName=<TABLE_NAME>.
com.thirdparty.cimp.catalog.TableLoadingException: Failed to load metadata for table: <TABLE_NAME>
You might see the following type of error in the bigsql.log on the worker hosts:
The statement failed because a Big SQL component encountered an error. 
Component receiving the error: "BigSQL IO". Component returning the error: "UNKNOWN". 
Log entry identifier: "[BSL-1-15k27t217]".  SQLSTATE=58040
along with a File does not exist error:
ERROR 
com.ibm.biginsights.bigsql.dfsrw.reader.DfsBaseReader [Master-1-S:12.1001.4.0.0.-28151] : [BSL-1-15k27t217] 
Exception raised by Reader at node: 1 Scan ID: S:12.1001.4.0.0.-28151 Table: <TABLE_NAME> 
Spark: false VORC: false VPQ: true VAVRO: false VTEXT: false VRCFILE: false VANALYZE: false
Exception Label: UNMAPPED(java.io.FileNotFoundException: File does not exist: <FILE_PATH>

Causes

The DROP PARTITION statement is passed through to Hive, which uses its own internal locking. As a result, the statement does not block for in-flight DML statements, such as SELECT and INSERT, that might depend on the affected data.

Resolving the problem

Avoid running the DROP PARTITION statement against tables that have ongoing SELECT or INSERT operations.

If a SELECT or INSERT statement fails because of a concurrent DROP PARTITION operation, retry the failed statement.
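Because the failure is transient, applications can retry automatically. The following sketch is one way to do that; it is not part of Big SQL. It assumes a caller-supplied run_statement callable (for example, one that executes the statement through a JDBC or ODBC cursor) that raises an exception whose message contains the SQLCODE/SQLSTATE on failure.

```python
import re
import time

def run_with_retry(run_statement, max_attempts=3, delay_seconds=1.0):
    """Retry a statement that fails with SQLCODE=-5105 / SQLSTATE=58040,
    which can indicate a transient conflict with a concurrent DROP PARTITION.

    run_statement: hypothetical callable that executes the SQL statement and
    returns its result, raising an exception on failure.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return run_statement()
        except Exception as exc:
            # Retry only the specific transient error described above.
            transient = re.search(r"SQLCODE=-5105|SQLSTATE=58040", str(exc))
            if transient is None or attempt == max_attempts:
                raise  # a different error, or retries exhausted
            time.sleep(delay_seconds)  # back off before the next attempt
```

A bounded number of attempts with a short delay is usually enough, because the conflicting DROP PARTITION completes quickly; unbounded retries would mask persistent failures.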