Recently, I encountered an interesting case where a customer was complaining of long compile times, sometimes as long as 35 minutes!
Generally, when I start looking at such an issue, the first thing that comes to mind is that it's probably some very complicated query and the optimizer is taking its time evaluating plans. But 35 minutes is too long even for the optimizer to evaluate all the plans available to it, and as it turned out, the query was a very simple DELETE on a table with a single WHERE condition. Since the issue was reproducible, I asked the customer to collect a db2trc while the compile was running. The steps were:
1) db2 connect to <dbname>
2) db2 set current explain mode explain
3) db2trc on -i 1g -t
4) db2 -tvf query.sql
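
For completeness, the full collection sequence, including dumping and formatting the trace afterwards, looks roughly like this (the trace file names are just placeholders; adjust the buffer size to your system):

```
db2 connect to <dbname>
db2 set current explain mode explain
db2trc on -i 1g -t
db2 -tvf query.sql
db2trc dmp trc.dmp
db2trc off
db2trc flw trc.dmp trc.flw
db2trc fmt trc.dmp trc.fmt
```

The formatted (fmt) and flow (flw) files are what you actually read: the flow file in particular shows the function call sequence with timings, which is where the time sink below showed up.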
The trace revealed an interesting thing: the optimizer was spending a lot of time fabricating statistics for each partition of the table. On further investigation it turned out that the table was range partitioned and had 3800 partitions! Nearly all of those partitions were empty, and the optimizer was fabricating statistics for each of the "empty" data partitions too. For each partition, DB2 took approximately 300-400 ms to get the statistics, since it had to read the data from disk every time. That added up nicely to the total compile time of the query.
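
If you suspect the same issue on your system, a quick way to estimate how many of a table's data partitions are empty is to check the per-partition cardinality in the catalog. This is a sketch assuming statistics have been collected with RUNSTATS (CARD is -1 when statistics are missing); the schema and table names are placeholders:

```
db2 "SELECT COUNT(*) AS EMPTY_PARTS
     FROM SYSCAT.DATAPARTITIONS
     WHERE TABSCHEMA = 'MYSCHEMA'
       AND TABNAME   = 'MYTABLE'
       AND CARD      = 0"
```

Comparing that count against the total number of rows in SYSCAT.DATAPARTITIONS for the table tells you what fraction of the partitions are dead weight at compile time.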
The customer then re-architected their data model to reduce the number of empty partitions to a minimum, and the problem went away.
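
If a full re-architecture isn't feasible right away, one way to get rid of an individual empty partition is to detach it into a throwaway table and then drop that table. A sketch with placeholder partition and table names (note that in recent DB2 versions the detach completes asynchronously, so the subsequent drop may have to wait for it to finish):

```
db2 "ALTER TABLE MYSCHEMA.MYTABLE DETACH PARTITION PART_EMPTY INTO TMP_DETACHED"
db2 "DROP TABLE TMP_DETACHED"
```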
So, I guess the moral of the story is: you can certainly have a lot of data partitions on a table, but make sure the number of empty partitions is kept to a minimum so that queries against the table compile quickly.