I wonder if DB2 applies any compression by default, without the table being created with either row or value compression?
Here is one thing which puzzles me. I'm trying to do an estimate (database sizing) based on some of our tables that are expected to grow the most. These tables are constantly inserted into (every 5 minutes or more often), but data is also deleted periodically (e.g., every 2 weeks, 1 month, or 3 months); both the insert and delete rates are factored into my formula.
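To illustrate the kind of arithmetic my formula does (the rates below are made up for illustration, not our real ones):

    rows retained = insert rate x retention period
                  = 288 rows/day (one row every 5 min) x 90 days = ~26,000 rows
    table size    = rows retained x average row size
                  = ~26,000 x 200 bytes = ~5 MB (before page and index overhead)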
Using either:
- NEW - IBM Data Studio client (it shows the average row size, e.g. 200 bytes)
- OLD - DB2 Control Center (it shows the average row size, and it can also show the table size on disk)
I could roughly determine the size of my entire DB. But the simple calculation of "average row size" x "number of rows" for one of the largest tables does not match up with the size the DB2 Control Center is returning.
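For reference, here is how that calculation can be cross-checked against the physical size recorded in the catalog. This is just a sketch: MYSCHEMA and MYTABLE are placeholders, statistics must be current (RUNSTATS) for CARD and AVGROWSIZE to mean anything, and SYSIBMADM.ADMINTABINFO reports sizes in KB:

    -- estimated data size (avg row size x cardinality) vs. physical size
    SELECT t.tabname,
           t.card,
           t.avgrowsize,
           (t.card * t.avgrowsize) / 1024 AS est_data_kb,
           a.data_object_p_size           AS phys_data_kb,
           a.index_object_p_size          AS phys_index_kb
    FROM   syscat.tables t
    JOIN   sysibmadm.admintabinfo a
           ON  a.tabschema = t.tabschema
           AND a.tabname   = t.tabname
    WHERE  t.tabschema = 'MYSCHEMA'
    AND    t.tabname   = 'MYTABLE'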
On the other hand, using get_dbsize_info(), e.g.
db2 => call get_dbsize_info(?,?,?,0)
the size comes out much, much smaller: about 20% of my calculation. Based on my calculation (worst-case scenario) the DB size should be around 500 GB, while get_dbsize_info() reports only 120 GB.
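(As I understand it, the last parameter is the refresh window in minutes, and 0 forces an immediate refresh of the cached size values.) To see what get_dbsize_info() is actually summing up, the table space usage can be checked directly with something like the following; again a sketch, using the 9.7-era admin view (on 10.1, MON_GET_TABLESPACE would be the preferred route):

    -- used vs. allocated space per table space, in KB
    SELECT tbsp_name,
           tbsp_used_size_kb,
           tbsp_total_size_kb,
           tbsp_utilization_percent
    FROM   sysibmadm.tbsp_utilization
    ORDER BY tbsp_used_size_kb DESC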
So what can be wrong? My first thought was that there might be some compression going on by default (especially for the many zero-value inserts: value compression?), but I checked with our application DB admin and the tables are not created with any compression.
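One way to double-check that (rather than relying on memory) is to ask the catalog directly. A sketch; MYSCHEMA is a placeholder, and the ROWCOMPMODE column only exists on 10.1, so drop it on the 9.7 systems:

    -- COMPRESSION: N = none, V = value, R = row, B = both
    -- ROWCOMPMODE (10.1): S = static, A = adaptive, blank = none
    SELECT tabname, compression, rowcompmode
    FROM   syscat.tables
    WHERE  tabschema = 'MYSCHEMA'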
Is the Data Studio client unreliable?
Is the DB2 Control Center unreliable?
Am I miscalculating something in my formula so badly?
NOTE: We are currently using DB2 9.7 FP4 (+ newer FP levels) and DB2 10.1 FP2 (+ newer FP levels).
Any feedback is greatly appreciated! Thanks.