Performance impact after increasing a VARCHAR column's size
Marcin_Baster
We have an issue where one of our larger tables has a column that is too short: some of the incoming data is up to 1024 characters. To fix this we would have to widen the column from 192 to 1024 via something like: ALTER TABLE MYTAB ALTER COLUMN MYCOL SET DATA TYPE VARCHAR(1024). The questions are:
1) Given that this table can be quite large (around 50 million rows), do you think increasing the size of this column will have any noticeable performance impact? I do not think this particular column is populated in many cases.
So there will be a performance impact. Imagine a simple situation: the database is stopped. Once you start it, the buffer pools are empty, so a SELECT on that table has to read the data from disk. DB2 stores data in pages (e.g., 16 KB) and reads them in extents (e.g., 32 pages per single I/O).
Before the ALTER, each extent holds a given number of rows (let's assume 100 rows per extent), so a full table scan has to read all the extents for that table. Once you alter the column, the space consumed by a single row can increase (say a single extent now holds only 80 rows). You then need more extents (pages) to store the same data, and the consequence is obvious: more time to read it.
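The read-cost argument above can be sketched with some back-of-the-envelope arithmetic. All numbers here are the illustrative assumptions from the text (100 vs. 80 rows per extent, 50 million rows), not measurements from a real system:

```python
TOTAL_ROWS = 50_000_000  # table size from the question

def extents_needed(rows_per_extent: int) -> int:
    # Ceiling division: a partially filled extent still costs a full I/O.
    return -(-TOTAL_ROWS // rows_per_extent)

before = extents_needed(100)  # ~100 rows/extent before the ALTER
after = extents_needed(80)    # ~80 rows/extent after rows grow

print(before, after)          # 500000 extents vs. 625000 extents
print(after / before)         # a full scan now reads 1.25x the extents
```

Under these assumptions a cold full scan reads 25% more extents, which is the "more time to read" effect described above.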
Side note 2:
While the above is in DB2 terms, here is the simple version: your table is currently a given size (say 100 GB); after the ALTER it may grow to 110 GB. So your question boils down to this: how much more time does my storage need to deliver the extra data?
2) If we alter the table column size, do we need to rebuild the indexes?
The recommendation here is to REORG the table along with its indexes. Note that a REORG of the indexes will effectively rebuild them. So you need to execute something like:
db2 reorg table ...
db2 reorg indexes all for table ...
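Putting the pieces together, a minimal sketch of the whole sequence looks like this. It reuses the MYTAB/MYCOL names from the question; your real schema, table, and column names will of course differ, and you should test this on a non-production copy first:

```
db2 "ALTER TABLE MYTAB ALTER COLUMN MYCOL SET DATA TYPE VARCHAR(1024)"
db2 "REORG TABLE MYTAB"
db2 "REORG INDEXES ALL FOR TABLE MYTAB"
```

On a 50-million-row table the REORG steps are the expensive part, so plan them for a maintenance window.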