2 replies. Latest post 2013-05-15T20:40:25Z by suzanneb
mxt122 (1 post)

Pinned topic DB2 compression on z/OS

2011-05-03T13:36:51Z
Hi,
I am trying to find out what the industry tendency is in terms of data compression. In my organisation we have started converting old IDMS databases to DB2, and at the time compression in IDMS was used quite often, mainly to save disk (DASD) space. Now that DASD is less expensive these days, I was wondering whether compression should still be considered when converting from IDMS to DB2. I am trying to measure whether there are advantages for my organisation in using DB2 compression, taking into consideration compression/decompression of the data, I/O performance, and so on.

I know that the use of compression can depend on several factors, but the main question is whether DB2 compression is still widely used in the industry today. If you have any articles on the subject, please let me know.

Thanks
Marco
Updated on 2011-05-19T18:18:10Z by SystemAdmin
  • SystemAdmin (3105 posts)

    Re: DB2 compression on z/OS

    2011-05-19T18:18:10Z in response to mxt122
    Marco, ever since native compression was introduced in DB2 several releases ago, every installation I have worked at has used compression extensively. Not only DASD savings but also I/O savings are possible: retrieving a single (well-compressed) page gets you more rows of data than a non-compressed page does, and physical I/O is usually the biggest performance drag in DB2.

    Couple of notes, since you're looking at the topic:

    1) LOAD REPLACE and REORG are how you create compression dictionaries. Generally, REORG will produce a more optimized dictionary. If you're doing an initial load of data, use LOAD REPLACE; then REORG to reduce space at your leisure. If you populate the data by other methods (e.g. LOAD RESUME or INSERT), wait until you have several megabytes of data before you REORG (rule of thumb).
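
    A minimal sketch of both utility statements, with hypothetical names (MYSCHEMA.MYTABLE, MYDB.MYTS); leaving KEEPDICTIONARY off the REORG is what makes it build a fresh dictionary:

        -- Initial population; LOAD REPLACE builds a dictionary from the
        -- rows as they are loaded:
        LOAD DATA REPLACE INTO TABLE MYSCHEMA.MYTABLE

        -- A later REORG samples the rows already in place and generally
        -- produces a better-optimized dictionary:
        REORG TABLESPACE MYDB.MYTS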

    2) The longer the row, the better the likely compression result. Rows shorter than about 35 bytes do not, in my experience, yield good compression. ("Good" is defined as PAGESAVE on the partition being better than about 15%. That's a rule of thumb again; other analysts will use different numbers.)
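
    A sketch of a catalogue check for that rule of thumb, assuming RUNSTATS has been run recently (PAGESAVE in SYSIBM.SYSTABLEPART is only as current as the last RUNSTATS, and the 15% cutoff is just my rule of thumb above):

        -- Flag compressed partitions whose measured savings fall below
        -- the rule-of-thumb threshold; PAGESAVE is a percentage.
        SELECT DBNAME, TSNAME, PARTITION, PAGESAVE
          FROM SYSIBM.SYSTABLEPART
         WHERE COMPRESS = 'Y'
           AND PAGESAVE < 15;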

    3) A compression dictionary takes up about sixteen 4K pages in a 4K partition (one compression dictionary per nonpartitioned tablespace, one per partition if the tablespace is partitioned), which is 16 x 4 KB = 64 KB of overhead before any savings. So if your table isn't going to be bigger than 64K raw, don't compress; the dictionary alone would cost as much space as the data.

    4) Multi-table tablespaces generally should not be compressed. This type of tablespace usually contains a lot of small reference tables, not large ones; you wind up trying to decompress rows from tables which are not represented in the compression dictionary. If you need to enhance performance for such a tablespace, put it in a virtual bufferpool sized to hold all of its pages in memory, then page-fix the bufferpool (the PGFIX option of the -ALTER BUFFERPOOL DB2 command).
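
    For example (bufferpool name and size here are hypothetical, and PGFIX(YES) takes effect the next time the bufferpool is allocated):

        -ALTER BUFFERPOOL(BP8) VPSIZE(50000) PGFIX(YES)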

    That's all that comes to mind at the moment. Hope this helps.

    --Phil Sevetson
    • suzanneb (1 post)

      Re: DB2 compression on z/OS

      2013-05-15T20:40:25Z in response to SystemAdmin

      Hi,

      I was hoping there was a column in the DB2 catalogue identifying whether the compression dictionary has been built.

      We are trying to automate a maintenance process where we constantly use LOAD RESUME into a compressed partitioned TS, and we would like to determine whether a partition requires a REORG to build the compression dictionary shortly after the load.

      We are looking for a way to identify, through the catalogue information, whether a partition has a compression dictionary built. Is that possible, or only by using DSN1COMP?
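
      For instance, would an indirect check like this work, assuming RUNSTATS has run since the load? A nonzero PAGESAVE would imply rows were actually compressed, so a dictionary must exist; a zero value would be inconclusive. (Names below are hypothetical.)

          -- Indirect check only, not a documented dictionary flag:
          -- PAGESAVE > 0 after RUNSTATS implies a dictionary exists.
          SELECT PARTITION, COMPRESS, PAGESAVE
            FROM SYSIBM.SYSTABLEPART
           WHERE DBNAME = 'MYDB'
             AND TSNAME = 'MYTS';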

      Thanks for any info, Suzanne Bradley