agburke
We have enhanced storage request processing for distributed threads to avoid a bottleneck that can occur in DDF communication buffer storage. 04E abends due to virtual storage constraints could be seen in V9 and V10 in some customer environments. This APAR has been marked HIPER, and it can also improve DDF processing when a rush of distributed threads enters the system at the same time.
· PM75473 – DDF threads have slow response time, and bottleneck due to 64-bit storage requests
agburke
This APAR concerns customers who partition table spaces on a timestamp column. When one such customer moved to DB2 10, a query lost the ability to prune partitions on a partitioned table space and its response time increased dramatically. In this case the application was comparing the timestamp column to a host variable with TIMESTAMP(6) precision. The temporary fix is to use PLANMGMT to fall back to the previous access path.
· PM81692 – performance degradation for table partitioned on timestamp
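As a hypothetical illustration of the problem pattern (the table, column, and limit values here are invented, not from the APAR text), the affected shape is a range-partitioned table with a TIMESTAMP partitioning column and a predicate comparing that column to a timestamp host variable:

```sql
-- Hypothetical example: table range-partitioned on a TIMESTAMP column
CREATE TABLE EVENT_LOG
  (EVENT_TS   TIMESTAMP NOT NULL,
   EVENT_DATA VARCHAR(100))
  PARTITION BY RANGE (EVENT_TS)
   (PARTITION 1 ENDING AT ('2013-01-31-23.59.59.999999'),
    PARTITION 2 ENDING AT ('2013-02-28-23.59.59.999999'));

-- Predicate comparing the partitioning column to a TIMESTAMP(6)
-- host variable; under the defect, DB2 10 stopped pruning
-- partitions for queries of this shape:
SELECT EVENT_DATA
  FROM EVENT_LOG
 WHERE EVENT_TS = ?;
```

Until the fix is applied, a REBIND with a SWITCH to a prior package copy (under PLANMGMT) is the circumvention described above.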
This APAR deals with table inserts experiencing longer than expected space search times. This can occur when the clustering index does not provide an adequate position for the insert within the table.
· PM75921 – long insert times due to inadequate search for free space in the clustering index
agburke
Since DB2 V9, the ACCESS DB MODE(OPEN) command has let customers prime buffer pools with objects that will be opened by, for instance, the first application execution of the day. Previously this command performed poorly because it was single threaded. In the next version of DB2 this processing is done in parallel, and we have retrofitted the improvement back to DB2 10.
· PM80779 – ACCESS DB performance improvement
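For reference, a typical invocation looks like the following (the database name DBPRIME is invented for illustration; SPACENAM(*) primes every table space and index space in the database):

```sql
-- Hypothetical example: prime buffer pools for all objects in
-- database DBPRIME before the morning workload (DB2 command,
-- issued from the console or through DSN):
-ACCESS DATABASE(DBPRIME) SPACENAM(*) MODE(OPEN)
```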
agburke
There have been several customers on DB2 10 who have taken partial dumps of DB2 even with MAXSPACE set to 16-32 GB. Much of this has to do with the increased use of 64-bit addressing in DB2 10 and the way z/OS treats the address space to be dumped. The shared storage pools above the 2 GB bar are large but sparsely populated, and thus should not take up much room in the dump. DB2 Level 2 has seen much of this storage being dumped inadvertently. These APARs deal with extra storage being dumped in some cases, and not enough being dumped in others.
· OA39596 – excessive HVSHARE storage included in DUMPs when it is not needed
· OA40015 – SVCDUMP missing high virtual user region storage
timmzzz
When running more than one DB2 for z/OS subsystem on a single LPAR, there is a danger that a "runaway" DB2 subsystem (for example, one with a storage leak) takes out the whole LPAR. Because a single DB2 subsystem could thus cause a domino effect of outages, the REALSTORAGE_MAX subsystem parameter was introduced. In DB2 V9 this was the hidden ZPARM SPRMRSMX, which with PM24723/PK18354 becomes opaque. With DB2 10 you now have REALSTORAGE_MAX as a regular subsystem parameter.
As mentioned above, it is recommended to set REALSTORAGE_MAX when more than one DB2 subsystem resides on a single LPAR. Current best practice is to set this parameter to 1.5 to 2 times the normal DB2 subsystem storage usage. If REALSTORAGE_MAX (DB2 10) or SPRMRSMX (DB2 9) is set too low, the DB2 subsystem might get terminated before a real storage problem actually exists. If you set it too high, the LPAR might be gone before the parameter is even considered.
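As a worked sizing sketch under the 1.5x-2x guideline (the 16 GB figure is invented for illustration, and the placement in DSN6SPRM reflects our understanding of the DB2 10 install job, not the APAR text):

```sql
-- Hypothetical sizing example: a subsystem that normally uses
-- about 16 GB of real storage.
--   16 GB x 1.5 = 24 GB        16 GB x 2 = 32 GB
-- The parameter is specified in GB; 0, the default, means no limit.
-- e.g. in the DSN6SPRM macro of the DSNTIJUZ job:
--   REALSTORAGE_MAX=32
```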
In addition, new DB2 10 messages go along with the actions DB2 takes as the limit is approached.
timmzzz
Customer X experienced partition-by-growth (PBG) table space size increases with APPEND YES, with the data sparsely distributed across the PBG partitions. A subsequent online REORG redistributed the data into the first 3 partitions. However, after the SWITCH phase, concurrent SQL would still append data to the last partition, leaving many partitions empty in between.
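A minimal sketch of the objects involved (all names are invented; the point is the combination of a PBG table space with a table defined APPEND YES):

```sql
-- Hypothetical sketch: a PBG table space (MAXPARTITIONS makes it
-- partition-by-growth) and a table defined with APPEND YES
CREATE TABLESPACE TSPBG IN DBPBG
  MAXPARTITIONS 50;

CREATE TABLE HIST_TAB
  (C1 INTEGER, C2 VARCHAR(200))
  IN DBPBG.TSPBG
  APPEND YES;

-- After an online REORG consolidates the rows into the first
-- partitions, concurrent inserts under the defect still went to
-- the last partition, leaving the intermediate partitions empty:
INSERT INTO HIST_TAB VALUES (1, 'appended at the end');
```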
timmzzz
As of DB2 10 NFM, the Real-Time Statistics logic has been extended to cope with mass delete operations more accurately. Before DB2 10 NFM, the MASSDELETE counters have been incremented after a mass delete operation.
A multi-table table space scenario is an exception. Because the Real-Time Statistics information is kept at page set level, DB2 does not know how many rows are associated with the table the mass delete was operated on. In this case, the SYSI
However, an APAR will be opened to be consistent with SYSI
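To see the effect, you can inspect the Real-Time Statistics for the affected table space after a mass delete (a DELETE with no WHERE clause, or a DROP TABLE in a segmented table space). The database and table space names below are invented, and the column list reflects our understanding of the DB2 10 catalog:

```sql
-- Hypothetical check of the Real-Time Statistics after a mass delete
SELECT DBNAME, NAME, TOTALROWS, REORGMASSDELETE
  FROM SYSIBM.SYSTABLESPACESTATS
 WHERE DBNAME = 'MYDB'
   AND NAME   = 'MYTS';
```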
PM81189: MSGDSNT217I ON REBIND PACKAGE SWITCH AND NO ROW IN SYSIBM.SYSPACKCOPY IN V10 NFM FOR PACKAGE THAT USES CGTT
timmzzz
Customer X experienced problems after migrating to DB2 10 NFM. Selected packages were rebound right after the migration. PLANMGMT EXTENDED, the default in V10 NFM, should allow a return to the prior access path if needed.
However, for some packages the old access path could not be reverted even under PLANMGMT EXTENDED.
The reason behind this is created global temporary tables (CGTTs). They have a dependency on SYSPKAGE. During ENFM, when SYSPKAGE is dropped, all packages containing CGTTs are invalidated.
When running a REBIND in DB2 10 NFM, REBIND will return an error message and no previous copy is stored.
Please refer to APAR PM81189 for further details on this problem.
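A hypothetical illustration of the pattern (the table and package names are invented): a package whose statements reference a CGTT, and the fallback attempt that fails because no prior copy exists in SYSIBM.SYSPACKCOPY:

```sql
-- Hypothetical: a created global temporary table referenced
-- by the package's SQL
CREATE GLOBAL TEMPORARY TABLE SESSION_WORK
  (K INTEGER, V VARCHAR(50));

-- After the ENFM-time invalidation, an attempt to fall back to
-- the prior copy fails with DSNT217I, because no row exists in
-- SYSIBM.SYSPACKCOPY for the package:
REBIND PACKAGE(COLL1.PKG1) SWITCH(PREVIOUS)
```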
GZJ
If you are planning a migration to DB2 10, make sure you read this whitepaper by IBM Distinguished Engineer John Campbell. You have to log in using your IBM ID; if you don't have one, you'll need to register. This 28-page whitepaper is based on solid customer experience and gives sound, practical advice on migration that you should be aware of. The advice on maintenance is not a 'nice to do' item but is essential for a good experience.
GZJ
Planning on using DB2 10 statement level access plan hints? Then you need to read about the new SET_PLAN_HINT stored procedure. This stored procedure is a "common SQL API stored procedure" that validates and deploys optimization hints for SQL statements. Because of the volume of information in this area, the web page linked to above is actually an anchor point for a series of Information Center topics which have been updated to support this new functionality.
GZJ
Check out this handy link for access to key DB2 for z/OS information resources. The information here is useful to DB2 professionals of all levels of experience, from novice to old hand.
GZJ
If you are migrating from DB2 Version 9 or DB2 Version 8 to DB2 10, then you need to check out Info APAR II14702. This APAR provides guidance on avoiding known issues both when moving to Conversion Mode (CM) and when entering Enabling New Function Mode (ENFM).
You can read this APAR here: II14702: AVOIDING MIGRATION ISSUES FOR FASTER MIGRATION HINTS & TIPS TO V10
Note that this APAR is in INTRAN status and as such is likely to be updated as new information becomes available and as new issues are encountered. For that reason, this blog entry does not contain the APAR text. Instead, we strongly recommend that you read the APAR on a regular basis to ensure that you don't miss any important updates.
We further recommend that you track and remain alert to any HIPER maintenance relating to migration and to the ENFM process. The use of Enhanced HOLDDATA is advisable.
In particular we'd like to draw your attention to the advice to perform ENFM in maintenance mode if at all possible, and if you are data sharing to shut down all members apart from the one in maintenance mode where you plan to run ENFM. If this is not possible, then we recommend that you run ENFM at as quiet a time as possible.
flodubois
Here is the story:
Now, what follows is NOT meant to sound like a sales pitch, but the correct technical solution is to add more zIIPs. It would be prudent to have the zIIPs run in the 30-50% CPU busy range on average (peaks would obviously be higher). zIIPs should be thought of as assist processors and they are not intended to be run as hard as GCPs. This recommendation is not specific to DB2 10, but with DB2 10, it has become even more critical. This is the option that Customer X decided to take - they have now refreshed their processor technology and increased the ratio of zIIP engines to GCP engines. Consequently the original elapsed time performance problems have gone away.
KevinHarrison
There are some recent REORG issues that you should be aware of, but they do not affect directory or catalog REORGs.
PM69637 - Lost data on REORG if table has OBID=1. This defect applies to V9 also.
PM62449 - Lost data on REORG if table has OBID=1 in a segmented tablespace. V10 only.
PM73000 - Incorrect rows discarded with REORG DISCARD based on boolean logic conditions. V10 only.
PM68133 - REORG of an RRF PBG with growth of a new partition during the REORG and with zparm SPRMRRF set to DISABLE.
PM69073 - Same scenario as PM68133 but in addition requires that the REORG be materializing a pending alter.
PM61976 - REORG loads data into wrong partition if using SORTDATA NO (which is not the default). Caused by PE APAR PM44475.
PM66874 - Not specifically a REORG issue, but could occur during extend processing for REORG of LOB data.
PM63324 - REORG of multiple partitions of a compressed PBG tablespace can result in rows being compressed incorrectly.
KevinHarrison
If you want to REORG a LOB tablespace, you can simply REORG that LOB tablespace on its own. Alternatively, you can run a REORG of the base tablespace using AUX YES. This will work fine, but if you're doing this against a compressed SPT01, make sure you do not also specify KEEPDICTIONARY until you have the fix for PM67696 applied.
Also make sure you have PM66874 applied when running the REORG.
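The AUX YES flavor of the REORG looks like this (database and tablespace names invented for illustration; AUX YES makes the utility reorganize the associated LOB tablespaces along with the base):

```sql
-- Hypothetical sketch: reorganize the base tablespace and its
-- associated LOB tablespaces in one pass
REORG TABLESPACE DBLOB.TSBASE
  SHRLEVEL CHANGE
  AUX YES
```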
Before you go ahead with the REORGs, we discussed that you would validate that there isn't any dormant corruption that REORG could trip over.
To do that, I recommend you run the following:
CHECK LOB SHRLEVEL CHANGE
CHECK INDEX SHRLEVEL CHANGE on the aux index
CHECK DATA SHRLEVEL CHANGE on the base tablespace using SCOPE AUXONLY AUXERROR REPORT.
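The three checks above might look like the following as utility control statements (the database, tablespace, and index names are invented placeholders):

```sql
-- Hypothetical control statements for the pre-REORG validation
CHECK LOB TABLESPACE DBLOB.TSLOB SHRLEVEL CHANGE

CHECK INDEX (MYSCHEMA.AUXIX) SHRLEVEL CHANGE

CHECK DATA TABLESPACE DBLOB.TSBASE
  SHRLEVEL CHANGE
  SCOPE AUXONLY AUXERROR REPORT
```

With SHRLEVEL CHANGE, all three checks can run while applications continue to access the data.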