Michael_D. 1100004WAH Visits (3396)
Customers running DB2 10 NFM have noticed that table spaces associated with the DB2 directory database or catalog (DSNDB01 and SPT01) experience significant space growth resulting from BIND/REBIND operations, DDL, and utility activity (REORG). The only way customers could manage the issue was by frequently scheduling REORGs on the DB2 directory and catalog in DB2 10 NFM.
Updated list of APARs for excessive SPT01/DBD01 growth – Base and LOB tablespaces - including related Utility APARs in this area!
DB2 10 NFM Cat/Dir SPT01/DBD01 excessive growth
APAR PM66874 to resolve: LOB integrity abend during REORG of DBD01.
DB2 10 NFM Cat/Dir SPT01/DBD01 excessive growth related
APAR PM68842 to resolve: REORG abend. Broken aux index.
agburke 060001QPDN Visits (3334)
Workfiles have changed quite a bit from V8 -> V9. When we moved from DB2 V8 to V9 we combined the temporary database (DGTTs) and the workfile database, and began favoring 32k table spaces if the records were over 100 bytes in length. Customers faced issues with runaway DGTTs eating up valuable workfile table spaces and impeding other work.
So IBM introduced a zParm to reestablish a delineation between the two uses.
We then came out with a best practices informational APAR based on what customers had seen.
Customers migrating directly from V8 to DB2 10 were introduced to this zParm after the fact, and were not always ready for the implications.
As part of the best practices, we suggest that when going into V9 or V10 you size the 4k workfile buffer pool (BP7?) and the 32k workfile buffer pool (BP32k7?) equally in CM mode. You will also need to create more 32k workfile table spaces, since not only DGTTs will use these now. The Statistics report exposes the amount of space used by 4k and 32k tables for workfiles, as well as the number of times DB2 wanted a 32k table but was not able to get one.
Once you get a handle on how heavily the 32k tables are used, increase them. In the field a 75/25 split is usually seen: roughly 75% of the time 32k tables are favored, and hence about 75% of the workfile space should be allocated to 32k table spaces.
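As a rough back-of-the-envelope sketch of the 75/25 guideline above (all numbers are hypothetical, and the helper name is mine, not a DB2 interface):

```python
# Hypothetical sketch: split a total workfile space budget using the
# 75/25 field guideline described above (75% to 32k table spaces).

def split_workfile_space(total_mb, pct_32k=0.75):
    """Return (mb_for_32k, mb_for_4k) for a total workfile budget in MB."""
    mb_32k = int(total_mb * pct_32k)
    return mb_32k, total_mb - mb_32k

mb_32k, mb_4k = split_workfile_space(40_000)  # 40 GB total, made-up number
print(mb_32k, mb_4k)  # 30000 MB for 32k table spaces, 10000 MB for 4k
```

Your own split should of course come from the 4k/32k usage counters in the Statistics report, not from the default ratio.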
In V10, DGTTs favor partition-by-growth (PBG) table spaces first, then table spaces with a secondary quantity > 0, and lastly those with a secondary quantity of 0. So as a soft separation you might want to add some PBGs to the workfile database if DGTTs have been an issue.
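The preference order above can be sketched as a simple ranking (illustrative only, not DB2 internals; the table space names and attributes are made up):

```python
# Illustrative sketch: order workfile table spaces by the V10 DGTT
# preference described above -- PBG first, then SECQTY > 0, then SECQTY = 0.

def dgtt_preference(ts):
    """Lower rank = preferred. `ts` is a dict with 'pbg' and 'secqty'."""
    if ts["pbg"]:
        return 0
    return 1 if ts["secqty"] > 0 else 2

spaces = [
    {"name": "WF32K01", "pbg": False, "secqty": 0},
    {"name": "WFPBG01", "pbg": True,  "secqty": 720},
    {"name": "WF32K02", "pbg": False, "secqty": 720},
]
ordered = sorted(spaces, key=dgtt_preference)
print([ts["name"] for ts in ordered])  # ['WFPBG01', 'WF32K02', 'WF32K01']
```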
timmzzz 060000RTR7 Visits (3211)
Customer X experienced partition-by-growth (PBG) table space size increase with APPEND YES, with the data sparsely distributed across the PBG partitions. A subsequent ONLINE REORG redistributed the data into the first 3 partitions. However, after the SWITCH phase, concurrent SQL would still append data to the last partition, leaving many partitions empty in between.
GZJ 1100006WMT Visits (3210)
It is a common belief that DB2 10 and 11 for z/OS can only use 1MB pageable large pages, other than for buffers, if the CEC has Flash Express installed. For example, the text for APAR PM85944 implies just that. However, this is not true. The only requirement is that the CEC be SCM-capable (SCM = Storage-Class Memory); it does not matter whether Flash Express is actually installed or not. So if a customer is running on a zEC12 CEC without Flash Express, DB2 could request, and be given, a 1MB pageable large page residing in a 1MB large page frame. However, if that page needs to be paged out for some reason, at that point it will be broken down into 4KB page frames. To put it another way, pageable large pages are available on an SCM-capable CEC such as the zEC12, with the caveat that if Flash Express is not installed, any such pages that are paged out will be demoted to 4KB page frames. 1MB pageable large pages that are demoted to 4KB page frames as they are paged out will never be coalesced: that is, they will remain 4KB page frames for the remaining life of the IPL.
flodubois 270000K6H5 Visits (3125)
Below is a set of optimizer-related DB2 system parameters (ZPARMs) and their V10 default values. If the setting of any of these system parameters in your environment does NOT match the V10 default, please re-evaluate the setting before migrating to DB2 10 for z/OS. If you need special assistance from IBM, please open a problem record (PMR).
MACRO     ZPARM     V10 DEFAULT
DSN6SPRM  OPTIOWGT  ENABLE
DSN6SPRM  OPTIXIO   ON
DSN6SPRM  OPTXQB    ON
DSN6SPRM  STATCLUS  ENHANCED
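A quick way to spot deviations is to diff your settings against the defaults above. This is a hypothetical helper (the `current` dict would come from your own DSNTIJUZ or DSNWZP output; the sample values are made up):

```python
# Hypothetical sketch: flag optimizer ZPARMs whose current setting
# differs from the V10 default listed in the table above.

V10_DEFAULTS = {
    "OPTIOWGT": "ENABLE",
    "OPTIXIO": "ON",
    "OPTXQB": "ON",
    "STATCLUS": "ENHANCED",
}

def zparm_mismatches(current):
    """Return {zparm: (current_value, v10_default)} for settings that differ."""
    return {
        name: (current.get(name), default)
        for name, default in V10_DEFAULTS.items()
        if current.get(name) != default
    }

# Sample (made-up) environment with two deviations to re-evaluate:
current = {"OPTIOWGT": "DISABLE", "OPTIXIO": "ON",
           "OPTXQB": "ON", "STATCLUS": "STANDARD"}
print(zparm_mismatches(current))
# {'OPTIOWGT': ('DISABLE', 'ENABLE'), 'STATCLUS': ('STANDARD', 'ENHANCED')}
```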
Michael_D. 1100004WAH Visits (3070)
Higher CF CPU utilization can occur in a DB2 10 for z/OS data sharing environment during Delete_Name processing. Delete_Name requests/sec can be significantly higher in DB2 10 for z/OS than in DB2 9 for z/OS. In V10, when the pageset/partition becomes non-GBP-dependent, the Delete_Name process deletes only data entries, to avoid cross-invalidation processing at that time and to allow cleanup of the directory entries later when other pages are registered. The V9 Delete_Name process deleted both data and directory entries. In DB2 10 for z/OS, the Delete_Name requests/sec can be significantly higher because CFCC processing is not as efficient when only data entries are deleted.
PM51467: CF DELETE_NAME PERFORMANCE IN DB2 10 FOR Z/OS
agburke 060001QPDN Visits (3061)
CPU increase is always a customer concern, especially when it occurs across a release boundary. Many of our customers on DB2 10 have seen a CPU reduction in the DB2 address spaces, much of which can be attributed to the zIIP eligibility of asynchronous reads/writes. However, there have been some circumstances where the new monitoring or storage allocation behavior has affected CPU negatively.
Here are some APARs that address CPU degradation at the address space level.
PM56363 (HIPER): DIST TCB time increase due to remote SIGNON requ
agburke 060001QPDN Visits (3053)
Many customers these days are utilizing DASD mirroring solutions as well as HyperSwap technology to automate fail-over to an alternate site, or to local DASD hardware, in the event of a failure or disaster. z/OS APAR OA31707 was put out to aid during a fail-over by ensuring that any pages the system might need would not be paged out to AUX.
From OA31707: "During a Hyperswap, it is possible for the system to require page fault resolution via page devices that may be part of the scope of devices being recovered by the Hyperswap. If this occurs, it is possible that a page fault will not be able to be resolved leading to deadlock and Hyperswap failures."
The downside of this is a massive amount of page-fixed storage.
The purpose of this entry is to ensure customers are aware of the effect on real storage when this function is enabled, and can plan for it in advance. A system that is already running lean on REAL storage may see increased demand paging once this function is enabled, which can lead to DB2 entering DISCARD MODE (contraction) due to the REAL storage shortage.
If you have page-fixed your buffer pools, then the vast majority of the DBM1 private address space will never be paged to AUX either, so you could end up with a severe shortage of REAL storage on the LPAR.
You can issue the D XCF,COUPLE command to determine if the function is enabled.
Further important information about the protection provided by this APAR and the service it introduces can be found in the APAR documentation.
Two other APARs relate to the REAL storage growth seen in DB2 due to z/OS not reclaiming frames when CRITICAL PAGING was enabled.
KevinHarrison 1000005N52 Visits (3028)
Traditionally, customers submitted requirements and enhancement requests through their IBM contacts on their respective client teams. Now customers can manage and track their requests using a new process.
DB2 for z/OS Request for Enhancements (RFE) Community for Customer Requirements - Now Live!
The DB2 for z/OS Request for Enhancements (RFE) Community enables customers to directly submit, manage, and track their requirements online. Customers can additionally access requirements that others have submitted to vote on, comment on, and watch in a social media paradigm. It provides customers with greater accessibility to the requirements that are of interest to them. IBM's Rational, Tivoli, and WebSphere brands have already adopted RFE with positive customer feedback. All you need to get started is an IBM developerWorks IBM ID. Please use the DB2 for z/OS RFE to submit customer requirements going forward.
flodubois 270000K6H5 Visits (2975)
Migrating an SQL stored procedure from external to native is not as simple as a DROP/CREATE. You need to understand the release incompatibilities related to SQL stored procedures, examine your external SQL procedure source code, and make any necessary adjustments. This APAR can help you do that. It provides sample job DSNTEJ67 which initiates the process of converting source for an external SQL procedure into source for a native SQL procedure. REXX services, native SQL PL and the HOST(SQLPL) checkout precompiler are combined to extract, inspect, analyze and convert external SQL procedure source code. The appropriate set of native SQL procedure options are applied and a listing of the modified SQL procedure source code is produced.
Michael_D. 1100004WAH Visits (2969)
System z Integrated Information Processor (zIIP)
The IBM zIIP is available on all IBM zEnterprise, IBM System z10 and IBM System z9 servers. The zIIP can support many technologies, and can help implement, integrate, and optimize new workloads on System z.
DB2 z/OS zIIP related performance:
PM12256: DRDA PERFORMANCE IMPROVEMENT USING TCP/IP
OA35146: NEW FUNCTION - ALLOW NON-CLIENT PREEMPTABLE SRBS TO JOIN/LEAVE AN ENCLAVE AFTER IT HAS BEGUN – companion z/OS APAR for PM12256
DB2 z/OS informational APAR II14219 (DB2 z/OS ZIIP EXPLOITATION "SUPPORT USE" INFORMATION)
agburke 060001QPDN Visits (2894)
High-performance DBATs were introduced in DB2 10. In order to utilize this feature you must have the JCC packages (i.e. SYSNL200) bound with RELEASE(DEALLOCATE), as well as the -MODIFY DDF PKGREL(BNDOPT) option in place.
This allows distributed requests to benefit from not going through deallocation after each commit. Caution should be used when employing this option, as you would not want every distributed application coming in as RELEASE(DEALLOCATE) and using up all of the available DBATs. You can control this more granularly if you bind the dynamic JCC packages into an alternate collection and then allow specific applications to use it by specifying that collection in the CurrentPackageSet property.
APAR PI20352 was opened because increased DDF SRB time was sometimes seen when the thread was deallocated prior to 200 uses. To alleviate this, the code has been modified to allow the high-performance DBAT to be pooled if it has not reached the 200-use mark. The POOLINAC timeout value can be used to limit the time the DBAT remains in the pool.
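The post-PI20352 behavior described above can be sketched as a small decision function (illustrative only, not DB2 code; the POOLINAC value shown is just a sample):

```python
# Illustrative sketch of the pooling decision described above: with
# PI20352, a high-performance DBAT that has not reached the 200-use
# mark can be returned to the pool instead of being deallocated, and
# POOLINAC bounds how long it may then sit idle in the pool.

REUSE_LIMIT = 200

def on_thread_end(use_count):
    """Decide what happens to a high-performance DBAT at thread end."""
    return "pool" if use_count < REUSE_LIMIT else "deallocate"

def pooled_dbat_expired(idle_seconds, poolinac=120):
    """A pooled DBAT idle longer than POOLINAC seconds is terminated."""
    return idle_seconds > poolinac

print(on_thread_end(57))         # pool
print(on_thread_end(200))        # deallocate
print(pooled_dbat_expired(300))  # True
```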
KevinHarrison 1000005N52 Visits (2798)
There are some recent REORG issues that you should be aware of, but they do not affect directory or catalog REORGs.
PM69637 - Lost data on REORG if table has OBID=1. This defect applies to V9 also.
PM62449 - Lost data on REORG if table has OBID=1 in a segmented tablespace. V10 only.
PM73000 - Incorrect rows discarded with REORG DISCARD based on boolean logic conditions. V10 only.
PM68133 - REORG of an RRF PBG with growth of a new partition during the REORG and with zparm SPRMRRF set to DISABLE.
PM69073 - Same scenario as PM68133 but in addition requires that the REORG be materializing a pending alter.
PM61976 - REORG loads data into wrong partition if using SORTDATA NO (which is not the default). Caused by PE APAR PM44475.
PM66874 - Not specifically a REORG issue, but could occur during extend processing for REORG of LOB data.
PM63324 - REORG of multiple partitions of a compressed PBG tablespace can result in rows being compressed incorrectly.
GZJ 1100006WMT Visits (2793)
The draft of the redbook Optimizing DB2 Queries with IBM DB2 Analytics Accelerator for z/OS , SG248005, is available.
The 'blurb' says:
The IBM® DB2® Analytics Accelerator Version 2.1 for z/OS (also called DB2 Analytics Accelerator or Query Accelerator in this book and in DB2 for z/OS documentation) is a marriage of the System z® Quality of Service and Netezza® technology to accelerate complex queries in a DB2 for z/OS® highly secure and available environment. Superior performance and scalability with rapid appliance deployment provide an ideal solution for complex analysis.
This IBM Redbooks® publication provides a broad understanding of the IBM DB2 Analytics Accelerator architecture and its exploitation by documenting the steps for the installation of this solution in an existing DB2 10 for z/OS environment. We define a business analytics scenario, evaluate the potential benefits of the DB2 Analytics Accelerator appliance, describe the installation and integration steps with the DB2 environment, evaluate performance, and show the advantages to existing business intelligence processes.
If you want to review the draft, and provide feedback on this exciting new feature, now is your chance!