HELP WITH HANDLING THE RELEASE INCOMPATIBLE CHANGE FOR VARCHAR(DECIMAL), CAST(DECIMAL AS CHAR), CAST(DECIMAL AS VARCHAR)
Michael_D. 1100004WAH Visits (24803)
In Version 10, the formatting of decimal data changed for the VARCHAR function and for the CAST specification with decimal input and VARCHAR or CHAR output. There are two changes:
- Leading zeros are removed
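A minimal SQL sketch of the leading-zero change (the literal and lengths are illustrative):

```sql
-- DECIMAL(8,2) input cast to a character string
SELECT CAST(DEC('123.45', 8, 2) AS VARCHAR(10))
  FROM SYSIBM.SYSDUMMY1;
-- Prior to V10 the result carried leading zeros out to the full
-- precision (something like '000123.45'); in V10 it is '123.45'
```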
Michael_D. 1100004WAH Visits (13322)
PM56845: PROVIDE OPTION FOR OPTIMIZE FOR 1 ROW TO ALLOW SOME SORT ACCESS PATH
In all versions of DB2, the OPTIMIZE FOR 1 ROW clause requests that DB2 choose an access path that avoids a sort. In DB2 versions prior to 10, it was still possible to get an access path with a sort, even though such a path was strongly discouraged. In DB2 10, DB2 does not consider access paths with sorts at all and instead chooses the lowest-cost access path that does not require a sort.
This APAR provides an option to return to the previous OPTIMIZE FOR 1 ROW behavior. As such, it does not eliminate the risk of an inefficient access path being chosen with OPTIMIZE FOR 1 ROW when the most efficient access path requires a sort. However, it limits that exposure to what already existed prior to DB2 10.
For queries that need sorts, the recommended solution is to avoid coding the OPTIMIZE FOR 1 ROW clause. Without the OPTIMIZE FOR 1 ROW clause, DB2 will choose access paths based on cost and will not make an effort to avoid sorts.
Local workarounds:
- For queries that need sorts for efficient access, avoid coding the OPTIMIZE FOR 1 ROW clause.
- Change the application to code OPTIMIZE FOR 2 ROWS instead.
- Wait for APAR PM56845, now open, which provides an option for OPTIMIZE FOR 1 ROW to allow sort access paths.
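The second workaround can be sketched as follows (the table and predicates are illustrative):

```sql
-- Before: in DB2 10 this rejects every access path that sorts
SELECT ACCT_ID, BALANCE
  FROM ACCOUNTS
 WHERE BRANCH_ID = ?
 ORDER BY BALANCE DESC
 OPTIMIZE FOR 1 ROW;

-- After: still tells DB2 that few rows are needed, but lets it
-- cost-compare access paths that require a sort
SELECT ACCT_ID, BALANCE
  FROM ACCOUNTS
 WHERE BRANCH_ID = ?
 ORDER BY BALANCE DESC
 OPTIMIZE FOR 2 ROWS;
```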
Here is the story:
Now, what follows is NOT meant to sound like a sales pitch, but the correct technical solution is to add more zIIPs. It would be prudent to have the zIIPs run in the 30-50% CPU busy range on average (peaks would obviously be higher). zIIPs should be thought of as assist processors and they are not intended to be run as hard as GCPs. This recommendation is not specific to DB2 10, but with DB2 10, it has become even more critical. This is the option that Customer X decided to take - they have now refreshed their processor technology and increased the ratio of zIIP engines to GCP engines. Consequently the original elapsed time performance problems have gone away.
Michael_D. 1100004WAH Visits (6801)
timmzzz 060000RTR7 Visits (6234)
When running more than one DB2 for z/OS subsystem on a single LPAR, there is a danger that a "runaway" DB2 subsystem (for example, one with a storage leak) takes out the whole LPAR. Because that single DB2 subsystem could thereby cause a domino effect of outages, the REALSTORAGE_MAX subsystem parameter was introduced. This parameter existed as the hidden ZPARM SPRMRSMX in DB2 V9 and, with PM24723/PK18354, becomes an opaque (externally documented) parameter. With DB2 10 it is named REALSTORAGE_MAX.
As mentioned above, it is recommended to set REALSTORAGE_MAX if more than one DB2 subsystem resides on a single LPAR. The current best practice is to set this parameter to 1.5 to 2 times the normal DB2 subsystem storage usage. If REALSTORAGE_MAX (DB2 10), or SPRMRSMX in DB2 9, is set too low, the DB2 subsystem might get terminated before a real storage problem actually exists. If you set it too high, the LPAR might be gone before the parameter is even considered.
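As a sketch of the sizing rule (all values illustrative, and assuming REALSTORAGE_MAX is specified in GB on the DSN6SPRM macro in your installation job):

```
*  Normal DB2 subsystem real storage usage: ~20 GB
*  1.5x to 2x rule of thumb               : 30-40 GB ceiling
   DSN6SPRM REALSTORAGE_MAX=40        (value in GB; 0 = no limit)
```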
In addition, new DB2 10 messages accompany the related actions.
Michael_D. 1100004WAH Visits (5966)
PM24723: IFCID 225 REAL STORAGE STATISTICS ENHANCEMENTS IFCID225
DB2 APAR PM24723 is very important: it addresses real storage monitoring via a new extension to IFCID 225.
See z/OS APAR OA37821 and the corresponding DB2 APAR PM49816 for this issue
OA37821: NEW FUNCTION IARV64 REQUEST=COUNTPAGES SUPPORT FOR UNSERIALIZED PROCESSING.
Useful commands for monitoring the use of 1MB real storage page frames on z10 and z196:
DISPLAY BUFFERPOOL(BP1) SERVICE=4
The display provides output, via DSNB999I messages, showing how many 1MB page frames are being used
The z/OS command D VIRTSTOR,LFAREA shows the total LFAREA and its allocation, split across 4KB and 1MB frames, via the IAR019I message
DFSMS Striping was first introduced in 1993 and it was a success. Striping has served a dual purpose. The first purpose was to avoid hot spots on individual disks, and the second purpose was to increase sequential throughput for individual data streams. Since 1993, alternative techniques have been developed to avoid or dissipate a "hot spot". Some examples:
Consequently, DFSMS striping is perhaps not needed or is not the best technique for avoiding hot spots.
IBM DB2 for z/OS Development used to recommend striping because parallelism helped over slow channels, but channel speed is now so high that the disadvantage of doing more I/Os tends to outweigh any advantage of parallelism.
Just because one particular workload might benefit from log striping, does not mean that no workloads will suffer.
DB2 uses various data transfer sizes. In all current versions of DB2, dynamic prefetch reads at least 128K, and DB2 utilities read or write at least 256K. Depending on the buffer pool size, sequential prefetch may read up to 256K and utilities may read and write up to 512K. However, if a DB2 query, insert stream, or utility is CPU bound, then striping will not make it faster. This is a common situation, except where creating or restoring image copies is concerned.
Striping also will not help if any of the components in the data paths are saturated. These paths consist of channels, host adapters, and the bus in each control unit. Take for example the DB2 RECOVER utility. When you want to recover an individual table space, the paths are not saturated and striping will reduce the time to restore the table space. However, if you want to recover the entire system, there will be many parallel RECOVER jobs which will use parallelism, and the paths may get saturated, in which case striping may well not reduce the time to restore the system.
A category of I/O that can benefit from striping is list prefetch. Log apply makes extensive use of list prefetch. If a table space is striped across a single RAID rank, list prefetch may improve modestly if the devices consist of HDDs, and may show more improvement if the devices consist of SSDs. Heavy parallelism tends to reduce the striping benefits with HDDs, but striping continues to do well with SSDs under heavy amounts of parallelism.
To summarize the value of striping for table spaces, the most benefit may occur with list prefetch when using SSDs, and the applications that benefit the most will likely be long-running queries and utilities. Striping may help RECOVER, both to restore from image copies and then to apply log records. However, everyday workloads will not necessarily get any benefit from striping.
During normal OLTP operations, DB2 active logs are written, and are read only to archive the log records and for software replication. Most log writes are synchronous I/Os, and each I/O consists of a small number of 4K pages. Faster channels have increased log bandwidth. However, most OLTP workloads never push the log throughput beyond 10 MB/sec. I/O response times for online transactions gradually degrade as log buffers queue up waiting for the previous I/O to complete.
As technology has evolved, many customers have used remote DASD replication to protect themselves against disasters. Remote replication is often the cause of I/O performance problems. Since all log writes caused by OLTP are synchronous, log I/O is very susceptible to problems caused by remote DASD replication. Each I/O introduces a risk of another delay, and since striping causes more I/Os, striping increases the risk of a delay. Remote replication is not the only thing that causes stress on a control unit, but it is one of the biggest causes.
Striping the active log may help the performance of active log reads, since as of DB2 9 the I/O quantity for active log reads increased from 12 pages to 120 pages. However, given 4 Gbps and faster links, the risks of striping the active logs will likely outweigh the benefits - see below.
DFSMS striping of the DB2 active log is no longer a tuning option that DB2 for z/OS Development is actively testing and recommending. There have been 'hole in the recovery log stream' issues with the use of striping with DB2 10 for z/OS in failure scenarios, albeit in a tiny number of customers. The problem is that there is no encapsulation of a striped I/O; that is, when there are failures, some stripes make it to disk and some may not. DFSMS striping has always worked that way. So we in DB2 for z/OS Development are now actively discouraging customers from using striping when the benefits are speculative at best. For customers who are already striping active log data sets there is a performance risk in removing striping; on the other hand, the chance of hitting the above problem is very rare. But nevertheless there is risk. There is also a risk in using striping with table spaces with a large page size, namely torn pages when not all the stripes made it to disk.
As has been true in the past, hardware technology is forever changing and the value of DFSMS striping will probably continue to erode. Just because one particular workload may possibly benefit, does not mean that no workloads will suffer.
timmzzz 060000RTR7 Visits (5259)
Having a robust preventive maintenance process is a best practice in managing any IT environment, including z/OS. By avoiding known defects, which can have a major impact on the functioning of the system, preventive maintenance improves availability. Having a proactive preventive service strategy reduces the number of rediscovered defects and avoids unplanned outages. IBM recommends that preventive maintenance be installed at least two to four times a year. In addition, IBM recommends that HIPER, PE fix, Security/Integrity, and Pervasive PTFs be installed more frequently.
Read more about a solid preventive maintenance strategy here.
Additional information about the RSU and CST can be found at the CST website:
agburke 060001QPDN Visits (5131)
We have had several questions lately around z/OS 1.12 and the performance benefits seen by DB2.
A customer of mine first saw a performance degradation with z/OS 1.12 and DB2 9 with a DGTT that had many indexes defined on it. Basically any query that created and dropped many temporary objects could see a CPU increase.
PM17542 was created to avoid this overhead. Unfortunately it was marked P.E., or PTF in error and several z/OS and DB2 APARs were to follow.
Today we are back with a positive story as described in PM46045 which also summarizes the APARs that should be applied to see a performance benefit in DB2.
flodubois 270000K6H5 Visits (4709)
Fix Category HOLDDATA (FIXCAT)
What is a fix category? A fix category is an identifier used to group and associate PTFs to a particular category of software fixes. A fix category might be used to identify a group of fixes that are required to support a particular hardware device, or to provide a particular software function. Fix categories are supplied to you in the form of SMP/E FIXCAT HOLDDATA statements. Each FIXCAT HOLDDATA statement associates an APAR and its related fixing PTF to one or more fix categories.
For more information, please refer to:
Current list of FIXCAT HOLDs for DB2 for z/OS
SYSPLEXDS Fixes that enable the sysplex data sharing function or fixes required for data sharing
DB2COEXIST/K Fixes that enable DB2 releases to coexist when in data sharing mode
DB2PARALL/K Fixes for problems with the DB2 parallelism function
DB2STGLK/K Fixes for DB2 storage leak problems
DB2OVRLAY/K Fixes for DB2 storage overlay problems
DB2INCORR/K Fixes for DB2 SQL incorrect output problems
DB2MIGV9/K Fixes that allow the prior release of DB2 to migrate to or fall back from DB2 V9
DB2MIGV10/K Fixes that allow prior releases of DB2 to migrate to or fall back from DB2 V10
ISAOPT/K Fixes required for z/OS software products to support the IBM Smart Analytics Optimizer
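As a sketch of how these categories are used in SMP/E (the target zone name is illustrative, and the real category identifiers are longer than the abbreviations above, e.g. IBM.Coexistence.DB2.SYSPLEXDataSharing):

```
SET BOUNDARY(GLOBAL).
REPORT MISSINGFIX ZONES(DB2TGT)
       FIXCAT(IBM.Coexistence.DB2.SYSPLEXDataSharing).
```

The report lists the APARs in that category that are not yet resolved in the named target zone, along with the fixing PTFs.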
Michael_D. 1100004WAH Visits (4693)
End of Service Announcement DB2 9 for z/OS - June 27, 2014
On February 7, 2012, IBM announced the End of Service (EOS) for DB2 9 for z/OS. The effective EOS date is June 27, 2014.
For your convenience, here is a link to the announcement letter:
Michael_D. 1100004WAH Visits (4652)
PTF UK91435 (APAR PM79520)
is required for DB2 10 for z/OS customers in a Data Sharing environment
agburke 060001QPDN Visits (4361)
High performance DBATs were introduced in DB2 10. In order to use this feature you must have the JCC packages (e.g., SYSLN200) bound with RELEASE(DEALLOCATE), as well as the -MODIFY DDF PKGREL(BNDOPT) option in place.
This allows distributed requests to benefit from not going through deallocation after each commit. Use caution when employing this option, as you would not want every distributed application coming in as RELEASE(DEALLOCATE) and using up all of the available DBATs. You can control this more granularly if you bind the dynamic JCC packages into an alternate collection and then allow specific applications to use it by having them specify that collection in currentPackageSet.
APAR PI20352 was opened because increased DDF SRB time was seen when threads were deallocated prior to 200 uses. To alleviate this, the code was modified to allow a high performance DBAT to be pooled if it has not reached the 200-use mark. The POOLINAC timeout value can be used to limit the time the DBAT remains in the pool.
GZJ 1100006WMT Visits (4314)
It is a common belief that DB2 10 and 11 for z/OS can only use 1MB pageable large pages, other than for buffers, if the CEC has Flash Express installed. For example, the text for APAR PM85944 implies just that. However, this is not true. The only requirement is that the CEC be SCM-capable (SCM = Storage-Class Memory). In other words, it does not matter whether Flash Express is actually installed or not. So if a customer is running on a zEC12 CEC without Flash Express, DB2 could request, and be given, a 1MB pageable large page, residing in a 1MB large page frame. However, if that page needs to be paged out for some reason, at that point it will be broken down into 4KB page frames. To put it another way, pageable large pages are available on an SCM-capable CEC such as a zEC12, with the caveat that if Flash Express is not installed, then any of those pages that are ever paged out will be demoted to 4KB page frames. 1MB pageable large pages that are demoted to 4KB page frames as they are paged out will never be coalesced: that is, they will remain 4KB page frames for the remaining life of the IPL.
agburke 060001QPDN Visits (4251)
Workfiles have changed quite a bit from V8 -> V9. When we moved from DB2 V8 to V9 we combined the TEMP database (used by DGTTs) and the workfile database, and began favoring 32K table spaces if the records were over 100 bytes in length. Customers faced issues with runaway DGTTs eating up valuable workfile table spaces and impeding other work.
So IBM introduced a ZPARM to re-establish a delineation between the two uses.
We then came out with a best practices informational APAR based on what customers had seen.
With the advent of DB2 10, V8 customers migrating directly were introduced to this ZPARM after the fact, and were not always ready for the implications.
As part of the best practices, we suggest that when going to V9 or V10 you size the 4K workfile buffer pool (BP7?) and the 32K workfile buffer pool (BP32K7?) equally in CM mode. You will need to create more 32K workfile table spaces as well, since not just DGTTs will use these now. The amount of space used by 4K and 32K tables for workfiles, as well as the occasions when DB2 wanted a 32K table but was not able to get one, is exposed in the statistics report.
Once you get a handle on how much the 32K tables are used, increase them. In the field a 75/25 split is usually seen: 75% of the time 32K tables are favored, and hence 75% of the workfile space should be allocated to 32K table spaces.
In V10, DGTTs favor partition-by-growth (PBG) table spaces first, then table spaces with a secondary quantity greater than 0, and lastly those with a secondary quantity of 0. So as a soft separation you might want to add some PBGs to the workfile database if DGTTs have been an issue.
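A sketch of that soft separation, adding a PBG table space for DGTT use (the names, sizes, and buffer pool are illustrative; in a non-data-sharing subsystem the workfile database is typically DSNDB07):

```sql
-- PBG workfile table space that DGTTs will favor
CREATE TABLESPACE WRKDGT01 IN DSNDB07
  BUFFERPOOL BP32K7
  MAXPARTITIONS 10
  SEGSIZE 16;
```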
Michael_D. 1100004WAH Visits (4201)
Customers running with DB2 10 NFM have noticed that table spaces associated with the DB2 directory or catalog databases (such as SPT01 in DSNDB01) experience significant space growth resulting from BIND/REBIND operations, DDL, and utility activity (REORG). The only way customers could contain the issue was by frequently scheduling REORGs of the DB2 directory and catalog in DB2 10 NFM.
Updated list of APARs for excessive SPT01/DBD01 growth – Base and LOB tablespaces - including related Utility APARs in this area!
DB2 10 NFM Cat/Dir SPT01/DBD01 excessive growth
APAR PM66874 to resolve: LOB integrity abend during REORG of DBD01.
DB2 10 NFM Cat/Dir SPT01/DBD01 excessive growth related
APAR PM68842 to resolve: REORG abend. Broken aux index.
Michael_D. 1100004WAH Visits (4195)
PM42528: SUPPORT FOR DELETING A MEMBER FROM A DATA SHARING GROUP
To use this support:
- The APAR or PTF providing delete member support must be applied to all members. Since deleting a member requires all members to be stopped, there is no pre-conditioning APAR / PTF.
- The member being deleted must be quiesced with no outstanding units of work, active utilities or retained locks.
- There should be no objects in restricted states. Use the -DISPLAY DATABASE(*) RESTRICT to verify.
- All surviving members must be DB2 10 New Function Mode
- The member to be deleted must be quiesced at some point before the surviving members are stopped so that the quiesced state is saved in all the surviving members' BSDSs.
- Stop all members of the data sharing group.
- Make backup copies of all BSDSs.
- Run the change log inventory (DSNJU003) DELMBR control statement against all the group members' BSDSs to deactivate the member that is to be deleted.
- Restart the surviving members of the group.
- When the logs of the member to be deleted are no longer needed, proceed.
- Stop all members of the data sharing group.
- Make backup copies of the BSDSs.
- Run the change log inventory (DSNJU003) DELMBR command against all the group members' BSDSs to destroy the member that is to be deleted.
- Restart the surviving members of the group.
- After all surviving members have been restarted, the logs and BSDS of the deleted member are no longer needed.
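The two change log inventory runs described above can be sketched as follows (the member ID is illustrative; check the DSNJU003 documentation for the exact DELMBR statement format):

```
DELMBR DEACTIV,MEMBER=3     first run: deactivate member 3 in each BSDS
DELMBR DESTROY,MEMBER=3     second run: destroy member 3 once its logs
                            are no longer needed
```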
DELETE data sharing member related APARs and PTFs
APAR : PM31003, PM31004, PM31006, PM31009
PTF : UK67512, UK67958, UK69286, UK65750
Michael_D. 1100004WAH Visits (4169)
PM31614: PERFORMANCE OF PACKAGE ALLOCATION IMPROVEMENT
Some of the code was moved to a service task to reduce the CPU time required for packages that have very short-running SQL statements or that issue frequent commits.
flodubois 270000K6H5 Visits (3990)
Below is a set of optimiser-related DB2 system parameters (ZPARMS) and their V10 default values. If the setting of any of these system parameters in your environment does NOT match the V10 default, then please re-evaluate the setting before migrating to DB2 10 for z/OS. If you need special assistance from IBM, please open a problem record (PMR).
MACRO ZPARM DEFAULT V10
DSN6SPRM OPTIOWGT ENABLE
DSN6SPRM OPTIXIO ON
DSN6SPRM OPTXQB ON
DSN6SPRM STATCLUS ENHANCED
Michael_D. 1100004WAH Visits (3981)
Michael_D. 1100004WAH Visits (3968)
DB2 10 can reduce total DB2 CPU demand by 5-20% when you take advantage of all the enhancements. Many CPU reductions are built directly into DB2, requiring no application changes. Other enhancements are implemented through normal DB2 activities such as rebinding, restructuring database definitions, improving applications, and utility processing. The CPU demand reduction features have the potential to provide significant total cost of ownership savings, depending on the application mix and transaction types.
Improvements in optimization reduce costs by processing SQL automatically with more efficient data access paths. Improvements through a range-list index scan access method, list prefetch for IN-lists, more parallelism for select and index insert processing, better workfile usage, better record identifier (RID) pool overflow management, improved sequential detection, faster log I/O, access path certainty evaluation for static SQL, and improved distributed data facility (DDF) transaction flow all provide more efficiency without changes to applications. These enhancements can reduce total CPU enterprise costs because of improved efficiency in DB2 10 for z/OS.
DB2 10 includes numerous performance enhancements for large objects (LOBs) that save disk space for small LOBs and that provide dramatically better performance for LOB retrieval, insert, load, and import/export using DB2 utilities. DB2 10 can also more effectively REORG partitions that contain LOBs.
This IBM Redbooks® publication provides an overview of the performance impact of DB2 10 for z/OS, discussing the overall performance and possible impacts when moving from version to version. We include performance measurements that were made in the laboratory and provide some estimates.
Keep in mind that your results are likely to vary, as the conditions and work will differ.
In this book, we assume that you are somewhat familiar with DB2 10 for z/OS.
See DB2 10 for z/OS Technical Overview, SG24-7892-00, for an introduction to the new functions.
DB2 10 re-introduced the ability for remote connections to benefit from the dynamic JCC packages being bound with RELEASE(DEALLOCATE).
1) The package that the remote application will use must be bound with RELEASE(DEALLOCATE), i.e., SYSLN200 for cursor stability without HOLD and SYSLH200 for cursor stability WITH HOLD. We would not suggest rebinding all of the dynamic JCC packages that are used by all applications to use DEALLOCATE. You should bind a copy of the packages under another collection (other than NULLID), and use the currentPackageSet special register to point those privileged applications to those packages.
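A sketch of binding such a copy (the collection name HIPERF is illustrative):

```
BIND PACKAGE(HIPERF) COPY(NULLID.SYSLN200) -
     ACTION(REPLACE) RELEASE(DEALLOCATE)
```

Applications that should get the high performance DBAT behavior then set currentPackageSet to HIPERF (for example, via the JDBC currentPackageSet property), while everything else continues to use the NULLID packages bound RELEASE(COMMIT).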
2) To enable or disable DDF from treating the packages as DEALLOCATE you would use the -MODIFY DDF PKGREL(BNDOPT) command to turn on and -MODIFY DDF PKGREL(COMMIT) to turn off the high performance DBAT behavior. This allows you to force the packages to behave as RELEASE(COMMIT) when you need DDL, BINDs, or utilities to be able to break-in.
As a safety valve the high performance DBATs could only be reused 200 times before they were destroyed, in order to avoid thread footprint growth, and release certain locks.
None of this is new, but IBM released PI20352 (UI19900) mid-2014 to help scenarios where the connections were dropping before the thread was reused 200 times. When this occurred the DIST non-preemptible SRB cpu time could be driven up and the performance benefit of high performance DBATS would degrade. PI20352 allowed high performance DBATs to be pooled, and reused.
However, it was discovered that when these pooled DBATs were reused by a new request from a different application, some of the locks could persist from the previous thread. Because these package allocation and table space intent locks were hanging around, IBM has now introduced the -MODIFY DDF PKGREL(BNDPOOL) option with PI31597. This option is for customers who want the DBATs to be pooled in case the connection goes away; that is, you want the same application to come back and reuse that DBAT, because locks can persist on it.
Here is a description for the 3 options for PKGREL.
With PI31597 there is another option for the -MODIFY DDF PKGREL() command:
BNDPOOL (new) - The rules of the RELEASE bind option that was specified when the package was bound are applied to any package that is used for remote client processing. The high performance DBAT used by a terminating client connection will be pooled.
BNDOPT - The rules of the RELEASE bind option that was specified when the package was bound are applied to any package that is used for remote client processing. The high performance DBAT used by a terminating client connection will be deallocated. BNDOPT is the default value of the MODIFY DDF PKGREL command. (Note: if you have PI20352 applied WITHOUT PI31597, the BNDOPT behavior is like that of BNDPOOL, in that the DBAT will be pooled, possibly leading to the accumulation of locks.)
COMMIT - The rules of the RELEASE(COMMIT) bind option are applied to any package that is used for remote client processing. COMMIT is the default value when the CMTSTAT subsystem parameter is set to ACTIVE. If the MODIFY DDF PKGREL command has never been issued, then COMMIT is the default value. No high performance DBATs exist while the PKGREL option is set to COMMIT.
So how do I know which option to use?
The 2 key points here are whether or not the application does more than 200 commits before deallocating or disconnecting, AND whether or not you have segregated different applications to different collections for the DEALLOCATE packages.
To determine such candidates you can look at IFCID 365, as below and compare the 'INITIATED FROM REMOTE SITE' vs. the 'DEALLOCATED CONVERSATIONS'. If the ratio is 200:1 or greater then it is a great candidate for DEALLOCATE in general.
You can also look in the accounting short report and compare the #DDFS (occurrences) vs. the CONV (conversations). Here we see 244 occurrences and 0 conversations ended, so it would be a prime candidate.
Given you know the behavior of the application as shown above the settings for PKGREL() could be:
COMMIT - the default; use it if you are not utilizing high performance DBATs
BNDOPT - if your applications do more than 200 commits before deallocating or disconnecting, so the thread is fully reused
BNDPOOL - if the applications do not reuse the thread 200 times as shown above, but there is only one application per JCC package collection (not using the default NULLID collection name), then it matters less that the locks persist across threads, because they are accessing the same objects. However, you get the CPU savings of having the threads pooled, so the thread can actually be reused 200 times before it is destroyed.
timmzzz 060000RTR7 Visits (3711)
Customer X experienced a partition-by-growth (PBG) size increase with APPEND YES, with the data sparsely distributed across the PBG partitions. A subsequent ONLINE REORG redistributed the data to the first 3 partitions. However, after the SWITCH phase, concurrent SQL would still append data to the last partition, leaving many partitions empty in between.
agburke 060001QPDN Visits (3588)
Many customers these days are utilizing DASD mirroring solutions as well as HyperSwap technology to automate fail-over to an alternate site or to local DASD hardware in the event of a failure or disaster. z/OS APAR OA31707 was put out to aid in the event of a fail-over by ensuring that any pages the system might need are not paged out to AUX.
From OA31707: "During a Hyperswap, it is possible for the system to require page fault resolution via page devices that may be part of the scope of devices being recovered by the Hyperswap. If this occurs, it is possible that a page fault will not be able to be resolved leading to deadlock and Hyperswap failures."
The downside of this is a massive amount of page fixed storage which includes the following:
The purpose of this entry is to ensure customers are aware of the effect on real storage when this function is enabled, and to plan for it in advance. A system that is already running lean on REAL storage may see increased demand paging once this function is enabled, which can lead to DB2 entering DISCARD MODE (contraction) due to the REAL storage pressure.
If you have page fixed your buffer pools then the vast majority of the DBM1 PRIVATE address space will never be paged to AUX either, so you could end up with a severe shortage of REAL storage on the LPAR.
You can issue the D XCF,COUPLE command to determine if the function is enabled.
Further important information about the protection provided by this APAR and the service it introduces can be found as follows:
Two other APARs relate to the REAL storage growth seen in DB2 due to z/OS not reclaiming frames when CRITICAL PAGING was enabled:
agburke 060001QPDN Visits (3546)
CPU increase is always a customer concern, especially when it occurs across a release boundary. Many of our customers on DB2 10 have seen a CPU reduction in the DB2 address spaces, and much of this can be attributed to the zIIP eligibility of asynchronous reads/writes. However, there have been some circumstances where the new monitoring or storage allocation behavior has affected CPU negatively.
Here are some APARs to address CPU degradation at the address space level.
PM56363 (HIPER): DIST TCB time increase due to remote SIGNON requests
Michael_D. 1100004WAH Visits (3514)
Higher CF CPU utilization can occur in a DB2 10 for z/OS data sharing environment during Delete_Name processing, and Delete_Name requests/sec can be significantly higher in DB2 10 for z/OS than in DB2 9 for z/OS. In V10, when a pageset/partition becomes non-GBP-dependent, the Delete_Name process deletes only data entries, to avoid cross-invalidation processing at that time and to allow cleanup of the directory entries later when other pages are registered. The V9 Delete_Name process deleted both data and directory entries. The Delete_Name rate is higher in DB2 10 because CFCC processing is not as efficient when only data entries are deleted.
PM51467: CF DELETE_NAME PERFORMANCE IN DB2 10 FOR Z/OS
Michael_D. 1100004WAH Visits (3469)
System z Integrated Information Processor (zIIP)
The IBM zIIP is available on all IBM zEnterprise, IBM System z10 and IBM System z9 servers. The zIIP can support many technologies, and can help implement, integrate, and optimize new workloads on System z.
DB2 z/OS zIIP related performance:
PM12256: DRDA PERFORMANCE IMPROVEMENT USING TCP/IP
OA35146: NEW FUNCTION - ALLOW NON-CLIENT PREEMPTABLE SRBS TO JOIN/LEAVE AN ENCLAVE AFTER IT HAS BEGUN – z/OS APAR for PM12256
DB2 z/OS informational APAR II14219 (DB2 z/OS ZIIP EXPLOITATION "SUPPORT USE" INFORMATION)
agburke 060001QPDN Visits (3452)
The informational APAR (II14660) has a number of important APARs to keep track of. It also lists the FIXCAT categories that you can use within SMP/E to filter the APARs you are most interested in during your preparation for migration.
Two examples are : IBM.
flodubois 270000K6H5 Visits (3431)
Migrating an SQL stored procedure from external to native is not as simple as a DROP/CREATE. You need to understand the release incompatibilities related to SQL stored procedures, examine your external SQL procedure source code, and make any necessary adjustments. This APAR can help you do that. It provides sample job DSNTEJ67 which initiates the process of converting source for an external SQL procedure into source for a native SQL procedure. REXX services, native SQL PL and the HOST(SQLPL) checkout precompiler are combined to extract, inspect, analyze and convert external SQL procedure source code. The appropriate set of native SQL procedure options are applied and a listing of the modified SQL procedure source code is produced.
KevinHarrison 1000005N52 Visits (3392)
Traditionally, customers submitted requests for requirements and enhancements through their IBM contacts on their respective client teams. Now customers may manage and track their requests using a new process.
DB2 for z/OS Request for Enhancements (RFE) Community for Customer Requirements - Now Live!
The DB2 for z/OS Request for Enhancements (RFE) Community enables customers to directly submit, manage, and track their requirements online. Customers can additionally access requirements that others have submitted, to vote on, comment, and watch in a social media paradigm. It provides customers with greater accessibility to the requirements that are of interest to them. IBM's Rational, Tivoli, and WebSphere have already adopted RFE with positive customer feedback. All you need to get started is an IBM developerWorks IBM ID. Please use the DB2 for z/OS RFE to submit customer requirements going forward.
agburke 060001QPDN Visits (3276)
First is the Red Alert for PM51093 that was recently posted on the Red Alert website: http
Other potential data loss HIPERs include:
Correction: APAR remains open. Corrective relief, AM55070, is available from DB2 Technical Support. The estimated PTF availability date is April 6.
Notes: This is a problem specific to freespace reuse and later rollback. There should be no latent data corruption; that is, the only way a customer would have data corruption is if they actually encountered an abend during rollback, in which case the page would be marked broken and they should report the problem to IBM Support.
There is no need for customers to proactively check for problems. If such is desired, then any utility such as DSN1COPY, REORG, COPY or any application access would report a page marked broken.
Correction: PTF UK76352 is available (not yet RSU)
Notes: There is no actual data loss, however, the potential exists for corrupted data pages and orphan pointer records. Customers wishing to proactively check for this condition can use DSN1COPY with the CHECK option, which runs offline and is non-disruptive.
However, the IBM COPY utility would automatically detect corrupted data pages, so a normal backup cycle should be sufficient to validate data. If DB2 detects a problem, then the page may be marked broken. This can be reset by REPAIR using REPAIR LOCATE db.ts PAGE(nnnn) RESET. The IBM REORG utility will correct the page corruption.
KevinHarrison 1000005N52 Visits (3267)
There are some recent REORG issues that you should be aware of, but they do not affect directory or catalog REORGs.
PM69637 - Lost data on REORG if table has OBID=1. This defect applies to V9 also.
PM62449 - Lost data on REORG if table has OBID=1 in a segmented tablespace. V10 only.
PM73000 - Incorrect rows discarded with REORG DISCARD based on boolean logic conditions. V10 only.
PM68133 - REORG of an RRF PBG with growth of a new partition during the REORG and with zparm SPRMRRF set to DISABLE.
PM69073 - Same scenario as PM68133 but in addition requires that the REORG be materializing a pending alter.
PM61976 - REORG loads data into wrong partition if using SORTDATA NO (which is not the default). Caused by PE APAR PM44475.
PM66874 - Not specifically a REORG issue, but could occur during extend processing for REORG of LOB data.
PM63324 - REORG of multiple partitions of a compressed PBG tablespace can result in rows being compressed incorrectly.
agburke 060001QPDN Visits (3254)
Customers requested a new object state that would allow readers access to data while eliminating the need for, and the effect of, a -STOP or -START command to place the object in read-only status. This Persistent Read Only (PRO) status causes UPDATE attempts to be rejected.
Recommended procedure for setting the PRO restricted status on a table space partition:
1. -STOP DB() SP() PART() - wait for the command to complete
successfully, verify that the object is in STOP status
(not STOPP) with the -DIS DB command.
2. -START DB() SP() PART() ACCESS(UT)
3. Create two full image copies of the table space partition
with COPY SHRLEVEL REFERENCE
4. Use REPAIR utility to turn on PRO status
5. -START DB() SP() PART() ACCESS(RW)
Along with the preceding APAR, the lab has modified the DSNACCOX stored procedure to handle the new PRO status and avoid unnecessary utility suggestions.
flodubois 270000K6H5 Visits (3221)
Q. If PIECESIZE is specified at its maximum of 64 GB, and LARGE or DSSIZE x is specified on the table space, is the maximum page set size going to be 64 GB * 4096 parts = 256 TB?
A. The number of index data sets is influenced by the number of data partitions. Initially using PIECESIZE of 64GB you could not index 128TB of data using DSSIZE 128 GB or 256 GB due to the reduced number of index data sets. APAR PM42175 was created to allow the specification of 128 GB and 256 GB PIECESIZE.
Specification of 64GB is only allowed when the number of data partitions (derived from DSSIZE) is less than the following maximum data parts.
Chart for 128GB PIECESIZE
Chart for 256GB PIECESIZE
Q. Now that DB2 can support a DSSIZE of 256GB for tablespace what is the new max size for the NPI?
Q. Is the only way to influence it by increasing DSSIZE for the table space?
Q. Is there a way to calculate how much space is actually used, and how much is left to grow, based on the catalog or RTS?
timmzzz 060000RTR7 Visits (3191)
As of DB2 10 NFM, the Real-Time Statistics logic has been extended to cope with mass delete operations more accurately. Before DB2 10 NFM, the MASSDELETE counters were incremented after a mass delete operation
A multi-table table space scenario is an exception. Because the Real-Time Statistics information is kept at the page set level, DB2 does not know how many rows are associated with the table on which the mass delete operated. In this case, the SYSI
However, an APAR will be opened to be consistent with SYSI
GZJ 1100006WMT Visits (3189)
Because of changes in the way DB2 10 calls Media Manager, a defect in media manager with the use of striped data sets was exposed: see APAR
*If you use striping and are at DB2 10, make sure this HIPER APAR is applied.
agburke 060001QPDN Visits (3161)
As many of you know IBM has changed the products that come in the DB2 Accessories Suite.
At one point we had the DB2 Management Clients Package which contained Visual Explain and Control Center, and anything that would be installed on your desktop.
Starting in DB2 9 for z/OS we changed it to the DB2 Accessories Suite, which included things like Optimization Service Center, OmniFind Text Search, and Spatial Support. Within that same release we deprecated OSC, and with DB2 10 for z/OS the Accessories Suite included only the spatial support and International Components for Unicode (ICU).
The link below breaks down which products appear in which release of the Accessories Suite, and relates it to the appropriate release of DB2.
URL for II14
Program Directory: V02.02.00
Program Number: 5697-Q02
FMIDs H2AF210, J2AG210 for use with z/OS
IBM DB2 Accessories Suite for z/OS, V2.2 (5697-Q02) consists
of three features, each bundling components designed to
enhance your use of DB2 for z/OS. Component changes in this release include:
o IBM SPSS Modeler Server Scoring Adapter for DB2 on z/OS
o IBM Data Studio V3.1.1
o IBM Text Search for DB2 for z/OS, which replaces IBM
Omnifind Text Search Server for DB2 for z/OS
o IBM Installable Information Management Software for z/OS
Solutions Information Center
KevinHarrison 1000005N52 Visits (3134)
LOB integrity abend during REORG of DBD01. Fix - PM66874.
Issue with catalog/directory space growth. Fix - PM64226, which resolves the growth for LOBs only.
For the base SPT01 growth, the fix is in separate APAR PM74659, which is not yet available - this will take some time.
Unformatted pages in SYSDBDXA directory LOB resulting in abends. No broken LOB data. REORG of tablespace to correct. APAR PM64226 to resolve.
Directory LOB extend failure. Fixed with APAR PM66874
REORG abend. Broken aux index. Fixed with APAR PM68842
GZJ 1100006WMT Visits (3130)
Open APAR PM64226 describes an issue where catalog and directory LOB objects can grow considerably when a long READ claim is held on the object. The APAR refers specifically to DSNDB01.SYSDBDXA but goes on to say the problem can occur for other catalog and directory objects. You should track this APAR if you are already at DB2 10 NFM or are planning to move to NFM in the near to medium future.
The problem applies to both data sharing and non-data sharing systems, so all customers need to be vigilant.
PM64226 was open at the time of writing (27 June 2012)
KevinHarrison 1000005N52 Visits (3121)
Overview: PM54383 solves this condition in V9 and higher, but only targets the condition with respect to TCP/IP-related connections.
If an application executes SQL that refers to a remote site but, for whatever reason, the operator or user wants to cancel the associated DB2 thread or application, the cancel of the DB2 thread will not be fully effective until the remote system actually replies with its SQL answer (and, depending on the SQL, that could be a long time). This is a historical deficiency in DB2 cancel processing. It is documented in the CANCEL THREAD usage notes in the DB2 Command Reference manual, which also provides a procedure for using network stack (VTAM or TCP/IP) commands to kill the connection with the remote site. The procedure works but can be cumbersome.
GZJ 1100006WMT Visits (3107)
The draft of the redbook Optimizing DB2 Queries with IBM DB2 Analytics Accelerator for z/OS , SG248005, is available.
The 'blurb' says:
The IBM® DB2® Analytics Accelerator Version 2.1 for z/OS (also called DB2 Analytics Accelerator or Query Accelerator in this book and in DB2 for z/OS documentation) is a marriage of the System z® Quality of Service and Netezza® technology to accelerate complex queries in a DB2 for z/OS® highly secure and available environment. Superior performance and scalability with rapid appliance deployment provide an ideal solution for complex analysis.
This IBM Redbooks® publication provides a broad understanding of the IBM DB2 Analytics Accelerator architecture and its exploitation by documenting the steps for the installation of this solution in an existing DB2 10 for z/OS environment. We define a business analytics scenario, evaluate the potential benefits of the DB2 Analytics Accelerator appliance, describe the installation and integration steps with the DB2 environment, evaluate performance, and show the advantages to existing business intelligence processes.
If you want to review the draft, and provide feedback on this exciting new feature, now is your chance!
PM81189: MSGDSNT217I ON REBIND PACKAGE SWITCH AND NO ROW IN SYSIBM.SYSPACKCOPY IN V10 NFM FOR PACKAGE THAT USES CGTT
timmzzz 060000RTR7 Visits (3105)
Customer X experienced problems after migrating to DB2 10 NFM. Selected packages were rebound right after the migration. PLANMGMT(EXTENDED), the default in V10 NFM, should allow a return to the prior access path if needed.
However, for some packages the old access path could not be restored, even under PLANMGMT(EXTENDED).
The reason is Created Global Temporary Tables (CGTTs). They have a dependency on SYSPKAGE. During ENFM, when SYSPKAGE is dropped, all packages containing CGTTs
When running a REBIND in DB2 10 NFM, REBIND will return an error message and no previous copy is stored.
Please refer to APAR PM81189 for further details on this problem.
Michael_D. 1100004WAH Visits (3054)
APAR PM81799: ABEND04E RC00C90101 IN DSNIREDO ERQUAL5005 DURING A RECOVER UTILITY.
All DB2 10 for z/OS users who use the COPY or RECOVER online utility with FLASHCOPY are affected.
DB2 has been modified to ensure compensation log records are not written while COPY FLASHCOPY is running.
Additionally, this APAR ensures that error conditions are handled correctly.
agburke 060001QPDN Visits (3040)
There is a new APAR, still open, that deals with a release-incompatible change in V10.
PM66095: help with handling release incompatible change for VARCHAR(DECIMAL), CAST(DECIMAL AS CHAR), CAST(DECIMAL AS VARC
1 - remove leading zero
2 - no trailing decimal point
V10 result is '1'
V9 result is '1.'
IFCID 366 can be used to check for the occurrence of this SQL in your subsystem.
Here is a chart that will be updated in the Information Center, and later the installation guide.
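The two formatting rules above can be sketched in a few lines of Python. This is an illustrative approximation of the V9 versus V10 behavior as described by the APAR, not DB2 code; the helper names are made up.

```python
from decimal import Decimal

def varchar_of_decimal_v9(d: Decimal) -> str:
    """V9-style formatting: keep a leading zero for fractional-only
    values and always emit the decimal point, even with no fraction."""
    sign = "-" if d < 0 else ""
    digits = str(abs(d))
    int_part, _, frac_part = digits.partition(".")
    return f"{sign}{int_part}.{frac_part}"

def varchar_of_decimal_v10(d: Decimal) -> str:
    """V10-style formatting: strip the leading zero of a fractional-only
    value and drop the trailing decimal point when there is no fraction."""
    s = varchar_of_decimal_v9(d)
    if s.endswith("."):               # change 2: no trailing decimal point
        s = s[:-1]
    for prefix in ("0.", "-0."):      # change 1: remove leading zero
        if s.startswith(prefix):
            s = s.replace("0.", ".", 1)
            break
    return s
```

For the DECIMAL value 1 this reproduces the difference quoted above: the V9-style function yields '1.' while the V10-style function yields '1'.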
Michael_D. 1100004WAH Visits (3030)
In preparation for DB2 10 migration, you need to answer the question of whether your DB2 tooling will support the full functionality of a given DB2 version, will merely tolerate the new version, or, worse, will not support it at all. This matters most when new functionality is actively used and when the migration of a given DB2 environment has completed the conversion to new-function mode (DB2 10 NFM). Nevertheless, it is also important to watch and maintain the DB2 10 maintenance level in order to answer the supported-versus-tolerated question.
This table provides information regarding DB2® Tools support for DB2® 10 for z/OS.
For more information on which new functions of DB2 10 are utilized by any specific product, or to find out about DB2 10 compatibility for products or releases not included in this matrix, please contact your IBM Customer Service Representative.
agburke 060001QPDN Visits (3005)
Many of you, including our own L2 performance team, have spent years creating spreadsheets to combine the statistics and accounting data to reveal things like average CPU seconds or elapsed time per occurrence or per COMMIT, based on the CONNTYPE. This can be used to build a profile of the typical transactions and then look for outliers based on top-10 reports. Now there is a new IFCID that can aggregate such information, and it can be started by using the statistics class 9 trace.
· PM62797 – IFCID 369 addition to aggregate stats and accounting information in Stats class 9
· PM72949 for OMPE V5.1.1
You will want to check with your vendors to determine when/if your other accounting and stats reports can take advantage of this new functionality.
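As a rough illustration of the kind of aggregation IFCID 369 now provides, here is a Python sketch over hypothetical pre-parsed accounting records. The field names and values are illustrative only, not actual SMF record layouts.

```python
from collections import defaultdict

# Hypothetical pre-parsed accounting records (illustrative fields/values).
records = [
    {"conntype": "CICS", "cpu_sec": 0.012, "elapsed_sec": 0.100, "commits": 1},
    {"conntype": "CICS", "cpu_sec": 0.018, "elapsed_sec": 0.140, "commits": 2},
    {"conntype": "DRDA", "cpu_sec": 0.050, "elapsed_sec": 0.400, "commits": 1},
]

def profile_by_conntype(recs):
    """Aggregate CPU, elapsed time, and commits by connection type,
    then derive per-commit averages - the transaction profile that
    used to be built by hand in spreadsheets."""
    totals = defaultdict(lambda: {"cpu": 0.0, "elapsed": 0.0, "commits": 0})
    for r in recs:
        t = totals[r["conntype"]]
        t["cpu"] += r["cpu_sec"]
        t["elapsed"] += r["elapsed_sec"]
        t["commits"] += r["commits"]
    return {
        ct: {"avg_cpu_per_commit": t["cpu"] / t["commits"],
             "avg_elapsed_per_commit": t["elapsed"] / t["commits"]}
        for ct, t in totals.items()
    }
```

With a profile like this in hand, outliers show up as transactions whose per-commit cost is far from the average for their connection type.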
agburke 060001QPDN Visits (2998)
As you may remember we published the workfile recommendations in informational APAR II14587.
This had to do with the SECQTY attribute and whether or not you set the WFDB
In DB2 10 we allow partition by growth table spaces in the workfile database, and these should be the 'most preferred' when it comes to DGTTs.
Here are 2 APARs that have to do with this functionality in DB2 10.
PM65767: DB2 10 NFM work file separation algo
agburke 060001QPDN Visits (2987)
A DB2 9 for z/OS customer in NFM ran into an issue where DB2 began to run short on 31-bit virtual storage and DSNVMON messages appeared in the log:
DSNV508I -DB2 DSNVMON - DB2 DBM1 BELOW-THE-BAR 351
At the same time the CICS regions processing slowed down significantly, and these threads remained in DB2 longer than normal.
The situation manifested itself in the following way: while Strobe was measuring DB2 (V9 NFM) with CICS, Strobe 'hung' and failed with:
STR3111E STRBFIFA.PrIf2000 Fif* Attach failed
STR3247E STRBFIFR: -STA TRA(PERF) command failed(timeout), shutdown in
progress. The Strobe log contains abend U0522 at 95DBF4BE
Strobe support Recommendations:
1) Set Strobe Parm to DB2IFIFLAG=0010 to enable Auto filtering of DB2
2) Apply PTF V60621A from Strobe maintenance file (42007). This PTF
addresses abend U0522 ABEND @ STRBGBES - STA TRACE(P) T
This PTF has also resolved similar CICS errors at some of our other accounts as well.
Both cases that were reported to us were resolved by it. They reported that all their CICS regions 'went down' during a Strobe measurement because DB2 was not responding, but recovered after Strobe ended. The U0522 abend occurred as well. One of the cases also reported CICS storage shortages during the measurement. PTF V60621A was the resolution in each case.
After applying PTF V60621A and updating DB2IFIFLAG to 0010, Strobe must be recycled.
agburke 060001QPDN Visits (2978)
With the increased attention on auditing capabilities around the DB2 engine, this new feature adds the capability to do simple auditing using the system-time capability of temporal tables, together with some generated expression columns.
APAR text: " The PTFs for PM99683 (the preconditioning APAR), PI15298 (the enabling APAR), and PI15666 (the LOAD utility feature APAR) deliver integrated auditing support using non-deterministic generated expression columns to allow for automatic tracking of some audit information including: a. who modified the data in the table b. what SQL operation modified the data in the table."
As a rudimentary example, if you are looking at a table of account balances and on Monday an account is inserted with a balance of $10,000, you would see it enter the base table.
BASE_TABLE on Monday
AUDIT_TABLE on Monday (empty)
Now someone else comes in and updates the balance raising it to $20,000 on Tuesday.
BASE_TABLE on Tuesday
AUDIT_TABLE on Tuesday
I have left off the mandatory beginning-timestamp, ending-timestamp, and trans_id columns for the system time to simplify the example.
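The Monday/Tuesday flow above can be mimicked with a small in-memory sketch in Python. The table and column names (base, history, modified_by, operation) are illustrative, not the actual DB2 generated-expression column definitions delivered by the APARs.

```python
from datetime import datetime, timezone

base = {}      # current row versions, keyed by account
history = []   # superseded row versions (the "audit table")

def upsert(acct, balance, user):
    """Insert or update an account; on update, move the old row version
    to the history table, as system-time versioning would, recording
    who changed the row and which SQL operation did it."""
    now = datetime.now(timezone.utc)
    old = base.get(acct)
    if old is not None:
        history.append({**old, "sys_end": now})
    base[acct] = {
        "acct": acct,
        "balance": balance,
        "modified_by": user,                         # who modified the data
        "operation": "UPDATE" if old else "INSERT",  # what SQL operation
        "sys_start": now,
        "sys_end": None,
    }

# Monday: insert -> row appears in the base table, audit table is empty
upsert("ACCT1", 10_000, "alice")
# Tuesday: update -> the old $10,000 version lands in the audit table
upsert("ACCT1", 20_000, "bob")
```

After the Tuesday update, the base table holds the $20,000 row tagged as an UPDATE by the second user, and the audit table holds the superseded $10,000 row, mirroring the BASE_TABLE/AUDIT_TABLE progression described above.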
KevinHarrison 1000005N52 Visits (2948)