HELP WITH HANDLING THE RELEASE INCOMPATIBLE CHANGE FOR VARCHAR(DECIMAL), CAST(DECIMAL AS CHAR), CAST(DECIMAL AS VARCHAR)
Michael_D. 1100004WAH Visits (29376)
In Version 10, the formatting of decimal data has changed for the VARCHAR function and the CAST specification with decimal input and VARCHAR or CHAR output. There are two changes:
- Remove leading zero
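The effect of the leading-zero change can be sketched in Python. This is an illustrative model only, not DB2's actual conversion routine; the function name and the exact padding behavior shown for the pre-V10 case are assumptions for demonstration.

```python
from decimal import Decimal

def varchar_of_decimal(value: Decimal, precision: int, scale: int, v10: bool) -> str:
    """Illustrative sketch (not DB2's actual routine): before V10 the string
    form of a decimal carried leading zeros up to the declared precision;
    V10 suppresses them."""
    sign = "-" if value < 0 else ""
    digits = f"{abs(value):.{scale}f}"  # e.g. '1.10' for DECIMAL(5,2) value 1.1
    if not v10:
        int_part, _, frac_part = digits.partition(".")
        int_part = int_part.zfill(precision - scale)  # pad with leading zeros
        digits = f"{int_part}.{frac_part}" if scale else int_part
    return sign + digits

# DECIMAL(5,2) value 1.1:
print(varchar_of_decimal(Decimal("1.1"), 5, 2, v10=False))  # → 001.10
print(varchar_of_decimal(Decimal("1.1"), 5, 2, v10=True))   # → 1.10
```

Applications that parse or compare the character form of decimal values may need adjustment after migrating to Version 10.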
Michael_D. 1100004WAH Visits (16174)
PM56845: PROVIDE OPTION FOR OPTIMIZE FOR 1 ROW TO ALLOW SOME SORT ACCESS PATH
In all versions of DB2, the OPTIMIZE FOR 1 ROW clause requests that DB2 choose an access path that avoids a sort. In DB2 versions prior to 10, it is still possible to obtain an access path with a sort, even though that path is strongly discouraged. In DB2 10, DB2 will not consider access paths with sorts and will instead choose the lowest-cost access path that does not require a sort.
This APAR provides an option to return to the previous version OPTIMIZE FOR 1 ROW behavior. As such, it does not eliminate the risk of an inefficient access path being chosen with OPTIMIZE FOR 1 ROW when the efficient access requires a sort. However, it does limit that exposure to what already existed in DB2 prior to DB2 10.
For queries that need sorts, the recommended solution is to avoid coding the OPTIMIZE FOR 1 ROW clause. Without the OPTIMIZE FOR 1 ROW clause, DB2 will choose access paths based on cost and will not make an effort to avoid sorts.
Local workaround:
- For queries that need sorts for efficient access, the solution is to avoid coding the OPTIMIZE FOR 1 ROW clause.
- Change the application to code OPTIMIZE FOR 2 ROWS.
- Wait for APAR PM56845, which is now open, to provide an option for OPTIMIZE FOR 1 ROW to allow sort access paths.
Here is the story:
Now, what follows is NOT meant to sound like a sales pitch, but the correct technical solution is to add more zIIPs. It would be prudent to have the zIIPs run in the 30-50% CPU busy range on average (peaks would obviously be higher). zIIPs should be thought of as assist processors, and they are not intended to be run as hard as GCPs. This recommendation is not specific to DB2 10, but with DB2 10 it has become even more critical. This is the option that Customer X decided to take: they have now refreshed their processor technology and increased the ratio of zIIP engines to GCP engines. Consequently, the original elapsed time performance problems have gone away.
Michael_D. 1100004WAH Visits (8581)
timmzzz 060000RTR7 Visits (7842)
When running more than one DB2 for z/OS subsystem on a single LPAR, there is a danger that a "runaway" DB2 subsystem (e.g., one with a storage leak) takes out the entire LPAR. Because this could cause a domino effect of outages triggered by that single DB2 subsystem, the REALSTORAGE_MAX subsystem parameter was introduced. This parameter was the hidden ZPARM SPRMRSMX in DB2 V9 and, with PM24723/PK18354, becomes an opaque parameter. With DB2 10 it is named REALSTORAGE_MAX.
As mentioned above, it is recommended to set REALSTORAGE_MAX if more than one DB2 subsystem resides on a single LPAR. Current best practice is to set this parameter to 1.5 to 2 times the normal DB2 subsystem real storage usage. If REALSTORAGE_MAX (DB2 10), or SPRMRSMX in DB2 9, is set too low, the DB2 subsystem might get terminated before a real storage problem exists. If you set it too high, the LPAR might be lost before the parameter is ever considered.
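The 1.5x-to-2x sizing rule above can be expressed as a small helper. The function name and the rounding choice are illustrative assumptions, not an IBM-supplied formula:

```python
import math

def realstorage_max_gb(normal_usage_gb: float, factor: float = 2.0) -> int:
    """Best-practice sketch: size REALSTORAGE_MAX (in GB) at 1.5x to 2x the
    subsystem's normal real storage usage. Too low risks terminating DB2
    before a real problem exists; too high risks losing the LPAR first."""
    if not 1.5 <= factor <= 2.0:
        raise ValueError("recommended factor is between 1.5 and 2.0")
    return math.ceil(normal_usage_gb * factor)

# A subsystem that normally uses 40 GB of real storage:
print(realstorage_max_gb(40, 1.5))  # → 60
print(realstorage_max_gb(40))       # → 80
```

The point of the range check is simply to encode the best-practice window stated above; actual settings should be validated against the subsystem's observed storage profile.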
In addition, new DB2 10 messages accompany these actions.
Michael_D. 1100004WAH Visits (7462)
PM24723: IFCID 225 REAL STORAGE STATISTICS ENHANCEMENTS IFCID225
DB2 APAR PM24723 is very important: it supports real storage monitoring via a new extension to IFCID 225.
See z/OS APAR OA37821 and the corresponding DB2 APAR PM49816 for this issue.
OA37821: NEW FUNCTION IARV64 REQUEST=COUNTPAGES SUPPORT FOR UNSERIALIZED PROCESSING.
Useful commands for monitoring the use of 1MB real storage page frames on z10 and z196:
DISPLAY BUFFERPOOL(BP1) SERVICE=4
This DB2 display provides output via DSNB999I messages showing how many 1MB page frames are being used.
The z/OS command DISPLAY VIRTSTOR,LFAREA shows the total LFAREA and its allocation, split across 4KB and 1MB frames, via the IAR019I message.
DFSMS Striping was first introduced in 1993 and it was a success. Striping has served a dual purpose. The first purpose was to avoid hot spots on individual disks, and the second purpose was to increase sequential throughput for individual data streams. Since 1993, alternative techniques have been developed to avoid or dissipate a "hot spot". Some examples:
Consequently, DFSMS striping is perhaps not needed or is not the best technique for avoiding hot spots.
IBM DB2 for z/OS Development used to recommend striping because parallelism helped over slow channels, but channel speed is now so high that the disadvantage of doing more I/Os tends to outweigh any advantage of parallelism.
Just because one particular workload might benefit from log striping, does not mean that no workloads will suffer.
DB2 uses various data transfer sizes. In all current versions of DB2, dynamic prefetch reads at least 128K, but DB2 utilities read or write at least 256K. Depending on the buffer pool size, sequential prefetch may read up to 256K and utilities may read and write up to 512K. However, if a DB2 query, insert stream, or utility is CPU bound, then striping will not make it faster. This is a common situation, except where creating or restoring image copies is concerned.
Striping also will not help if any of the components in the data paths are saturated. These paths consist of channels, host adapters, and the bus in each control unit. Take for example the DB2 RECOVER utility. When you want to recover an individual table space, the paths are not saturated and striping will reduce the time to restore the table space. However, if you want to recover the entire system, there will be many parallel RECOVER jobs which will use parallelism, and the paths may get saturated, in which case striping may well not reduce the time to restore the system.
A category of I/O that can possibly benefit from striping is list prefetch. Log apply makes extensive use of list prefetch. If a table space is striped across a single RAID rank, list prefetch may possibly improve modestly if the devices consist of HDDs, and may show more improvement if the devices consist of SSDs. Heavy parallelism tends to reduce the striping benefits with HDDs, but striping continues to do well with SSDs under heavy amounts of parallelism.
To summarize the value of striping for table spaces, the most benefit may occur with list prefetch when using SSDs, and the applications that benefit the most will likely be long running queries and utilities. Striping may help with RECOVER: restore from image copies, and to then apply log records. However, everyday workloads will not necessarily get any benefit from striping.
During normal OLTP operations, DB2 active logs are written, and are read only to archive the log records and for software replication. Most log writes are synchronous I/Os, and each I/O consists of a small number of 4K pages. Faster channels have increased log bandwidth. However, most OLTP workloads never push the log throughput beyond 10 MB/sec. I/O response times for online transactions gradually degrade when log buffers queue up waiting for the previous I/O to complete.
As technology has evolved, many customers have used remote DASD replication to protect themselves against disasters. Remote replication is often the cause of I/O performance problems. Since all log writes caused by OLTP are synchronous, log I/O is very susceptible to problems caused by remote DASD replication. Each I/O introduces a risk of another delay, and since striping causes more I/Os, striping increases the risk of a delay. Remote replication is not the only thing that causes stress on a control unit, but it is one of the biggest causes.
Striping the active log may help the performance of active log reads as of DB2 9, since the I/O quantity for active log reads was increased from 12 pages to 120 pages. However, given 4 Gbps and faster links, the risks of striping the active logs will likely outweigh the benefits - see below.
DFSMS striping of the DB2 active log is no longer a tuning option that DB2 for z/OS Development is actively testing and recommending. There have been "hole in the recovery log stream" issues with the use of striping with DB2 10 for z/OS in failure scenarios, albeit in a tiny number of customers. The problem is that there is no encapsulation of a striped I/O, i.e., some stripes make it to disk and some may not when there are failures. DFSMS striping has always worked that way. So we in DB2 for z/OS Development are now actively discouraging customers from using striping when the benefits are speculative at best. For customers who are already striping active log datasets there is a performance risk in removing striping, but on the other hand the chance of hitting the above problem is very rare. Nevertheless, there is risk. There is also risk in using striping with table spaces with a large page size, i.e., a torn page when not all the stripes made it to disk.
As has been true in the past, hardware technology is forever changing and the value of DFSMS striping will probably continue to erode. Just because one particular workload may possibly benefit, does not mean that no workloads will suffer.
timmzzz 060000RTR7 Visits (6881)
Having a robust preventive maintenance process is a best practice in managing any IT environment, including z/OS. By avoiding known defects, which can have a major impact on the functioning of the system, preventive maintenance improves availability. Having a proactive preventive service strategy reduces the number of rediscovered defects and avoids unplanned outages. IBM recommends that preventive maintenance be installed at least two to four times a year. In addition, IBM recommends that HIPER, PE Fix, Security/Integrity and Pervasive PTFs be installed more frequently.
Read more about a solid preventive maintenance strategy here.
Additional information about the RSU and CST can be found at the CST website at:
agburke 060001QPDN Visits (6751)
We have had several questions lately around z/OS 1.12 and the performance benefits seen by DB2.
A customer of mine first saw a performance degradation with z/OS 1.12 and DB2 9 with a DGTT that had many indexes defined on it. Basically any query that created and dropped many temporary objects could see a CPU increase.
PM17542 was created to avoid this overhead. Unfortunately it was marked P.E. (PTF in Error), and several z/OS and DB2 APARs were to follow.
Today we are back with a positive story as described in PM46045 which also summarizes the APARs that should be applied to see a performance benefit in DB2.
agburke 060001QPDN Visits (6477)
High Performance DBATs were introduced in DB2 10. In order to utilize this feature you must have the JCC packages (i.e., SYSLN200) bound with RELEASE(DEALLOCATE) as well as the -MODIFY DDF PKGREL(BNDOPT) option in place.
This allows distributed requests to benefit from the performance aspects of not going through deallocation after each commit. Caution should be used when employing this option, as you would not want every distributed application coming in as RELEASE(DEALLOCATE) and using up all of the available DBATs. You can control this more granularly if you bind the dynamic JCC packages into an alternate collection and then allow specific applications to use it by having them specify this collection in currentPackageSet.
APAR PI20352 was opened because there were times of increased DDF SRB time seen when the thread was deallocated prior to the 200 uses. In order to alleviate this, code has been modified to allow the High Performance DBAT to be pooled if it has not reached the 200-use mark. The POOLINAC timeout value can be used to limit the time the DBAT remains in the pool.
flodubois 270000K6H5 Visits (6380)
Fix Category HOLDDATA (FIXCAT)
What is a fix category? A fix category is an identifier used to group and associate PTFs to a particular category of software fixes. A fix category might be used to identify a group of fixes that are required to support a particular hardware device, or to provide a particular software function. Fix categories are supplied to you in the form of SMP/E FIXCAT HOLDDATA statements. Each FIXCAT HOLDDATA statement associates an APAR and its related fixing PTF to one or more fix categories.
For more information, please refer to http
Current list of FIXCAT HOLDs for DB2 for z/OS
SYSPLEXDS Fixes that enable the sysplex data sharing function or fixes required for data sharing
DB2COEXIST/K Fixes that enable DB2 releases to coexist when in data sharing mode
DB2PARALL/K Fixes for problems with the DB2 parallelism function
DB2STGLK/K Fixes for DB2 storage leak problems
DB2OVRLAY/K Fixes for DB2 storage overlay problems
DB2INCORR/K Fixes for DB2 SQL incorrect output problems
DB2MIGV9/K Fixes that allow the prior release of DB2 to migrate to or fall back from DB2 V9
DB2MIGV10/K Fixes that allow prior releases of DB2 to migrate to or fall back from DB2 V10
ISAOPT/K Fixes required for z/OS software products to support the IBM Smart Analytics Optimizer
Michael_D. 1100004WAH Visits (6257)
End of Service Announcement DB2 9 for z/OS - June 27, 2014
On February 7, 2012, IBM announced the End of Service (EOS) for DB2 9 for z/OS. The effective EOS date is June 27, 2014.
For your convenience, here is a link to the announcement letter:
Michael_D. 1100004WAH Visits (6090)
PTF UK91435 (APAR PM79520)
is required for DB2 10 for z/OS customers in a Data Sharing environment
PTF UK91435 is required for DB2 10 for z/OS customers
DB2 10 re-introduced the ability for remote connections to benefit from the dynamic JCC packages being bound with RELEASE(DEALLOCATE).
1) The package that the remote application will use must be bound with RELEASE(DEALLOCATE), i.e., SYSLN200 for cursor stability without HOLD and SYSLH200 for cursor stability WITH HOLD. We would not suggest rebinding all of the dynamic JCC packages that are used by all applications to use DEALLOCATE. You should bind a copy of the packages under another collection (other than NULLID), and use the currentPackageSet special register to point those privileged applications to those packages.
2) To enable or disable DDF from treating the packages as DEALLOCATE you would use the -MODIFY DDF PKGREL(BNDOPT) command to turn on and -MODIFY DDF PKGREL(COMMIT) to turn off the high performance DBAT behavior. This allows you to force the packages to behave as RELEASE(COMMIT) when you need DDL, BINDs, or utilities to be able to break-in.
As a safety valve the high performance DBATs could only be reused 200 times before they were destroyed, in order to avoid thread footprint growth, and release certain locks.
None of this is new, but IBM released PI20352 (UI19900) mid-2014 to help scenarios where the connections were dropping before the thread was reused 200 times. When this occurred the DIST non-preemptible SRB cpu time could be driven up and the performance benefit of high performance DBATS would degrade. PI20352 allowed high performance DBATs to be pooled, and reused.
However, it was discovered that when these pooled DBATs were reused by a new request from a different application, some of the locks could persist from the previous thread. Because these package allocation and table space intent locks hang around, IBM has now introduced the -MODIFY DDF PKGREL(BNDPOOL) option with PI31597. This option is for customers who want the DBATs to be pooled in case the connection goes away, meaning you want the same application to come back and reuse that DBAT, because the locks could persist on that thread.
Here is a description for the 3 options for PKGREL.
With PI31597 there is another option for the -MODIFY DDF PKGREL() command:
BNDPOOL (new) - The rules of the RELEASE bind option that was specified when the package was bound are applied to any package that is used for remote client processing. The high performance DBAT used by a terminating client connection will be pooled.
BNDOPT - The rules of the RELEASE bind option that was specified when the package was bound are applied to any package that is used for remote client processing. The high performance DBAT used by a terminating client connection will be deallocated. BNDOPT is the default value of the MODIFY DDF PKGREL command. Note: if you have PI20352 applied without PI31597, the BNDOPT behavior is like that of BNDPOOL, where the DBAT will be pooled, possibly leading to the accumulation of locks.
COMMIT - The rules of the RELEASE(COMMIT) bind option are applied to any package that is used for remote client processing. COMMIT is the default value when the CMTSTAT subsystem parameter is set to ACTIVE. If the MODIFY DDF PKGREL command has never been issued, then COMMIT is the default value. No high performance DBATs exist while the PKGREL option is set to COMMIT.
So how do I know which option to use?
The 2 key points here are whether or not the application does more than 200 commits before deallocating or disconnecting, AND whether or not you have segregated different applications to different collections for the DEALLOCATE packages.
To determine such candidates you can look at IFCID 365 and compare 'INITIATED FROM REMOTE SITE' with 'DEALLOCATED CONVERSATIONS'. If the ratio is 200:1 or greater, the package is a great candidate for DEALLOCATE in general.
You can also look in the accounting short report and compare the #DDFS (occurrences) vs. the CONV (conversations). Here we see 244 occurrences and 0 conversations ended, so it would be a prime candidate.
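The screening rule just described can be sketched as a small check. The function name, the handling of the zero-deallocation case, and the threshold parameter are illustrative assumptions based on the 200:1 guidance above:

```python
def deallocate_candidate(initiated_from_remote: int,
                         deallocated_conversations: int,
                         threshold: int = 200) -> bool:
    """Sketch of the IFCID 365 screening rule: a ratio of 'INITIATED FROM
    REMOTE SITE' to 'DEALLOCATED CONVERSATIONS' of 200:1 or greater marks
    the package as a good RELEASE(DEALLOCATE) candidate."""
    if deallocated_conversations == 0:
        # e.g. 244 occurrences and 0 conversations ended: a prime candidate
        return initiated_from_remote > 0
    return initiated_from_remote / deallocated_conversations >= threshold

print(deallocate_candidate(244, 0))    # → True
print(deallocate_candidate(1000, 10))  # → False (ratio 100:1)
print(deallocate_candidate(2000, 10))  # → True  (ratio 200:1)
```

The same arithmetic applies whether the counts come from IFCID 365 or from the #DDFS and CONV fields of the accounting short report.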
Given that you know the behavior of the application as shown above, the settings for PKGREL() could be:
COMMIT - the default; use it if you are not utilizing High Performance DBATs.
BNDOPT - if your applications do not deallocate or disconnect before reusing the thread 200 times.
BNDPOOL - if the applications do not reuse the thread 200 times as shown above, but there is only one application per JCC package collection (not using the default NULLID collection name), then it would not matter as much that the locks persist across threads, because they are accessing the same objects. However, you would get the CPU savings of having the threads pooled, so the thread could actually be reused 200 times before it is destroyed.
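The decision guidance above can be summarized as a simple function. This is an illustrative sketch of the trade-offs as described here, not an IBM-supplied decision rule; the function and parameter names are assumptions:

```python
def choose_pkgrel(reaches_200_uses: bool,
                  dedicated_collection: bool,
                  want_hp_dbats: bool = True) -> str:
    """Illustrative sketch: pick a -MODIFY DDF PKGREL() setting from the
    application profile, following the guidance in the text above."""
    if not want_hp_dbats:
        return "COMMIT"    # default; no high performance DBATs
    if reaches_200_uses:
        return "BNDOPT"    # threads live long enough to hit the 200-use mark
    if dedicated_collection:
        # Pooled DBATs may carry locks across uses, but with one application
        # per collection the same objects are accessed anyway.
        return "BNDPOOL"
    return "COMMIT"        # short-lived apps sharing NULLID: avoid lock carry-over

print(choose_pkgrel(reaches_200_uses=True, dedicated_collection=False))   # → BNDOPT
print(choose_pkgrel(reaches_200_uses=False, dedicated_collection=True))   # → BNDPOOL
```

The final branch reflects the caution in the text about locks persisting across threads when multiple applications share a collection; your own tolerance for that may differ.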
GZJ 1100006WMT Visits (5956)
It is a common belief that DB2 10 and 11 for z/OS can only use 1MB pageable large pages, other than for buffers, if the CEC has Flash Express installed. For example, the text for APAR PM85944 implies just that. However, this is not true. The only requirement is that the CEC be SCM-capable (SCM = Storage-Class Memory). In other words, it does not matter whether Flash Express is actually installed. So if a customer is running on a zEC12 CEC without Flash Express, DB2 could request, and be given, a 1MB pageable large page, residing in a 1MB large page frame. However, if that page needs to be paged out for some reason, at that point it will be broken down into 4KB page frames. To put it another way, pageable large pages are available on an SCM-capable CEC such as the zEC12, with the caveat that if Flash Express is not installed, then if those pages are ever paged out they will be demoted to 4KB page frames. 1MB pageable large pages that are demoted to 4KB page frames as they are paged out will never be coalesced: they will remain 4KB page frames for the remaining life of the IPL.