HELP WITH HANDLING THE RELEASE INCOMPATIBLE CHANGE FOR VARCHAR(DECIMAL), CAST(DECIMAL AS CHAR), CAST(DECIMAL AS VARCHAR)
Michael_D. 1100004WAH Visits (24774)
In Version 10, the formatting of decimal data has changed for the VARCHAR function and the CAST specification with decimal input and VARCHAR or CHAR output. There are two changes:
- Leading zeros are removed
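As a hypothetical illustration of the leading-zero change described above (this sketch only models that one behavior; it is not DB2 code, and the full output rules are in the DB2 10 incompatibility documentation):

```python
from decimal import Decimal

def char_of_decimal_v9(d: Decimal) -> str:
    # Pre-V10 style: a leading zero is kept for values between -1 and 1.
    return f"{d:f}"

def char_of_decimal_v10(d: Decimal) -> str:
    # DB2 10 style per the change described above: the leading zero is removed.
    s = f"{d:f}"
    if s.startswith("0."):
        return s[1:]
    if s.startswith("-0."):
        return "-" + s[2:]
    return s
```

The practical risk is that applications which parse the CHAR/VARCHAR result positionally, or compare it as a string, may break on the shorter output.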
Michael_D. 1100004WAH Visits (13310)
PM56845: PROVIDE OPTION FOR OPTIMIZE FOR 1 ROW TO ALLOW SOME SORT ACCESS PATH
In all versions of DB2, the OPTIMIZE FOR 1 ROW clause requests that DB2 choose an access path that avoids a sort. In DB2 versions prior to 10, it is still possible to obtain an access path with a sort, even though such a path is strongly discouraged. In DB2 10, DB2 will not consider access paths with sorts and will instead choose the lowest-cost access path that does not require a sort.
This APAR provides an option to return to the previous version OPTIMIZE FOR 1 ROW behavior. As such, it does not eliminate the risk of an inefficient access path being chosen with OPTIMIZE FOR 1 ROW when the efficient access requires a sort. However, it does limit that exposure to what already existed in DB2 prior to DB2 10.
For queries that need sorts, the recommended solution is to avoid coding the OPTIMIZE FOR 1 ROW clause. Without the OPTIMIZE FOR 1 ROW clause, DB2 will choose access paths based on cost and will not make an effort to avoid sorts.
Local workarounds:
- For queries that need sorts for efficient access, avoid coding the OPTIMIZE FOR 1 ROW clause.
- Change the application to code OPTIMIZE FOR 2 ROWS.
- Wait for APAR PM56845, now open, which provides an option for OPTIMIZE FOR 1 ROW to allow sort access paths.
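Conceptually, the DB2 10 selection rule described above can be sketched as follows (a hypothetical model for illustration, not DB2 optimizer code):

```python
def choose_access_path(paths, optimize_for_1_row):
    """Each path is a (cost, requires_sort) tuple; lower cost is better.

    Hypothetical model of the DB2 10 behavior described above: with
    OPTIMIZE FOR 1 ROW, sort-free paths win regardless of cost, and the
    optimizer falls back to cost only if every path needs a sort.
    """
    if optimize_for_1_row:
        sort_free = [p for p in paths if not p[1]]
        if sort_free:
            return min(sort_free, key=lambda p: p[0])
    return min(paths, key=lambda p: p[0])
```

This is exactly why a much cheaper sort-based path can lose out under OPTIMIZE FOR 1 ROW in DB2 10.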
Here is the story:
Now, what follows is NOT meant to sound like a sales pitch, but the correct technical solution is to add more zIIPs. It would be prudent to have the zIIPs run in the 30-50% CPU busy range on average (peaks would obviously be higher). zIIPs should be thought of as assist processors and they are not intended to be run as hard as GCPs. This recommendation is not specific to DB2 10, but with DB2 10, it has become even more critical. This is the option that Customer X decided to take - they have now refreshed their processor technology and increased the ratio of zIIP engines to GCP engines. Consequently the original elapsed time performance problems have gone away.
Michael_D. 1100004WAH Visits (6787)
timmzzz 060000RTR7 Visits (6222)
When running more than one DB2 for z/OS subsystem on a single LPAR, there is a danger that a "runaway" DB2 subsystem (e.g. one with a storage leak) takes out the whole LPAR. As this could cause a domino effect of outages triggered by that single DB2 subsystem, the REALSTORAGE_MAX subsystem parameter was introduced. This parameter was the hidden ZPARM SPRMRSMX in DB2 V9 and, with PM24723/PK18354, becomes an opaque ZPARM. With DB2 10 you now have REALSTORAGE_MAX.
As mentioned above, it is recommended to set REALSTORAGE_MAX if more than one DB2 subsystem resides on a single LPAR. Current best practice is to set this parameter to 1.5 to 2 times the normal DB2 subsystem real storage usage. If REALSTORAGE_MAX (DB2 10) or SPRMRSMX (DB2 9) is set too low, the DB2 subsystem might get terminated before a real storage problem actually exists. If you set it too high, the LPAR might be gone before the parameter is ever considered.
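The sizing rule of thumb above can be expressed as a quick sanity check (a hypothetical helper for illustration; the GB figures are made up):

```python
def realstorage_max(normal_usage_gb: float, factor: float = 1.75) -> float:
    # Best practice per the text: 1.5x to 2x normal real storage usage.
    # Too low risks terminating a healthy subsystem; too high risks the
    # LPAR dying before the limit is ever considered.
    if not 1.5 <= factor <= 2.0:
        raise ValueError("factor should stay within the 1.5-2.0 band")
    return normal_usage_gb * factor
```

For example, a subsystem that normally uses 10 GB of real storage would get a setting somewhere between 15 and 20 GB.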
In addition, new DB2 10 messages accompany these actions.
Michael_D. 1100004WAH Visits (5957)
PM24723: IFCID 225 REAL STORAGE STATISTICS ENHANCEMENTS
DB2 APAR PM24723 is very important: it supports real storage monitoring via a new extension to IFCID 225.
See z/OS APAR OA37821 and the corresponding DB2 APAR PM49816 for this issue.
OA37821: NEW FUNCTION IARV64 REQUEST=COUNTPAGES SUPPORT FOR UNSERIALIZED PROCESSING.
Useful commands for monitoring the use of 1MB real storage page frames on z10 and z196:
DISPLAY BUFFERPOOL(BP1) SERVICE=4
The display provides output, via DSNB999I messages, showing how many 1MB page frames are in use.
D VIRTSTOR,LFAREA
This z/OS command shows the total LFAREA and splits the allocation across 4KB and 1MB frames, via the IAR019I message.
DFSMS striping was first introduced in 1993 and it was a success. Striping has served a dual purpose: the first was to avoid hot spots on individual disks, and the second was to increase sequential throughput for individual data streams. Since 1993, alternative techniques have been developed to avoid or dissipate a "hot spot".
Consequently, DFSMS striping is perhaps not needed or is not the best technique for avoiding hot spots.
IBM DB2 for z/OS Development used to recommend striping because parallelism helped over slow channels, but channel speed is now so high that the disadvantage of doing more I/Os tends to outweigh any advantage of parallelism.
Just because one particular workload might benefit from log striping, does not mean that no workloads will suffer.
DB2 uses various data transfer sizes. In all current versions of DB2, dynamic prefetch reads at least 128K, while DB2 utilities read or write at least 256K. Depending on the buffer pool size, sequential prefetch may read up to 256K and utilities may read and write up to 512K. However, if a DB2 query, insert stream, or utility is CPU bound, striping will not make it faster. This is a common situation, except where creating or restoring image copies is concerned.
Striping also will not help if any of the components in the data paths are saturated. These paths consist of channels, host adapters, and the bus in each control unit. Take for example the DB2 RECOVER utility. When you want to recover an individual table space, the paths are not saturated and striping will reduce the time to restore the table space. However, if you want to recover the entire system, there will be many parallel RECOVER jobs which will use parallelism, and the paths may get saturated, in which case striping may well not reduce the time to restore the system.
A category of I/O that can possibly benefit from striping is list prefetch. Log apply makes extensive use of list prefetch. If a table space is striped across a single RAID rank, list prefetch may possibly improve modestly if the devices consist of HDDs , and may show more improvement if the devices consist of SSDs. Heavy parallelism tends to reduce the striping benefits with HDDs, but striping continues to do well with SSDs under heavy amounts of parallelism.
To summarize the value of striping for table spaces, the most benefit may occur with list prefetch when using SSDs, and the applications that benefit the most will likely be long running queries and utilities. Striping may help with RECOVER: restore from image copies, and to then apply log records. However, everyday workloads will not necessarily get any benefit from striping.
During normal OLTP operations, DB2 active logs are written, and are read only to archive the log records and for software replication. Most log writes are synchronous I/Os, and each I/O consists of a small number of 4K pages. Faster channels have increased log bandwidth. However, most OLTP workloads never push log throughput beyond 10 MB/sec. I/O response times for online transactions gradually degrade when log buffers queue up waiting for the previous I/O to complete.
As technology has evolved, many customers have used remote DASD replication to protect themselves against disasters. Remote replication is often the cause of I/O performance problems. Since all log writes caused by OLTP are synchronous, log I/O is very susceptible to problems caused by remote DASD replication. Each I/O introduces a risk of another delay, and since striping causes more I/Os, striping increases the risk of a delay. Remote replication is not the only thing that causes stress on a control unit, but it is one of the biggest causes.
Striping the active log may help the performance of active log reads as of DB2 9, since the I/O quantity for active log reads was increased from 12 pages to 120 pages. However, given 4 Gbps and faster links, the risks of striping the active logs will likely outweigh the benefits - see below.
DFSMS striping of the DB2 active log is no longer a tuning option that DB2 for z/OS Development is actively testing and recommending. There have been 'hole in the recovery log stream' issues with the use of striping with DB2 10 for z/OS in failure scenarios, albeit in a tiny number of customers. The problem is that there is no encapsulation of a striped I/O, i.e., when there are failures some stripes make it to disk and some may not. DFSMS striping has always worked that way. So DB2 for z/OS Development is now actively discouraging customers from using striping when the benefits are speculative at best. For customers who are already striping active log data sets there is a performance risk in removing striping, but on the other hand the chance of hitting the above problem is very rare. Nevertheless, there is risk. There is also a risk in using striping with table spaces with a large page size, i.e., a torn page when not all of the stripes made it to disk.
As has been true in the past, hardware technology is forever changing and the value of DFSMS striping will probably continue to erode. Just because one particular workload may possibly benefit, does not mean that no workloads will suffer.
timmzzz 060000RTR7 Visits (5243)
Having a robust preventive maintenance process is a best practice in managing any IT
environment including z/OS. By avoiding known defects, which can have a major impact on
the functioning of the system, preventive maintenance improves availability. Having a
proactive preventive service strategy reduces the number of rediscovered defects and avoids
unplanned outages. IBM recommends that preventive maintenance be installed at least two to
four times a year. In addition, IBM recommends that HIPER, PE Fix, Security/Integrity and
Pervasive PTFs be installed more frequently.
Read more about a solid preventive maintenance strategy here.
Additional information about the RSU and CST can be found at the CST website at:
agburke 060001QPDN Visits (5119)
We have had several questions lately around z/OS 1.12 and the performance benefits seen by DB2.
A customer of mine first saw a performance degradation with z/OS 1.12 and DB2 9 with a DGTT that had many indexes defined on it. Basically any query that created and dropped many temporary objects could see a CPU increase.
PM17542 was created to avoid this overhead. Unfortunately it was marked PE (PTF in Error), and several z/OS and DB2 APARs were to follow.
Today we are back with a positive story as described in PM46045 which also summarizes the APARs that should be applied to see a performance benefit in DB2.
flodubois 270000K6H5 Visits (4693)
Fix Category HOLDDATA (FIXCAT)
What is a fix category? A fix category is an identifier used to group and associate PTFs to a particular category of software fixes. A fix category might be used to identify a group of fixes that are required to support a particular hardware device, or to provide a particular software function. Fix categories are supplied to you in the form of SMP/E FIXCAT HOLDDATA statements. Each FIXCAT HOLDDATA statement associates an APAR and its related fixing PTF to one or more fix categories.
For more information, please refer to http
Current list of FIXCAT HOLDs for DB2 for z/OS
SYSPLEXDS Fixes that enable the sysplex data sharing function or fixes required for data sharing
DB2COEXIST/K Fixes that enable DB2 releases to coexist when in data sharing mode
DB2PARALL/K Fixes for problems with the DB2 parallelism function
DB2STGLK/K Fixes for DB2 storage leak problems
DB2OVRLAY/K Fixes for DB2 storage overlay problems
DB2INCORR/K Fixes for DB2 SQL incorrect output problems
DB2MIGV9/K Fixes that allow the prior release of DB2 to migrate to or fall back from DB2 V9
DB2MIGV10/K Fixes that allow prior releases of DB2 to migrate to or fall back from DB2 V10
ISAOPT/K Fixes required for z/OS software products to support the IBM Smart Analytics Optimizer
Michael_D. 1100004WAH Visits (4682)
End of Service Announcement DB2 9 for z/OS - June 27, 2014
On February 7, 2012, IBM announced the End of Service (EOS) for DB2 9 for z/OS. The effective EOS date is June 27, 2014.
For your convenience, here is a link to the announcement letter:
Michael_D. 1100004WAH Visits (4642)
PTF UK91435 (APAR PM79520)
is required for DB2 10 for z/OS customers in a Data Sharing environment.
agburke 060001QPDN Visits (4344)
High Performance DBATs were introduced in DB2 10. In order to utilize this feature, you must have the JCC packages (i.e. SYSNL200) bound with RELEASE(DEALLOCATE), as well as the -MODIFY DDF PKGREL(BNDOPT) option in place.
This allows distributed requests to benefit from the performance aspect of not going through deallocation after each commit. Caution should be used when employing this option, as you would not want every distributed application coming in as RELEASE(DEALLOCATE) and using up all of the available DBATs. You can control this more granularly if you bind the dynamic JCC packages into an alternate collection and then allow specific applications to use it by having them specify that collection in CurrentPackageSet.
APAR PI20352 was opened because increased DDF SRB time was sometimes seen when the thread was deallocated prior to the 200 uses. To alleviate this, the code has been modified to allow the High Performance DBAT to be pooled if it has not reached the 200-use mark. The POOLINAC timeout value can be used to limit the time the DBAT remains in the pool.
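The pooling decision introduced by PI20352 can be sketched conceptually (a hypothetical model for illustration; the 200-use constant and POOLINAC behavior are taken from the description above, the function and attribute names are made up):

```python
def release_hiperf_dbat(use_count: int, pi20352_applied: bool,
                        max_uses: int = 200) -> str:
    # Hypothetical model of the behavior described above.
    if use_count >= max_uses:
        return "deallocate"  # reuse limit reached, as before
    if pi20352_applied:
        # With the APAR, an early-terminating High Performance DBAT is
        # pooled instead; POOLINAC limits how long it stays in the pool.
        return "pool"
    return "deallocate"  # pre-APAR: early deallocation (extra DDF SRB time)
```

The key change is the middle branch: before the APAR, any termination short of 200 uses went straight to deallocation.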
GZJ 1100006WMT Visits (4299)
It is a common belief that DB2 10 and 11 for z/OS can only use 1MB pageable large pages, other than for buffers, if the CEC has Flash Express installed. For example, the text for APAR PM85944 implies just that. However, this is not true. The only requirement is that the CEC be SCM-capable (SCM = Storage-Class Memory). In other words, it does not matter whether Flash Express is actually installed or not. So if a customer is running on a zEC12 CEC without Flash Express, DB2 could request, and be given, a 1MB pageable large page, residing in a 1MB large page frame. However, if that page needs to be paged out for some reason, at that point it will be broken down into 4KB page frames. To put it another way, pageable large pages are available on an SCM-capable zEC12 CEC, with the caveat that if Flash Express is not installed, then if those pages are ever paged out they will be demoted to 4KB page frames. 1MB pageable large pages that are demoted to 4KB page frames when they are paged out will never be coalesced: that is, they will remain 4KB page frames for the remaining life of the IPL.
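The page-out behavior described above can be modeled as follows (a hypothetical sketch for illustration, not z/OS real storage manager code):

```python
def page_out_frame(frame_size: str, flash_express_installed: bool) -> str:
    """Model of the demotion rule described above.

    Without Flash Express, a 1MB pageable large page that is paged out is
    broken into 4KB frames and never re-coalesced for the life of the IPL;
    with Flash Express it can be paged out intact.
    """
    if frame_size == "1MB" and not flash_express_installed:
        return "4KB"  # demoted permanently
    return frame_size
```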
agburke 060001QPDN Visits (4242)
Workfiles have changed quite a bit from V8 -> V9. When we moved from DB2 V8 to V9 we combined the Temp. database (DGTTs) and Workfile database, and began favoring 32k table spaces if the records were over 100 bytes in length. Customers faced issues with runaway DGTTs eating up valuable workfile table spaces and impeding other work.
So IBM introduced a zParm to reestablish a delineation between the two uses.
We then came out with a best practices informational APAR based on what customers had seen.
With the advent of DB2 10, customers coming from V8 were introduced to this zParm after the fact, and were not always ready for the implications.
As part of the best practices, we suggest that when going into V9 or V10 you size the 4k workfile buffer pool (BP7?) and the 32k workfile buffer pool (BP32k7?) equally in CM mode. You will need to create more 32k workfile table spaces as well, since not just DGTTs will use these now. The amount of space used by 4k and 32k tables for workfiles, as well as the occasions when DB2 wanted a 32k table but was not able to get one, is exposed in the Statistics report.
Once you get a handle on how much the 32k tables are used, increase them. In the field a 75/25 split is usually seen: 75% of the time 32k tables are favored, and hence 75% of the workfile space should be allocated to 32k table spaces.
In V10, DGTTs favor partition-by-growth (PBG) table spaces first, then table spaces with >0 secondary quantities, and lastly those with 0 secondary quantities. So as a soft separation, you might want to add some PBGs to the workfile database if DGTTs have been an issue.
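The V10 DGTT placement preference described above amounts to a simple ranking, which can be sketched like this (a hypothetical illustration; the attribute names are made up, not DB2 internals):

```python
def dgtt_preference_rank(ts: dict) -> int:
    # Lower rank = preferred. Order per the text: PBG table spaces first,
    # then those with a nonzero secondary quantity, then zero-secondary ones.
    if ts.get("pbg"):
        return 0
    if ts.get("secondary_qty", 0) > 0:
        return 1
    return 2

def pick_workfile_ts(candidates: list) -> dict:
    # Choose the most-preferred workfile table space for a DGTT.
    return min(candidates, key=dgtt_preference_rank)
```

Adding one or two PBG table spaces to the workfile database therefore steers DGTTs toward them first, giving the soft separation mentioned above.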