Michael_D. 1100004WAH Visits (3697)
PM42528: SUPPORT FOR DELETING A MEMBER FROM A DATA SHARING GROUP
To use this support:
- The APAR or PTF providing delete member support must be applied to all members. Since deleting a member requires all members to be stopped, there is no pre-conditioning APAR / PTF.
- The member being deleted must be quiesced, with no outstanding units of work, active utilities, or retained locks.
- There should be no objects in restricted states. Use the -DISPLAY DATABASE(*) RESTRICT command to verify.
- All surviving members must be at DB2 10 New Function Mode (NFM).
- The member to be deleted must be quiesced at some point before the surviving members are stopped so that the quiesced state is saved in all the surviving members' BSDSs.
- Stop all members of the data sharing group.
- Make backup copies of all BSDSs.
- Run the change log inventory utility (DSNJU003) DELMBR DEACTIV control statement against all the group members' BSDSs to deactivate the member that is to be deleted.
- Restart the surviving members of the group.
- When the logs of the member to be deleted are no longer needed, proceed.
- Stop all members of the data sharing group.
- Make backup copies of the BSDSs.
- Run the change log inventory utility (DSNJU003) DELMBR DESTROY control statement against all the group members' BSDSs to destroy the member that is to be deleted.
- Restart the surviving members of the group.
- After all surviving members have been restarted, the logs and BSDS of the deleted member are no longer needed.
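The two DSNJU003 passes above can be sketched as a batch job run against each member's BSDS pair. This is a sketch only: the load library, BSDS data set names, and member ID are placeholders, and the DELMBR keyword spelling should be verified against the DSNJU003 documentation for your service level. In the second pass, replace DEACTIV with DESTROY.

```jcl
//DELMBR   EXEC PGM=DSNJU003
//STEPLIB  DD DISP=SHR,DSN=DSNA10.SDSNLOAD      PLACEHOLDER DB2 LOAD LIBRARY
//SYSUT1   DD DISP=OLD,DSN=DB2A.BSDS01          BSDS COPY 1
//SYSUT2   DD DISP=OLD,DSN=DB2A.BSDS02          BSDS COPY 2
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DELMBR DEACTIV,MEMBERID=3
/*
```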
Delete data sharing member: related APARs and PTFs
APAR : PM31003, PM31004, PM31006, PM31009
PTF : UK67512, UK67958, UK69286, UK65750
DB2 9, which GA'd back in March 2007, is replaced by DB2 10 for z/OS (5605-DB2). For further information, please refer to the announcement letters accessible via these links:
End of Service Announcement DB2 9 for z/OS - June 27, 2014
On February 7, 2012, IBM announced the End of Service (EOS) for DB2 9 for z/OS. The effective EOS date is June 27, 2014.
For your convenience, here is a link to the announcement letter:
System z Integrated Information Processor (zIIP)
The IBM zIIP is available on all IBM zEnterprise, IBM System z10 and IBM System z9 servers. The zIIP can support many technologies, and can help implement, integrate, and optimize new workloads on System z.
DB2 z/OS zIIP related performance:
PM12256: DRDA PERFORMANCE IMPROVEMENT USING TCP/IP
OA35146: NEW FUNCTION - ALLOW NON-CLIENT PREEMPTABLE SRBS TO JOIN/LEAVE AN ENCLAVE AFTER IT HAS BEGUN - z/OS APAR for PM12256
DB2 z/OS informational APAR II14219 (DB2 z/OS ZIIP EXPLOITATION "SUPPORT USE" INFORMATION)
PM24723: IFCID 225 REAL STORAGE STATISTICS ENHANCEMENTS
DB2 APAR PM24723 is very important: it addresses real storage monitoring via a new extension to IFCID 225.
See z/OS APAR OA37821 and the corresponding DB2 APAR PM49816 for this issue.
OA37821: NEW FUNCTION IARV64 REQUEST=COUNTPAGES SUPPORT FOR UNSERIALIZED PROCESSING.
Useful commands for monitoring the use of 1 MB real storage page frames on z10 and z196:
-DISPLAY BUFFERPOOL(BP1) SERVICE=4
This DB2 display reports, via DSNB999I messages, how many 1 MB page frames are in use.
The z/OS command D VIRTSTOR,LFAREA shows the total LFAREA and its allocation, split across 4 KB and 1 MB frames, via message IAR019I.
PM56845: PROVIDE OPTION FOR OPTIMIZE FOR 1 ROW TO ALLOW SOME SORT ACCESS PATH
In all versions of DB2, the OPTIMIZE FOR 1 ROW clause requests that DB2 choose an access path that avoids a sort. In DB2 versions prior to 10, it was still possible to obtain an access path with a sort, even though that path was strongly discouraged. In DB2 10, DB2 does not consider access paths with sorts and instead chooses the lowest-cost access path that does not require a sort.
This APAR provides an option to return to the previous version OPTIMIZE FOR 1 ROW behavior. As such, it does not eliminate the risk of an inefficient access path being chosen with OPTIMIZE FOR 1 ROW when the efficient access requires a sort. However, it does limit that exposure to what already existed in DB2 prior to DB2 10.
For queries that need sorts, the recommended solution is to avoid coding the OPTIMIZE FOR 1 ROW clause. Without the OPTIMIZE FOR 1 ROW clause, DB2 will choose access paths based on cost and will not make an effort to avoid sorts.
Local workarounds:
- For queries that need sorts for efficient access, avoid coding the OPTIMIZE FOR 1 ROW clause.
- Change the application to code OPTIMIZE FOR 2 ROWS instead.
- Wait for APAR PM56845, now open, which provides an option for OPTIMIZE FOR 1 ROW to allow sort access plans.
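To illustrate the OPTIMIZE FOR 2 ROWS workaround (the table and column names here are hypothetical):

```sql
-- In DB2 10, OPTIMIZE FOR 1 ROW restricts DB2 to sort-avoiding access paths
SELECT ACCT_ID, BALANCE
  FROM ACCOUNTS
 WHERE CUST_ID = ?
 ORDER BY BALANCE DESC
 OPTIMIZE FOR 1 ROW;

-- Workaround: OPTIMIZE FOR 2 ROWS still biases the optimizer toward fast
-- retrieval of the first rows, but allows access paths that use a sort
SELECT ACCT_ID, BALANCE
  FROM ACCOUNTS
 WHERE CUST_ID = ?
 ORDER BY BALANCE DESC
 OPTIMIZE FOR 2 ROWS;
```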
Higher CF CPU utilization can occur in a DB2 10 for z/OS data sharing environment during Delete_Name processing: Delete_Name requests/sec can be significantly higher in DB2 10 for z/OS than in DB2 9 for z/OS. In V10, when a pageset/partition becomes non-GBP-dependent, the Delete_Name process deletes only data entries, to avoid cross-invalidation processing at that time and to allow cleanup of the directory entries later when other pages are registered. The V9 Delete_Name process deleted both data and directory entries. In DB2 10 for z/OS, the CF cost of Delete_Name can be significantly higher because CFCC processing is not as efficient when only data entries are deleted.
PM51467: CF DELETE_NAME PERFORMANCE IN DB2 10 FOR Z/OS
PM31614: PERFORMANCE OF PACKAGE ALLOCATION IMPROVEMENT
Some of the code was moved to a service task to reduce the CPU time required for packages that have very short-running SQL statements or that issue frequent commits.
DB2 10 can reduce total DB2 CPU demand by 5% to 20% when you take advantage of all the enhancements. Many CPU reductions are built directly into DB2, requiring no application changes. Some enhancements are implemented through normal DB2 activities such as rebinding, restructuring database definitions, improving applications, and utility processing. The CPU demand reduction features have the potential to provide significant total cost of ownership savings, depending on the application mix and transaction types.
Improvements in optimization reduce costs by processing SQL automatically with more efficient data access paths. Improvements through a range-list index scan access method, list prefetch for IN-list, more parallelism for select and index insert processing, better work file usage, better record identifier (RID) pool overflow management, improved sequential detection, faster log I/O, access path certainty evaluation for static SQL, and improved distributed data facility (DDF) transaction flow all provide more efficiency without changes to applications. These enhancements can reduce total CPU enterprise costs because of improved efficiency in the DB2 10 for z/OS.
DB2 10 includes numerous performance enhancements for Large Objects (LOBs) that save disk space for small LOBs and that provide dramatically better performance for LOB retrieval, inserts, load, and import/export using DB2 utilities. DB2 10 can also more effectively REORG partitions that contain LOBs.
This IBM Redbooks® publication provides an overview of the performance impact of DB2 10 for z/OS, discussing the overall performance and possible impacts when moving from version to version. We include performance measurements that were made in the laboratory and provide some estimates.
Keep in mind that your results are likely to vary, as the conditions and work will differ.
In this book, we assume that you are somewhat familiar with DB2 10 for z/OS.
See DB2 10 for z/OS Technical Overview, SG24-7892-00, for an introduction to the new functions.
agburke 060001QPDN Visits (2916)
First is the Red Alert for PM51093 that was recently posted on the Red Alert website.
Other potential data loss HIPERs include:
Correction: APAR remains open. Corrective relief, AM55070, is available from DB2 Technical Support. The estimated PTF availability date is April 6.
Notes: This is a problem specific to free space reuse and later rollback. There should be no latent data corruption; that is, the only way a customer would have data corruption is if they actually encountered an abend during rollback, in which case the page would be marked broken and they should report the problem to IBM Support.
There is no need for customers to proactively check for problems. If such is desired, then any utility such as DSN1COPY, REORG, COPY or any application access would report a page marked broken.
Correction: PTF UK76352 is available (not yet RSU)
Notes: There is no actual data loss, however, the potential exists for corrupted data pages and orphan pointer records. Customers wishing to proactively check for this condition can use DSN1COPY with the CHECK option, which runs offline and is non-disruptive.
However, the IBM COPY utility automatically detects corrupted data pages, so a normal backup cycle should be sufficient to validate data. If DB2 detects a problem, the page may be marked broken; this can be reset with REPAIR LOCATE db.ts PAGE(nnnn) RESET. The IBM REORG utility will correct the page corruption.
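A minimal DSN1COPY CHECK job for the proactive scan mentioned above might look like the following sketch. The library and data set names are placeholders; DSN1COPY runs offline against the underlying VSAM linear data set, and since this run only validates pages, no output copy is produced.

```jcl
//CHECK    EXEC PGM=DSN1COPY,PARM='CHECK'
//STEPLIB  DD DISP=SHR,DSN=DSNA10.SDSNLOAD      PLACEHOLDER DB2 LOAD LIBRARY
//SYSPRINT DD SYSOUT=*
//SYSUT1   DD DISP=SHR,DSN=DB2A.DSNDBC.DBNAME.TSNAME.I0001.A001
//SYSUT2   DD DUMMY                             CHECK ONLY, NO COPY WRITTEN
```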
Many customers run into confusion when Level 2 asks for the 'Query Environment Capture' in a PMR. We called this 'Service SQL' in Optimization Service Center, and it could also be gathered by DSNPLI8 prior to V10.
They are referring to the output from the Data Studio tool, which has a button for this, or the output from the ADMIN_INFO_SQL stored procedure.
All the methods for capturing documentation regarding access path degradation to send to level 2 are documented here:
The following developerWorks article outlines the stored procedure's uses and some examples, and goes above and beyond the documentation in the manuals:
Data Studio v3.1 incorporates this procedure into a GUI.
Here is a link to a step-by-step guide on getting started with Data Studio and query tuning. It says v2.2.1 but it is applicable to v3.1 as well (and has pictures).
If the tool attaches to a DB2 V10 subsystem, it will use the ADMIN_INFO_SQL procedure by default. But if you connect to a DB2 V9 subsystem with the tool, you need to select that stored procedure when you do a 'Capture Query Environment'. While in the 'Capture' tab with the SQL in the text box, click the 'Capture Query Environment' button. Once in the Report Options window, check the box that says 'Use the following stored procedure to collect data about the query environment'. This way all the doc is consistent.
The DB2 10 for z/OS and IRLM 2.3 product tapes have been refreshed. All DB2 10 for z/OS and IRLM 2.3 PTFs that were closed and available up to and including December 1, 2011, (Service level PDO 1148) have been integrated into refreshed product tapes. These product tapes are available for new orders placed on or after January 27, 2012.
Workfiles have changed quite a bit from V8 to V9. When we moved from DB2 V8 to V9, we combined the TEMP database (for DGTTs) and the workfile database, and began favoring 32K table spaces if the records were over 100 bytes in length. Customers faced issues with runaway DGTTs eating up valuable workfile table spaces and impeding other work.
So IBM introduced a zParm to re-establish a delineation between the two uses.
We then came out with a best practices informational APAR based on what customers had seen.
With the advent of DB2 10, V8 customers migrating skip-level were introduced to this zParm after the fact, and were not always ready for the implications.
As part of the best practices, we suggest that when going into V9 or V10 you size the 4K workfile buffer pool (BP7?) and the 32K workfile buffer pool (BP32K7?) equally in CM mode. You will also need to create more 32K workfile table spaces, since not just DGTTs will use these now. The amount of space used by 4K and 32K table spaces for workfiles, as well as occurrences when DB2 wanted a 32K table space but was not able to get one, is exposed in the Statistics report.
Once you get a handle on how much the 32K table spaces are used, increase them. In the field a 75/25 split is usually seen, where 75% of the time 32K table spaces are favored, and hence 75% of the workfile space should be allocated to 32K table spaces.
In V10, DGTTs favor partition-by-growth table spaces first, then table spaces with nonzero secondary quantities, and lastly those with zero secondary quantities. So, as a soft separation, you might want to add some PBGs to the workfile database if DGTTs have been an issue.
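A sketch of adding a partition-by-growth work file table space for that soft separation follows. The table space name, MAXPARTITIONS, and SEGSIZE values are placeholders, and DSNDB07 is the conventional work file database name on non-data-sharing systems; adjust all of these for your environment.

```sql
-- PBG work file table space: favored first by DGTTs in V10
CREATE TABLESPACE WRK32K01 IN DSNDB07
  BUFFERPOOL BP32K
  MAXPARTITIONS 10
  SEGSIZE 16;
```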
The zIIP eligibility field representing zIIP eligible work that executed on a general CP was deprecated in DB2 10. So in the accounting fields for SECP CPU, or Specialty Engine executed on a Central Processor, you will see '0' now. That is until PM57206 is applied and reinstates that field. The field was removed due to the difficulty in accurately accounting for the associated enclaves. It was brought back by popular demand due to customer charge-back practices.