Increase in CPU% of OMEGAMON XE 420 on z/OS TEMS when displaying ENQUEUEs
ReggieHubbard
I am a Performance Engineer for a large financial institution with a market capitalization of over $43 billion and more than $12 billion in annual revenue. Additionally, we are one of the world's largest securities servicing providers and a top asset management firm. We use various Tivoli OMEGAMON XE 420 products, including OM XE for IMS, OM XE for CICS, OM XE for DB2, and OM XE on z/OS, to monitor our Production Sysplexes and ensure customer SLAs are being met.
The Operating System Support team recently upgraded one of our Quality PLEXs to z/OS 1.11 (HBB7760). We use GRS exclusively for serialization. Since this change, I have observed a CPU% increase in our OMEGAMON XE 420 on z/OS TEMS and OMEGAMON on z/OS Classic collector address spaces during 'peak' transaction volume processing.
During this 'peak' transaction window (13:30 - 16:30), various Operations and Systems personnel are logged in to our TEMS and Classic collector looking specifically for ENQUEUEs. They do this by displaying various UADVISOR workspaces from the TEMS and, in Classic, by accessing the #01 screen or executing the E.A commands.
Is there anything you could recommend I try for this issue? - Confused in Cambridge
Dear Confused in Cambridge,
The issue described above (displaying ENQUEUEs on the TEMS and/or Classic) impacts customers running z/OS 1.11 or above whose configurations use GRS exclusively for serialization. An increase in the CPU% of GRS can occur while it processes GQSCAN or ISGQUERY REQINFO=QSCAN requests. APAR OA33898 describes the issue. PTFs UA56634 (HBB7760) and UA56635 (HBB7770) are available to address this.
OA33898 can cause the CPU% of OM XE 420 on z/OS to increase because OMEGAMON calls GQSCAN, at least once a minute, to determine which ENQUEUEs are present. Without this APAR fix, customers will experience measurably higher overhead from OMEGAMON XE for z/OS calling GQSCAN in the Classic collector (RCOL), the CUA STC, and the TEMS (DSST) address spaces.
There are four HIPER PTFs that should be installed:
UA47588 - HIPER Reduce CPU consumption in non-proxy TEMS by using cheaper GQSCAN calls.
Problem: Every TEMS in the plex issues a GQSCAN call with XSYS=YES to gather all enqueue conflict information. This is a high-overhead call.
Fix: The expensive XSYS=YES call only needs to be issued by one TEMS in the plex to ensure that all conflicts are reported even if some LPARs do
not have a TEMS online. Modified KOSGQSCA to only issue the XSYS=YES call if this is the Sysplex Proxy TEMS.
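The proxy pattern behind UA47588 can be sketched as follows. This is a minimal illustrative model, not the actual KOSGQSCA code: the structure and function names are hypothetical, standing in for the decision of whether a given TEMS pays for the cross-system (XSYS=YES) scan or falls back to a cheaper local one.

```c
#include <stdbool.h>

/* Hypothetical model of a TEMS instance. In the real product exactly
 * one TEMS in the sysplex holds the Sysplex Proxy role. */
struct tems {
    char name[9];            /* 8-character z/OS-style name + NUL */
    bool is_sysplex_proxy;   /* only one TEMS in the plex is the proxy */
};

/* The UA47588 fix in miniature: only the Sysplex Proxy TEMS issues the
 * expensive cross-system (XSYS=YES) GQSCAN; every other TEMS uses a
 * cheaper local-only scan, and the proxy's scan still covers LPARs
 * that have no TEMS online. */
bool use_cross_system_scan(const struct tems *t)
{
    return t->is_sysplex_proxy;
}
```

The point of the design is that one XSYS=YES scan already sees every conflict in the plex, so repeating it on each TEMS only multiplies GRS overhead.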
UA47907 - HIPER Reduce CPU in the TEMS by removing obsolete processing and restructuring existing code.
Problem: A number of obsolete and unnecessary OMEGAMON commands are issued in the KXDM3KCO persist table.
Fix: Modified KXDM3KCO and the corresponding CSRC code to remove obsolete and unused M2 RULES. Also consolidated rules that issued the
MCPU and SYS/ENV commands multiple times per cycle.
UA54380 - HIPER Fix a bug in USS GQSCAN processing in the TEMS.
Problem: The code processed rc=4 from GQSCAN the same way an rc=0 was processed. When GQSCAN returns rc=4, there are no RIBEs to be
processed and the answer area is not valid.
Fix: Bypass processing of enqueue data when GQSCAN returns rc=4 (no RIBEs exist for the resource).
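The guard that UA54380 adds amounts to checking the return code before touching the answer area. The sketch below is an illustrative model only: the function name, constants, and parameters are hypothetical, but the logic mirrors the described fix, where rc=4 means "no RIBEs, answer area invalid" and must not be processed like rc=0.

```c
#include <stddef.h>

/* GQSCAN return codes as described in the APAR text: rc=0 means the
 * answer area holds valid enqueue data; rc=4 means no RIBEs exist and
 * the answer area must not be used. */
enum { GQSCAN_RC_OK = 0, GQSCAN_RC_NO_RIBES = 4 };

/* Hypothetical processing routine: returns the number of entries
 * handled. The UA54380 fix is the early return, which stops the code
 * from walking an invalid answer area when rc=4 is returned. */
size_t process_enqueue_data(int rc, const void *answer_area, size_t nribes)
{
    if (rc != GQSCAN_RC_OK || answer_area == NULL)
        return 0;    /* rc=4: nothing to process, buffer is not valid */

    /* ... walk the nribes entries in the answer area here ... */
    return nribes;
}
```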
UA56141 - HIPER Fix storage creep in the page dataset table.
Problem: Tables created by KM3KTAB are only freed after KM3KTAB returns an EOD indication for the table. KM3PDAG merges two tables of the
OM PART command output. When the first table was exhausted, the final call to KM3KTAB was bypassed. After this workspace is refreshed many times,
many orphaned tables accumulate, which causes high CPU in table processing.
Fix: Modified the code to perform the final KM3KTAB call.
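The storage-creep pattern here is a general one: a resource that is only released on a final end-of-data call will leak if the caller stops iterating one call early. A minimal sketch, with hypothetical names standing in for the KM3KTAB contract described above:

```c
#include <stdbool.h>

/* Illustrative model of a table whose storage is released only when
 * its fetch routine is driven one last time and reports end-of-data. */
struct table {
    int rows_left;
    bool freed;
};

/* Hypothetical fetch routine: returns true while rows remain. On the
 * call after the last row it signals EOD and releases the table, which
 * models the "freed only after an EOD indication" behavior. */
bool table_next(struct table *t)
{
    if (t->rows_left > 0) {
        t->rows_left--;
        return true;         /* a row was returned */
    }
    t->freed = true;         /* EOD: table storage is released */
    return false;
}

/* The UA56141 fix in miniature: keep calling until EOD is seen, so the
 * final call is never bypassed and no orphaned table is left behind. */
void drain_table(struct table *t)
{
    while (table_next(t))
        ;                    /* the last iteration observes EOD */
}
```

Skipping that final call once per workspace refresh is exactly how the orphaned tables accumulated.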
You should also check for the latest PSP bucket info - http
Thanks to Reggie Hubbard and David Goldsmith.