Sitworld: Event History #15 High Results Situation to No Purpose
John Alvord, IBM Corporation
Draft #1 – 25 May 2018 - Level 1.00000
The Event History Audit project is complex to grasp as a whole. The following series of posts will track individual diagnostic efforts and how the new project aided in the process.
This was seen in the Summary section:
Total Result Bytes: 1023369249 989.47 K/min Worry[197.89%]
This environment is receiving almost one megabyte of results data per minute. Experience has shown that problems often occur when the result rate exceeds 500K per minute, and that is the source of the "worry" percentage. Your mileage may vary based on the server running the workload, but even if the server can handle the workload, it is never a good idea to perform useless work.
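The worry percentage appears to be the observed result rate measured against that 500K/minute experience threshold. A minimal sketch of the arithmetic, assuming that is how the figure is derived:

```python
# Worry percentage: observed result rate vs. the ~500K/min experience threshold.
THRESHOLD_K_PER_MIN = 500.0          # rate above which problems often occur

def worry_pct(rate_k_per_min: float) -> float:
    """Express the observed result rate as a percentage of the threshold."""
    return rate_k_per_min / THRESHOLD_K_PER_MIN * 100.0

print(f"{worry_pct(989.47):.2f}%")   # rate taken from the Summary section
```

With the 989.47 K/min rate from the Summary section this reproduces the Worry[197.89%] figure.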
This was seen in Report011:
EVENTREPORT011: Event/Results Budget Situations Report by Result Bytes
deb_prccpu_xuxw_aix,UNIXPS,2784,60,978,13.46%,0.97,269576,750499584,73.34%,0,0,0,0,0,0,269576,750499584,0,0,978,39,*IF *VALUE Process.CPU_Pct *GE 1.00 *AND *VALUE Process.Process_Command_U *EQ '/opt/BESClient/bin/BESClient' *AND *VALUE Process.CPU_Pct *LT 4.00,
So there is a situation deb_prccpu_xuxw_aix which runs every 60 seconds, checks for one process, and alerts when that process's CPU% is between 1% and 4%. It runs on 39 agents connected to this remote TEMS.
Remarkably, this one situation causes an estimated 73.34% of the total estimated workload. It is an estimate because the data does not include information about situations which started before the Event Status History data begins, and the actual result volume can be higher because of real-time data requests.
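The 73.34% share follows directly from the two captured figures: the situation's result bytes from the Report011 line divided by the total result bytes from the Summary section. A quick check, assuming those two numbers are the inputs:

```python
# Share of the total result workload attributed to one situation.
situation_result_bytes = 750_499_584     # Report011 line for deb_prccpu_xuxw_aix
total_result_bytes     = 1_023_369_249   # Total Result Bytes from the Summary section

share_pct = situation_result_bytes / total_result_bytes * 100.0
print(f"{share_pct:.2f}%")               # matches the 73.34% in the report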
Deep dive into the report details
Scan or search ahead for Report 999. It is sorted first by node, then by situation, then by time at the TEMS. I will first describe what you see, using the guidance from the column description line.
This excerpt shows only a single open event and then a close event, but there were many listed in the full report.
EVENTREPORT999: Full report sorted by Node/Situation/Time
Situation - The situation name, which can differ from the Full Name you see in the Situation editor, for example when the Full Name is too long.
Node - Managed System Name or Agent Name
Thrunode - The managed system that knows how to communicate with the agent, the remote TEMS in simple cases
Agent_Time - The time as recorded at the Agent during TEMA processing. You will see cases where the same Agent time appears in multiple TEMS seconds because the Agent can at times produce data faster than the TEMS can process it. Simple cases have 999 as the last three digits; when a lot of data is being generated, other cases will have tie breakers of 000, 001, ..., 998. This is the UTC [formerly GMT] time at the agent.
TEMS_Time - The time as recorded at the TEMS during processing. This is the UTC [formerly GMT] time.
Deltastat - The event status. You generally see Y for open and N for closed; there are other statuses not recorded here.
Reeval - Sampling interval [re-evaluation] in seconds and 0 means a pure event.
Results - How many results were seen. The simplest case is 1. You would see every result if you used the -allresults control; in this report you only get a warning when there are multiple results.
Atomize - The table/column specification of the value used for Atomize. It can be null, meaning not used.
DisplayItem - The value of the atomize in this instance. Atomize is just the first [up to] 128 bytes of another string attribute.
LineNumber - A debugging helper that tells which line of the TSITSTSH data dump supplied this information.
PDT - The Predicate or Situation Formula as it is stored.
The descriptor line comes before the results:
deb_prccpu_xuxw_aix,deb_gb02cap070debx7:KUX,REMOTE_gbnhham080tmsxm,1180410002104999,1180410001843000,Y,60,1,,,3850,*IF *VALUE Process.CPU_Pct *GE 1.00 *AND *VALUE Process.Process_Command_U *EQ '/opt/BESClient/bin/BESClient' *AND *VALUE Process.CPU_Pct *LT 4.00,
Situation was deb_prccpu_xuxw_aix, the agent was deb_gb02cap070debx7:KUX, and the thrunode was REMOTE_gbnhham080tmsxm. Agent_Time was 1180410002104999 and TEMS_Time was 1180410001843000, so the agent clock is running a few minutes ahead. It was an open event [Y], the sampling interval was 60 seconds, there was one result, and there was no DisplayItem. The record came from line number 3850 in the input. The PDT is shown at the end.
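These timestamps are in the ITM CYYMMDDHHMMSSmmm layout, where the first three digits are years since 1900 and the trailing three digits are the tie breaker described earlier, not milliseconds. A small sketch, assuming that layout, that decodes the two example times and shows the agent clock skew:

```python
from datetime import datetime

def parse_itm_time(ts: str) -> datetime:
    """Decode an ITM CYYMMDDHHMMSSmmm timestamp.
    First three digits are years since 1900 ("118" -> 2018); the last
    three digits are a tie breaker, so they are dropped here."""
    year = 1900 + int(ts[0:3])
    month, day, hour, minute, second = (int(ts[i:i + 2]) for i in (3, 5, 7, 9, 11))
    return datetime(year, month, day, hour, minute, second)

agent = parse_itm_time("1180410002104999")   # Agent_Time from the descriptor line
tems  = parse_itm_time("1180410001843000")   # TEMS_Time from the descriptor line
print(agent - tems)                          # agent clock runs ahead of the TEMS
```

For this record the agent is about 2 minutes 21 seconds ahead of the TEMS clock.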
Following the descriptor line are one or more P [Predicate/formula] lines as used by the Agent logic, followed by the results contributing to the TEMS logic.
,,,,,,,P,*PREDICATE=UNIXPS.CPUPERCENT >= 100 AND UNIXPS.UCOMMAND = N'/opt/BESClient/bin/BESClient' AND UNIXPS.CPUPERCENT < 400,
Following the predicate line are one or more result lines. These are all in the form Attribute=value, where the attribute is given in Table.Column form and the value is raw data. There is a leading count giving the index of the result line. In this case there was one P line and one result line; sometimes there are many more, but not this time.
Here is where I extracted the value result. This is raw data and represents 1.44%.
I will skip repeating the full details. Next you see the results coming in false and then true; each time the result is true I record the value.
What is the problem and how do we fix it?
From this capture, 13.46% of the events and 73.34% of the result workload came from this one situation, and from only 39 agents!
Doing that work constitutes a substantial investment, and it fails the basic test of a good situation: to be Rare, Exceptional, and Fixable. This condition is certainly not rare, it seems to be happening all the time, and no one is "fixing" it.
Therefore the situation should be rethought and reworked until it is rare, exceptional, and fixable. If that is not possible, the situation should be stopped and deleted to make room for other useful work at the agent(s), the TEMS, and the event receivers. If it were stopped, the workload would drop substantially. Thus the situation should be reviewed and justified.
This is tale #15 of using Event History Audit to understand and review a high overhead situation and thus potentially save resources.
History and Earlier versions
There are no binary objects associated with this project.