In the last entry we discussed how the TPROF utility that is part of the Performance Inspector package of tools can be used to carry out performance profiling where HealthCenter is not available. HealthCenter has a number of other capabilities, including: lock analysis, garbage collection monitoring, class loading and environment information. This leads to the question: "Are there any other tools I can use on earlier releases of Java where HealthCenter is unavailable?"
Lock analysis for Java 1.4.2: JLM
The answer is yes, at least for the case of lock analysis. The Performance Inspector package also provides a utility called Java Lock Monitor (JLM), which provides exactly the same data that you see in HealthCenter, although without the extra level of analysis that highlights potential or actual problems in the locking data.
JLM is available on AIX, Linux, Windows and z/OS where the IBM JRE is being used. It provides data on which locks are being used, covering both Java object locks (synchronized objects and blocks) and system locks (JVM monitors and JNI-requested raw monitors), and shows how often those locks are contended, i.e. how often they are a bottleneck in the code execution.
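As a quick illustration of the Java object locks JLM tracks, both of the following constructs take a monitor that would appear in the report. This is a hypothetical sketch, not code from the tools or any application discussed here:

public class Counter {
    private final Object lock = new Object();
    private int count;

    // A synchronized method takes the lock on the Counter instance itself.
    public synchronized void increment() {
        count++;
    }

    // A synchronized block takes the lock on an explicitly chosen object.
    public void incrementWithBlock() {
        synchronized (lock) {
            count++;
        }
    }
}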
Enabling Java Lock Monitor for a Java application
In order to track the lock usage of the Java VM and the Java application, a JProf agent, supplied as part of the package from the Performance Inspector tools site, needs to be loaded into the Java process. This can be done as follows:
For Java 5.0 and earlier (via the JVMPI interface):
java -Xrunjprof AppName
For Java 5.0 and later (via the JVMTI interface):
java -agentlib:jprof AppName
The JProf agent opens a listener socket in the Java process, which waits to be notified by the supplied rtdriver utility to start and stop data collection, and to write the gathered data out to file.
Starting the Java process with the JProf agent causes the following information to be written out to the console:
>java -Xrunjprof -Xmx1024M BuyerSim
Path and prefix for log files: .\log
JPROF Version_9.6.1 (JVMPI)
(OnDisk : D:\ibmperf\bin\java\jprof.dll)
(Built : Wed Sep 16 12:13:46 Central Daylight Time 2009)
(Source : Wed Sep 16 12:00:00 Central Daylight Time 2009)
(Platform : Windows-x86, OS=v5.1.2600-Service Pack 2, PerfUtil=v9.5.6, PERFDD=v9.5.5, CPUS=1, HT=0, MC=0, CpuSig=0x00000F07, Variable_TSC, Intel, Hostname=caragua)
(JVM Info : IBM SOV)
JPROF started at 20:46:17 on 2009/10/28
HeapDump disabled, since EnableHeapDump not specified
Server socket (1720) created for PID 4256. Listening on port 8888
Collecting lock data from the Java application
Once the Java application has been started with the JProf agent loaded, the rtdriver utility can then be run; it connects to the JProf socket on the local machine and starts a command prompt that can be used to control the data gathering in the Java process:
** RTDriver for Windows-x86: v2.00.0, 02/05/2009 **
** RTDriver: 20:21:18 connecting to caragua (188.8.131.52) on port 8888 ... OK
** RTDriver: 20:21:18 piClient handshaking completed successfully by host on port 8888 **
20:21:18 Enter command>
Using the jlmstart command will start data collection:
20:21:18 Enter command> jlmstart
And the jlmdump command can be used to write out the data gathered:
20:21:28 Enter command> jlmdump
This causes the Java application process to write the gathered data out to its local directory. It also writes a message to the stdout of the Java process to show that the file has been written.
The jlmdump command can be used several times to get the data at different points. The data gathering can also be stopped using the jlmstop command:
20:23:53 Enter command> jlmstop
Understanding the log-jlm report
The log-jlm report breaks the locks down into two major sections:
- "System (Registered) Monitors"
- "Java (Inflated) Monitors"
The "System (Registered) Monitors" are the locks from the Java Virtual Machine itself, eg. the Java heap lock ("Heap lock"), and raw monitors requested through calls to RawMonitorCreate:
System (Registered) Monitors
%MISS GETS NONREC SLOW REC TIER2 TIER3 %UTIL AVER_HTM MON-NAME
11 26 9 1 17 0 0 7 83898563 Thread queue lock
7 42468 42468 2941 0 0 0 24 63676 Heap lock
0 18 9 0 9 0 0 7 83896461 Monitor Cache lock
0 12 12 0 0 0 0 1 5241357 Sleep lock
0 8 8 0 0 0 0 1 7858271 IO lock
0 8 8 0 0 0 0 6 81832672 Intern String Table lock
0 8 8 0 0 0 0 0 1618 JIT Code lock
0 8 8 0 0 0 0 7 94381627 Free Class Loader Cache Entry lock
0 8 8 0 0 0 0 6 81834000 System Heap lock
0 8 8 0 0 0 0 7 94384774 Binclass lock
0 8 8 0 0 0 0 7 94375454 JITC CHA lock
0 8 8 0 0 0 0 7 94382865 JNI Pinning lock
0 8 8 0 0 0 0 7 94384139 JNI Global Reference lock
Of the system monitors, the Heap lock is the only one likely to be of interest: contention on the Heap lock means that an extremely high rate of object allocation is occurring, and that this is causing a performance bottleneck.
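As an illustration, a pattern like the following hypothetical sketch, with several threads each allocating large numbers of short-lived objects, is the kind of workload that can drive Heap lock contention:

public class AllocationHeavy implements Runnable {
    public void run() {
        for (int i = 0; i < 1000000; i++) {
            // Every iteration allocates a new temporary array; with several
            // threads doing this, the allocator's Heap lock can become hot.
            byte[] scratch = new byte[1024];
            scratch[0] = (byte) i;
        }
    }

    public static void main(String[] args) {
        for (int i = 0; i < 4; i++) {
            new Thread(new AllocationHeavy()).start();
        }
    }
}

The usual fix is to reduce the allocation rate, for example by reusing buffers rather than creating new ones on every iteration.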
The "Java (Inflated) Monitors" section lists all of the Java synchronised objects that have had some degree on contention between threads. If these has been no contention then the synchronised object will not be listed:
Java (Inflated) Monitors
%MISS GETS NONREC SLOW REC TIER2 TIER3 %UTIL AVER_HTM MON-NAME
99 854 170 169 684 3185149 109485 0 3358887 [21F70260] java/io/PrintStream@00463C40 (Object)
0 1185233 1185233 284 0 6105043 126952 1 829 [21F995E0] BuyerSim$Cache@004B6E38 (Object)
This shows two synchronized objects which have had some degree of contention: an instance of a PrintStream, and a BuyerSim$Cache object.
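The PrintStream entry is a classic symptom of many threads writing to the same stream, since PrintStream methods such as println() synchronize on the stream internally. The following hypothetical sketch (not the actual BuyerSim code) shows the kind of pattern that produces this entry:

public class NoisyWorker implements Runnable {
    public void run() {
        for (int i = 0; i < 10000; i++) {
            // println() synchronizes on the shared System.out PrintStream,
            // so every thread queues behind the same monitor.
            System.out.println("working: " + i);
        }
    }

    public static void main(String[] args) {
        for (int i = 0; i < 8; i++) {
            new Thread(new NoisyWorker()).start();
        }
    }
}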
The meaning of the columns is provided at the end of the report:
%MISS : 100 * SLOW / NONREC
GETS : Lock Entries
NONREC : Non Recursive Gets
SLOW : Non Recursives that Block
REC : Recursive Gets
TIER2 : SMP: Total try-enter spin loop cnt (middle for 3 tier)
TIER3 : SMP: Total yield spin loop cnt (outer for 3 tier)
%UTIL : 100 * Hold-Time / Total-Time
AVER_HTM : Hold-Time / NONREC
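As a quick sanity check against the tables above: the Heap lock row has SLOW = 2941 and NONREC = 42468, so %MISS = 100 * 2941 / 42468 ≈ 7, and the PrintStream row has SLOW = 169 and NONREC = 170, so %MISS = 100 * 169 / 170 ≈ 99; both match the reported values.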
What to look for in the log-jlm report
The most important value to look at is %MISS, which shows how often a thread was blocked trying to get a lock because it was already owned by another thread. The higher the %MISS value, the more contention has occurred on that lock.
Where a high %MISS value shows contention, the next value to look at is AVER_HTM, the average time the lock was held by a thread. If this value is low, the contention is occurring because many threads are trying to take the same lock, and reducing the number of threads will likely improve performance. If the value is high, the lock is being held for a long time because a lot of work is being done under it, and moving some of that work outside of the lock (as sketched below) will likely improve performance.
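As a sketch of the second case (hypothetical code, not taken from the BuyerSim application), work that does not need the lock can be hoisted out of the synchronized block, shortening the hold time:

import java.util.HashMap;
import java.util.Map;

public class CacheUpdate {
    private final Map cache = new HashMap();

    // Before: the expensive computation runs while the monitor is held,
    // inflating the average hold time (AVER_HTM) for this lock.
    public void putSlow(Object key) {
        synchronized (cache) {
            cache.put(key, expensiveComputation(key));
        }
    }

    // After: compute outside the lock and synchronize only for the update,
    // so the monitor is held for a much shorter time.
    public void putFast(Object key) {
        Object value = expensiveComputation(key);
        synchronized (cache) {
            cache.put(key, value);
        }
    }

    // Hypothetical stand-in for real per-entry work.
    private Object expensiveComputation(Object key) {
        return String.valueOf(key).toUpperCase();
    }
}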
More discussion on reducing lock contention is available in the Java Troubleshooting Guide section on resolving lock contention.
Differences between JLM and HealthCenter
There are very few differences between JLM and HealthCenter in terms of the data presented, as they both gather the data in the same way. HealthCenter, however, adds a layer of analysis that helps you interpret the data and alerts you to locks that may be causing a performance bottleneck.