How to Use a Tracing Profiler on zLinux
kgibm
There are two broad categories of profilers: statistical (sampling) profilers, which periodically sample thread stacks, and tracing profilers, which instrument method entry and exit to record every call.
In our case, we were looking for a tracing profiler on zLinux. We settled on the Rational Agent Controller (RAC), which has a broad set of supported operating systems: AIX, Linux, Linux s/390 (zLinux), Windows, Solaris, and z/OS. Once you've got the agent controller installed and the JVM instrumented, you can either gather data in headless mode and later load it into Rational Application Developer (RAD), or start/pause monitoring remotely from RAD.
The RAC comes with a JVMTI profiling agent which has to be attached to the JVM. This profiler has a lot of native components, which makes setup a bit tricky. First, you'll need to add a generic JVM argument, such as:
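The exact option string depends on your RAC version, but a controlled-mode agent argument typically follows the JPIBootLoader pattern below; treat it as a sketch and verify the options against your install's documentation:

```
"-agentlib:JPIBootLoader=JPIAgent:server=controlled;CGProf"
```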
Note that the argument has to be specified with double quotes to avoid any issues with the semicolon in the Linux launcher. So if you already had some arguments, such as -Xgcpolicy:gencon, then your final generic JVM arguments would be:
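For example, combined with an existing -Xgcpolicy:gencon argument, the final generic JVM arguments would look something like this (the agent string is a sketch based on the standard JPIBootLoader options):

```
"-agentlib:JPIBootLoader=JPIAgent:server=controlled;CGProf" -Xgcpolicy:gencon
```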
Next, we need to tell Linux how to load native library dependencies for libJPIBootLoader, by adding an environment variable to the server configuration:
Name = LD_LIBRARY_PATH
Value = /opt
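Outside of WAS (for example, to sanity-check the setup with a standalone JVM), the equivalent environment can be set as below. The Agent Controller install path here is an assumption; substitute the lib directory of your own install:

```shell
# Hypothetical Agent Controller install path; adjust to your system.
export LD_LIBRARY_PATH=/opt/IBM/AgentController/lib:$LD_LIBRARY_PATH

# Launch a JVM with the profiling agent attached in controlled mode.
java "-agentlib:JPIBootLoader=JPIAgent:server=controlled;CGProf" -jar app.jar
```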
WAS is smart enough to append the library path you specify to its own required library path.
In the example above, we use the server=controlled option, which means that the JVM will not start until RAD connects to it. The reason we did this was so that we could control what gets profiled, since we weren't interested in profiling JVM startup. This option is recommended over server=enabled for high volume profiling. Here are the basic steps we followed:
There is also the option of using server=standalone, which writes the profiling data to a local file and avoids the need for the RAC itself and for a remote connection from RAD. However, when I tried this with a vanilla WAS install, startup took a very long time and produced about 15GB of data, which would have been cumbersome to analyze.
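For reference, the standalone variant changes only the server option in the agent argument (again a sketch following the same JPIBootLoader pattern; check your RAC documentation for file-output options):

```
"-agentlib:JPIBootLoader=JPIAgent:server=standalone;CGProf"
```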
I won't cover all the analysis options from within RAD, as this article does a great job; however, I will highlight a particular approach we took as we were interested in what exactly was taking up the CPU (tracing profilers can also be used to look at usage from a cumulative time perspective, which will capture I/O bottlenecks, etc.).
First, we ran top -b -H -d 1800 -p $PID to gather accumulated CPU time per thread. We captured this at the start of profiling and again at the end, took the difference to find which threads accumulated CPU time, and sorted by that number. Next, within RAD's Execution Time Analysis, select the Call Tree tab and find those threads. Expand each thread and follow the largest paths of cumulative time down the tree. Note that some rows with very large cumulative times are probably just the frames of a thread "waiting for work," such as a call to getTask or await; these can be disregarded.
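The top delta step can be scripted. The sketch below parses two `top -b -H` batch captures and sorts threads by CPU time accumulated between them; the column positions (thread ID first, TIME+ eleventh) are an assumption about your top version, so adjust the indices if your output differs:

```python
# Sketch: diff per-thread CPU time between two `top -b -H` snapshots.
import re

def parse_times(snapshot):
    """Map thread ID -> accumulated CPU seconds from one top batch capture."""
    times = {}
    for line in snapshot.splitlines():
        fields = line.split()
        # Data rows start with a numeric thread ID; assumed layout puts
        # TIME+ (mm:ss.hh) in the eleventh column.
        if len(fields) >= 11 and fields[0].isdigit():
            m = re.match(r"(\d+):(\d+)\.(\d+)", fields[10])
            if m:
                mins, secs, hund = map(int, m.groups())
                times[fields[0]] = mins * 60 + secs + hund / 100.0
    return times

def cpu_deltas(start, end):
    """Threads sorted descending by CPU time accumulated between captures."""
    t0, t1 = parse_times(start), parse_times(end)
    deltas = {tid: t1[tid] - t0.get(tid, 0.0) for tid in t1}
    return sorted(deltas.items(), key=lambda kv: kv[1], reverse=True)
```

The highest entries in the resulting list are the threads worth locating in RAD's Call Tree tab.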
Once you find a high level method of interest (the art of profiling!), right click it and select Show Method Invocation Details. In the third table, "Selected Method Invokes," sort by Cumulative CPU Time, descending (if you don't see this column, make sure the option is selected on one of the RAD attach/profiling screens when you start profiling). This gives the accumulated CPU time at a high level. You can then drill down further by repeating the same procedure on rows from this table.
Note: It appears that the cumulative CPU time in the method invocation details is for the whole tracing profile, not just for the context of the call tree thread stack you arrived from.