Examining performance-related issues in WebSphere Commerce
jwbarnes
Everything seems to be running fine with your WebSphere Commerce installation: orders are flowing, customers are browsing without problems, and things look great. Then it happens: your site stops responding and you are not sure why. Before you panic, you need to look at what is going on; then you can look at why it happened.
When you call support, it's a good idea to have a basic sense of what happened. Was it a hang? Did the process crash? Was CPU usage high, or did you see an out-of-memory error? First we will look at how you can determine this, and second at the general troubleshooting steps you can take on your own. It's important to have a general idea of what happened, or is happening, to your system when you start troubleshooting and when you decide what data to collect. Depending on the situation, some data may not provide much information, while leaving out other data can hinder solving the problem.
The first thing you want to look at is how to monitor the processes on your system. On most UNIX® type systems this will be a tool such as top or tprof, depending on the platform. These tools can show the individual threads in a process, so you can see exactly what in the JVM is causing a problem. On Windows you will run into a problem: Task Manager does not give you that kind of information, which is why we recommend a tool such as TPROF for Windows.
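Under the covers, per-thread monitors like top read the kernel's per-thread accounting. As a minimal sketch of what they enumerate, here is a Python snippet that lists the thread IDs of a process via the Linux `/proc` filesystem (Linux-specific, and an illustration only, not a replacement for top or tprof):

```python
import os

def thread_ids(pid):
    """List the kernel thread IDs of a process by reading
    /proc/<pid>/task (Linux-specific)."""
    return sorted(int(tid) for tid in os.listdir(f"/proc/{pid}/task"))

# Inspect our own process; the main thread's ID equals the PID on Linux.
print(thread_ids(os.getpid()))
```

These are the same thread IDs that `top -H` displays per thread, which is what lets you tie an expensive thread back to the owning JVM process.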
Once you can see what is going on with the CPU and the threads, examine the data. If CPU usage is high, collect the top or tprof output and compare it against javacores; it is good to take javacores at regular intervals. The technote entitled "How to map threads from tprof data to javacores" describes how to map the threads in the profiler output to the hex IDs in the javacore. With that, you can see which threads are consuming CPU.
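The core of that mapping is a base conversion: top and tprof report thread IDs in decimal, while javacores list the native thread ID in hexadecimal. A small sketch (the sample thread ID is made up for illustration):

```python
def to_javacore_id(tid):
    """Convert a decimal thread ID from top/tprof output into the
    0x... hexadecimal native thread ID format shown in a javacore."""
    return "0x" + format(tid, "X")

# A hot thread reported by top as TID 41235 would appear in the
# javacore as native ID 0xA113:
print(to_javacore_id(41235))  # 0xA113
```

Search the javacore for that hex ID to find the Java thread name and stack, which tells you what code the CPU-hungry thread was running.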
Another piece of the puzzle is the verbose garbage collection output. It shows how frequently collections occur, whether the heap is about to be used up, and how much time your JVM is actually spending cleaning the heap. This is good to know because while the heap is being cleaned, the JVM is not doing much else. You can download the IBM Pattern Modeling and Analysis Tool for Java™ Garbage Collector and review the output on your own system. For an overview of how to use it, see the webcast replay entitled "How to analyze verbosegc trace with IBM Pattern Modeling and Analysis Tool for IBM Java Garbage Collector".
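The key figure the analysis tool derives is GC overhead: the fraction of wall-clock time spent paused for collection. As a rough sketch of the arithmetic (the pause times and window below are invented sample numbers, not from a real verbosegc log):

```python
def gc_overhead(pause_ms, window_ms):
    """Fraction of a wall-clock window spent in GC pauses."""
    return sum(pause_ms) / window_ms

# Five collections totalling 900 ms of pause time in a 60-second window:
print(f"{gc_overhead([150, 200, 180, 170, 200], 60_000):.1%}")  # 1.5%
```

A few percent is usually tolerable; if this number climbs steeply, or pause frequency keeps rising, the heap is likely under pressure and it is time to look at what is filling it.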
Now if you notice the heap is filling up and you are running out of room, you will want to review a heap dump. This can be more challenging depending on the size of the heap produced; as a rule of thumb, the analysis tool needs close to as much memory to process the heap as the heap itself took up. I use the Memory Analyzer tool, along with two add-ons: the IBM Diagnostic Tool Framework for Java Version 1.10 and the IBM Extensions for Memory Analyzer. With those installed you get a much better view of the heap dumps. Run the Leak Suspects report and let it work out what could be filling up the heap, and review the heap in the histogram view to see whether there is an abnormal number of objects from a particular source. You can view a webinar on how to use it at eclipse.org (external link) under the Memory Analyzer Project.
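To make the histogram idea concrete, here is a toy Python sketch of what that view computes: a count of live objects grouped by class, sorted largest first. (This is an illustration of the concept, not how Memory Analyzer actually parses a dump; the sample data is made up.)

```python
from collections import Counter

def histogram(objects):
    """Count objects by class name, most numerous first --
    the same shape of view a heap histogram gives you."""
    return Counter(type(o).__name__ for o in objects).most_common()

# A toy "heap" where one class clearly dominates:
sample = ["order"] * 10_000 + [42] * 3
print(histogram(sample))  # [('str', 10000), ('int', 3)]
```

In a real dump, a single class with a wildly disproportionate instance count (cache entries, session objects, result rows) is exactly the kind of anomaly the Leak Suspects report flags.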
Taking all of this together: your system is running fine today, so what should you do NOW?
With all of these items in place, you should be prepared to troubleshoot performance issues when they arise, as well as narrow down the code that might be causing the problem.