Happy Thanksgiving to everyone. I hope everyone was able to get a good meal and time with family today.
This week I'm writing to you from Seoul, South Korea (it is actually Friday, the day AFTER Thanksgiving, here, yet the Macy's Thanksgiving parade I am watching via Slingbox is still on). I'm working with some colleagues here, doing some mentoring and skills transfer to help broaden the problem determination skills within IBM. Which brings me to today's topic. We encountered a classic application hang. Sometimes, but not all the time, the administrator would restart the application on WAS v8.5 and, when the test team started to apply load, the application would hang. Javacores from kill -3 showed all threads stuck in createOrWaitForConnection. Those of you who follow my blog probably know the various techniques I've posted for debugging this situation. As we had no access to the developers, it was up to us to figure out what was causing the hang. Random twiddling of various AIX OS-level parameters didn't work (random changes never do). If they waited long enough, the application would sometimes recover and start processing again.
After watching the testing go on for a while, I finally suggested we increase the connection pool maximum size to 2n+1, where n = thread pool maximum. The team had set the connection pool maximum equal to the thread pool maximum. There was some disbelief that we should go down this path: any good administrator knows we want classic funneling, where thread pool max is larger than connection pool max, to make optimal use of memory, CPU, etc. They re-ran the test and, after the 5th attempt, accepted that we could not recreate the hang. I've posted this command before:
netstat -an |grep ESTA |grep <port#> |wc -l
which gives a connection count to the database on port#. It may be double the actual value (showing both the source and destination sides of each connection), so you may have to divide it in half. In our case, with thread pool max at 50 and connection pool max set to 101, we captured as many as 90 established connections to the database at any one time. Obviously the application developers were following the anti-pattern of opening a second connection to the database before closing the first, which resulted in the deadlock our team in Seoul was observing.
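As a sketch of that counting logic, here is the same pipeline run against a canned sample (the addresses and DB port 50000 are invented; on a live system you would pipe `netstat -an` directly as shown above):

```shell
# Canned netstat output standing in for `netstat -an` on a live box.
cat > netstat_sample.txt <<'EOF'
tcp4  0  0  10.0.0.5.33001  10.0.0.9.50000  ESTABLISHED
tcp4  0  0  10.0.0.5.33002  10.0.0.9.50000  ESTABLISHED
tcp4  0  0  10.0.0.5.33003  10.0.0.9.50000  ESTABLISHED
tcp4  0  0  10.0.0.5.44001  10.0.0.9.7276   TIME_WAIT
EOF

# Count ESTABLISHED connections involving the (hypothetical) DB port 50000.
count=$(grep ESTA netstat_sample.txt | grep '\.50000' | wc -l)
echo "established connections to DB: $count"
# If the database runs on the same host, each connection appears twice
# (both endpoints are local), so halve the count in that case:
# count=$((count / 2))
```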
So why wasn't it deadlocking with each and every test? That comes down to randomness. While load tests may follow a set process and scripts, there is some variability between runs. It may not vary widely test after test, but the variability exists in the timing on the server: various processes may or may not be running at any given point in time, and load on the CPU or tasks the OS is doing can subtly change the timing. Timing is key. In some cases the test team got lucky and the test would work; other times the timing was off and the application would deadlock. This particular anti-pattern is very sensitive to timing. Get the wrong timing and the application will deadlock, and hard.
In addition, when they waited a while, the application would recover. This is because, under the covers, WAS quietly reclaims connections: it knows how long threads have been holding connections open, and once a threshold (timeout) is reached it begins actively reclaiming connections that have been open too long. This returns free connections to the pool, and the threads that were stuck in createOrWaitForConnection can resume processing.
What is the lesson learned here? When load testing an unknown application, it might be worth starting with connection pool max set to 2n+1 of the thread pool max and using the netstat command (or your application monitoring tools) to see how many connections the application attempts to use. Then, once experience is gained with the application, reduce the connection pool to something more reasonable based on the observed high water marks in connection pool utilization. This is a much easier tactic than trying to debug an application that is deadlocked in createOrWaitForConnection.
After an application outage or an extremely negative performance event, one needs to conduct root cause analysis to determine the next corrective course of action. Having done this many times, let me document some of the steps in the first/initial phases of figuring out just what happened.
1. Inventory
The first task is to inventory what you have and how it is configured and deployed. This includes all software version information, configuration items for the application, pool sizes, etc.
Once that information is gathered, understand what may be missing and ask a lot of questions. Is the software at the latest version or fixpack level? If not, why not? Is there anything in the patches subsequent to the version in production that may address the problems encountered? Are there any odd configurations (i.e. JDBC pool size 3x larger than the thread pool size, 300 second timeouts, etc.)? Understand odd configurations and try to determine why they exist. Often this is difficult because the people who initially configured and deployed the environment have moved on to other projects and the team you're dealing with is simply in maintenance mode.
2. Discovery / Data Collection
In order to solve a problem we have to have data about the problem. No data, no resolution, because any decision is just a guess, and guesses do not work. My assumption here is that we are investigating Java based applications.
a. Were thread dumps collected during the negative event? If not, why not? On Unix based systems, thread dumps are collected using 'kill -3 <pid>' (this doesn't "kill" the process; it just sends signal #3, which is caught by the JVM, and the JVM dumps all the Java threads at that point in time). If thread dumps were not caught in the past, collect them during all negative events in the future. Thread dumps are a crucial piece of the puzzle to help narrow down what is going wrong.
b. Is verbose GC (garbage collection) enabled? If not, why not? Verbose (and the term is unfortunate as it is not that verbose) GC is another crucial piece of data to understanding what the memory utilization was like during the negative event.
c. If the application was written in house, then initiate a code review. Software is written by humans, and humans err. It could be a bug in the application that only kicks in during the appropriate planetary alignment. Reviewing code on a periodic basis is a good idea in general, even if you are not having any problems.
d. What backends are the applications accessing? Is there any information from the backend that would indicate participating in the negative event (i.e. log files, DB2 snapshots, etc)? It would not be the first time that some negative condition in the backend was causing a front end backlog. It could also be related to bugs in the application (see 2c above).
e. Are any application monitoring tools in place? Java is a robust environment that allows for rather detailed application monitoring of various factors like pool utilization, application response time, SQL response times, etc. Not having an application monitor in place simply limits the ability to understand what happened. Having an application monitor in place also allows for alerts to be issued when a negative event is detected. This allows for proactive actions to be taken by people who can troubleshoot the problem and hopefully fix it before the users ever notice.
f. Look in the application log files. There may be an indication of what is going on in the application logs. This really depends on how well the developers implemented logging in the application, and may or may not be of any use. Fingers crossed!
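For steps (a) and (b) above, here is a small sketch of what the collection might look like (the 30 second interval and the example PID are assumptions; on IBM JVMs the javacore lands in the server's working directory):

```shell
# (a) Take n javacores from a running JVM, 30 seconds apart. Signal 3
# (SIGQUIT) is caught by the JVM, which writes a javacore instead of
# terminating.
take_javacores() {
    pid=$1
    n=${2:-3}
    i=0
    while [ "$i" -lt "$n" ]; do
        kill -3 "$pid"
        i=$((i + 1))
        [ "$i" -lt "$n" ] && sleep 30
    done
    return 0
}
# usage (hypothetical PID): take_javacores 12345 3

# (b) Verbose GC is enabled with generic JVM arguments, e.g. on IBM J9:
#     -verbose:gc -Xverbosegclog:verbosegc.%pid.%seq.log,5,10000
# (5 rolling log files of 10000 GC cycles each)
```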
Get through this initial set of steps and then you can go on to the next phase which is actually figuring out just what went wrong. Which I'll write about in my next installment.
Report scheduler enhancements in Maximo v7.5
As with any online transaction application, most enterprises need to pull reports from their environment. Reports tend to be (a) scheduled to repeat and (b) heavy users of CPU and memory. Therefore having more control over the report scheduler is a good thing to look at in Maximo v7.5.
This is the page to follow if there seem to be any Maximo performance or stability problems.
On Solaris on SPARC we're seeing a scenario of high CPU with the majority of threads doing work in similar thread stacks (see below), with the top of the stack sometimes in montReduce, squareToLen, multiplyToLen, or subN. The scenario is a number of new incoming TLS connections, but a bug identified as 8153189 causes high CPU. On the Solaris on SPARC platform it appears there is no fix available, even though there is a fix for Solaris on x86 (it is not enabled by default; you have to use -XX parameters to enable it. See earlier link). I am still waiting on the Java team to confirm the fix status.
This particular scenario is playing out in the tiers between IHS and WAS. The workaround is to minimize the frequency of TLS handshakes: configure IHS so that connections persist and are not destroyed, and reconfigure WAS to allow unlimited requests per connection. See the addendum at the end of this post:
"WebContainer : 123” daemon prio=3 tid=0x00123456 nid=0xfffa runnable [0x1234568a0]
at com.ibm.crypto.provider.RSACore.a(Unknown Source)
at com.ibm.crypto.provider.RSACore.rsa(Unknown Source)
at com.ibm.crypto.provider.RSACipher.a(Unknown Source)
at com.ibm.crypto.provider.RSACipher.engineDoFinal(Unknown Source)
at javax.crypto.Cipher.doFinal(Unknown Source)
at java.security.AccessController.doPrivileged(Native Method)
Servers > Application servers > $SERVER > Web container settings > Web container transport chains > * > HTTP Inbound Channel > Select "Use persistent (keep-alive) connections" and "Unlimited persistent requests per connection" (and then restart the server)
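On the IHS side, the corresponding httpd.conf settings might look like the following (a sketch only; the 60 second timeout is an assumption, and your plug-in/SSL configuration may impose its own limits):

```
# httpd.conf sketch: keep client connections open and allow unlimited
# requests per connection, minimizing new TLS handshakes.
KeepAlive On
# 0 means unlimited requests per keep-alive connection
MaxKeepAliveRequests 0
KeepAliveTimeout 60
```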
Modified by polozoff
The popular XC10 appliance has periodic firmware upgrades. However, one client was experiencing slower response times from the XC10 after a recent firmware upgrade. After much looking, we saw in the AIX client and XC10 packet traces that the ACK from the AIX client was taking 200ms on almost every response from the XC10. This anomaly can be remedied on AIX by setting:
no -o tcp_nodelayack=1
The topic gets into interesting TCP/IP conversations about the Nagle algorithm, MTU, piggybacking ACKs on data packets, and timeouts I was not aware of, but the AIX level TCP/IP configuration change resolved the problem. A similar setting exists for Linux (see the TechRepublic link below). Note: this can increase the number of packets on the network; however, in our testing after making the change we did not see that side effect. Still, collect the necessary tcpdump/iptrace to trust but verify.
Edit Nov 20 to add some references and interesting related reading
For some reason I had problems downloading the Liberty Profile 9 beta, getting 404 Not Found responses. This link, sent to me by a colleague, worked. Not sure why, but if you're having trouble downloading the beta, follow the link.
As applications grow over time they tend to add features and functions and then one day they run out of native heap as more and more Java classes are piled in.
I am promoting this link, written by one of my colleagues, which covers how to use -Xmcrs and set it to 200M or higher. The fact that I have seen this twice in the past month tells me this is a growing (pun intended) problem.
Edit: Oct 30, 2015 and Nov 17, 2015
The Native OOM (NOOM) landscape continues to shift. A better approach is to offset the heap to a different area in the address space with -Xgc:preferredHeapBase. With this argument, one can place the Java heap past the initial 4 GB of address space, allowing native code to use almost all of the lower 4 GB.
Note that -Xmcrs is still used alongside this option. Here is the latest technote, which supersedes the link I provided previously. More on troubleshooting out of memory errors.
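For reference, the relevant options might look like this on the JVM command line (the values are illustrative only; tune them for your own workload and address space layout):

```
-Xgc:preferredHeapBase=0x100000000   place the Java heap above the first 4 GB
-Xmcrs200M                           reserve compressed-references metadata space up front
```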
As part of a troubleshooting exercise we uncovered what appears to be a not commonly known limitation in host names.
"Avoid using the underscore (_) character in machine names. Internet standards dictate that domain names conform to the host name requirements described in Internet Official Protocol Standards RFC 952 and RFC 1123. Domain names must contain only letters (upper or lower case) and digits. Domain names can also contain dash characters ( - ) as long as the dashes are not on the ends of the name. Underscore characters ( _ ) are not supported in the host name. If you have installed WebSphere Application Server on a machine with an underscore character in the machine name, access the machine with its IP address until you rename the machine."
In over 3 decades in IT, one of the consistent themes of my job has been transforming under performing business units for various clients. Some of the scenarios I've been called in to transform have been:
- retailers suffering from unplanned outages causing revenue loss and potential loss of customer loyalty for their brand
- customer service centers having users calling the call center because the application is not performing at "market speed"
- financial institutions or health organizations having to shut down services due to users seeing other user's data
Those are just three examples of under performing scenarios, each of which affects the end user's experience. If those users are not locked into the brand, they could potentially leave, go to a competitor, and never come back. The underlying theme in these scenarios needs to be addressed by someone with extensive operational and application development background in information technology. Here are some of the strategies I have used to transform under performers into over achievers.
One of the first things I learned in my career was taking charge. This is not as easy as it sounds. First, it meant having extreme confidence in myself and my decisions. In the early days sometimes I was successful but other times I was not and needed to step back and reconsider what the next steps had to be. What follows are some of my lessons learned with taking charge.
Under performing business units tend to have difficulty making decisions. Quite simply, there are too many ways to do any single technical task or effort that solves a problem, and when an organization runs by consensus it is even harder to make a decision. I learned early in my career that giving any organization a choice, even one as simple as between effort A and effort B, was futile; I have spent countless hours in meetings over the risks and ramifications of the two, or more, choices. I found that providing one solution, and only one solution, was the most expeditious way to move an organization forward, all the while keeping plan B in the back of my mind in case my first decision hit a road block. At this point, however, I have had enough experiences and previous failures to pretty much nail what plan A needs to be and how to execute it.
Have a plan and articulate it
A plan means nothing if it makes no sense to anyone else. When I provide the direction to move an organization forward, it comes with a step by step plan that addresses:
- short term, immediate tactical steps, risks and goals addressing the must haves for the current problem(s)
- intermediate and longer term approach to the would like to have goals
Even the short term, immediate tactical steps may require several iterations of different efforts spanning weeks or months, depending on how severe the problem is. Inevitably, regardless of scope, the plan requires actions in both operations and application development. Though sometimes I got lucky and it was only one or the other. But not often.
Start with the basics. Have things like the recommended OS level tunings been applied? If not then that is the first part of the plan.
It should go without saying that any plan should be testable outside of the production environment. However, as applications mature and the user base grows so does the operational IT environment. Test as much as possible and keep production changes down to one change per change window with a tested back out plan. Which brings us to the next topic.
Repeatability (AKA scripting)
To minimize risk, a robust operational IT infrastructure requires the ability to perform tasks over and over again, understanding exactly what the resulting output should be. Whether it is setting a configuration item or deploying an application, we should clearly understand the end result. Scripts developed and proven in test can be promoted to production. I'll note here that with the advent of DevOps this facet of IT operations has become significantly easier and more robust than in the past. In some of the testing I manage internally, very large scale performance testing of 10,000 Liberty servers in a SoftLayer environment, I know that our Gradle scripts will build out the environment from scratch the same way each and every time.
Change is a scary word for a lot of people because it also means risk to the business. This is why change processes should be followed meticulously. Having redundancy and lots of it also reduces risk during a change. If redundancy doesn't exist then it needs to be the first part of the transformation plan.
Operational, infrastructural changes versus application fixes
I have always separated operational changes from application fixes in the same change window. How quickly application fixes can be introduced ultimately depends on the speed with which they can be identified, coded and tested. Sometimes it is an easy code fix, but other times whole architectures or designs need to be re-worked due to poor decision making. The same level of complexity can exist in the IT infrastructure, slowing the speed of change, because developing scripts and testing can take a lot of time. And testing the back out plan can take longer than testing the solution.
Move the organization to be proactive
Under performing organizations typically react to problems. That means the problem(s) may have been impacting the end user's experience for some time. Application monitoring is key to helping an organization get ahead of problems. I once led a business unit from being penalized every quarter for server uptime not meeting SLAs to collecting a bonus every quarter for exceeding the SLAs, all by installing and configuring the right tools to identify problems and notify the right people to rectify them before the end user ever noticed.
One thing I try to leave with each business unit I've transformed is how to innovate. This is how the business unit goes into high achiever mode: mentoring them to think differently about problems and the approaches they take in IT. Encouraging wild ducks, so to speak.
Transforming under performing business units takes as much leadership as technical prowess. I have found that the more prominent the problem (e.g. a complete application outage), the easier it was to troubleshoot and fix. Intermittent issues or glitches, like response time occasionally jumping from 30ms to 560ms, tended to be more difficult, as capturing data (let alone the right data) at the time of the problem can be hard. But that only means more effort needs to be spent on the application monitoring tools in order to flush out the necessary data.
I'm working on a Liberty server (this is the latest beta I downloaded a couple of days ago) and using the installUtility I'm getting the following error.
# bin/installUtility install adminCenter-1.0
Establishing a connection to the configured repositories...
This process might take several minutes to complete.
CWWKF1219E: The IBM WebSphere Liberty Repository cannot be reached. Verify that your computer has network access and firewalls are configured correctly, then try the action again. If the connection still fails, the repository server might be temporarily unavailable.
I then found out about a command to help try and figure out what is wrong
# bin/installUtility find --type=addon --verbose=debug
[6/25/15 10:57:53:093 CDT] Establishing a connection to the configured repositories...
This process might take several minutes to complete.
[6/25/15 10:57:53:125 CDT] Failed to connect to the configured repository:
IBM WebSphere Liberty Repository
[6/25/15 10:57:53:125 CDT] Reason: The connection to the default repository failed with the
following exception: RepositoryBackendIOException: Failed to read
[6/25/15 10:57:53:128 CDT] com.ibm.ws.massive.RepositoryBackendIOException: Failed to read properties file https://public.dhe.ibm.com/ibmdl/export/pub/software/websphere/wasdev/downloads/assetservicelocation.props
Caused by: java.net.SocketException: java.lang.ClassNotFoundException: Cannot find the specified class com.ibm.websphere.ssl.protocol.SSLSocketFactory
Will update when I have more details on why I'm getting the ClassNotFoundException.
Using the Java we supply, instead of the machine's default Java, resolves the issue. A defect has been raised to have the script use the Java we supply instead of the machine's.
[Edited Aug 25 to add
I also needed to update /etc/host.conf to enable hosts file lookup and then add entries for
to /etc/hosts file
The Aug 2015 beta seems to have made a number of fixes to installUtility so if you're on an older beta get the latest.]
A few years ago I blogged about how adding JSESSIONID logging to the access log helps identify which cluster member a user was pinned to. It turns out this also helps troubleshoot another interesting problem.
A WebSphere Application Server administrator noted that one of the JVMs in their cluster was getting far higher session counts than any other JVM in the cluster, so much so that the total number of sessions showed a 3:1 imbalance. We applied JSESSIONID logging and captured all the session ids. Through various Unix utilities (cut, sort, uniq, etc.) we ended up with a prime suspect: one session was calling the /login page 10-20 times per second and had eclipsed every other session by over 10x the number of requests.
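Here is a sketch of the kind of pipeline we used, with awk standing in for cut and a canned sample log (the log format, with JSESSIONID as the last field, is an assumption; adjust the field extraction to match your own access log):

```shell
cat > access_sample.log <<'EOF'
10.0.0.1 - - [01/Jan/2016:10:00:01] "GET /login HTTP/1.1" 200 AAAA1111
10.0.0.2 - - [01/Jan/2016:10:00:02] "GET /home HTTP/1.1" 200 BBBB2222
10.0.0.1 - - [01/Jan/2016:10:00:03] "GET /login HTTP/1.1" 200 AAAA1111
10.0.0.1 - - [01/Jan/2016:10:00:04] "GET /login HTTP/1.1" 200 AAAA1111
EOF
# Requests per session id, busiest first; the outlier jumps to the top.
awk '{print $NF}' access_sample.log | sort | uniq -c | sort -rn | head
```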
Why did we go down this path? We could see through the PMI data that the session manager in WebSphere Application Server was invalidating sessions, so we knew it wasn't a product issue of sessions not being deleted. Also, one JVM in the cluster creating more sessions than the other JVMs is suspicious; I would have expected higher load across the whole cluster. In addition, they had seen the behaviour move around the cluster every few months. That led me to believe this was like a replay attack: someone at some point captured a response with a JSESSIONID and was then using that JSESSIONID over and over again until some event caused it to capture a new one (most likely a failover event as the cluster went through a rolling restart). That behaviour was curious! The fact it was smart enough to realize the HTTP header content had changed, and adapted, was interesting.
So next time you see one or more JVMs with considerably higher session counts than the other JVMs in the same cluster you can use the same troubleshooting methodology to track down who the suspect is. Especially if your application is Internet-facing meaning anyone can start pinging your application.
See this link for the SpecJ performance results that are available.
The WebSphere Application Server Performance Cookbook has been published! I've hinted about this book in previous excerpt postings. Now you can read it in its entirety. Get ready for a long read. The book encapsulates the WAS/Java/Cloud performance knowledge of some of the smartest people in IBM.
A few days ago a few colleagues contacted me about my article on proactive application monitoring. They're building some templates for monitoring applications in the cloud and they had some questions specifically around thresholds for many of the metrics I had listed. For example, one of the questions was around datasource connection pool utilization. Is it reasonable to set thresholds for warnings if the connection pool was 85% utilized and critical if it was 95% utilized? Likewise, similar questions around CPU utilization and would a warning at 75% and critical alerts at 90% be reasonable?
The answer is, (drum roll please) it depends.
No two applications are alike. There are low volume, rarely used applications that may never get above 2% connection pool utilization. Conversely, there are high volume applications where the connection pool can be running at 90-100% utilization. Better metrics to watch (via the PMI metrics) are (a) how many threads had to wait for a connection from the connection pool and (b) how long those threads had to wait. Both of those metrics directly impact the throughput and response time of the application.
Same with CPU utilization. Some organizations like to run their servers hot over 90% utilization because they have spare, passive capacity they can bring online. Others like to run at less than 50% utilization because they want to have spare capacity in an active-active modus operandi.
Setting useful thresholds depends on understanding the organization's Service Level Agreements (SLAs) and the application's Non Functional Requirements (NFRs).
Here is another excerpt from our performance cookbook that will be published in the near future.
Excessive Direct Byte Buffers
Excessive native memory usage by java.nio.DirectByteBuffers is a classic problem with any generational garbage collector such as gencon (which is the default starting in IBM Java 6.26/WAS 8), particularly on 64-bit. DirectByteBuffers (DBBs) (http://docs.oracle.com/javase/6/docs/api/java/nio/ByteBuffer.html) are Java objects that allocate and free native memory. DBBs use a PhantomReference which is essentially a more flexible finalizer and they allow the native memory of the DBB to be freed once there are no longer any live Java references. Finalizers and their ilk are generally not recommended because their cleanup time by the garbage collector is non-deterministic.
This type of problem is particularly bad with generational collectors because the whole purpose of a generational collector is to minimize the collection of the tenured space (ideally never needing to collect it). If a DBB is tenured, because the size of the Java object is very small, it puts little pressure on the tenured heap. Even if the DBB is ready to be garbage collected, the PhantomReference can only become ready during a tenured collection. Here is a description of this problem (which also talks about native classloader objects, but the principle is the same):
If an application relies heavily on short-lived class loaders, and nursery collections can keep up with any other allocated objects, then tenure collections might not happen very frequently. This means that the number of classes and class loaders will continue increasing, which can increase the pressure on native memory... A similar issue can arise with reference objects (for example, subclasses of java.lang.ref.Reference) and objects with finalize() methods. If one of these objects survives long enough to be moved into tenure space before becoming unreachable, it could be a long time before a tenure collection runs and "realizes" that the object is dead. This can become a problem if these objects are holding on to large or scarce native resources. We've dubbed this an "iceberg" object: it takes up a small amount of Java heap, but below the surface lurks a large native resource invisible to the garbage collector. As with real icebergs, the best tactic is to steer clear of the problem wherever possible. Even with one of the other GC policies, there is no guarantee that a finalizable object will be detected as unreachable and have its finalizer run in a timely fashion. If scarce resources are being managed, manually releasing them wherever possible is always the best strategy. (http://www.ibm.com/developerworks/websphere/techjournal/1106_bailey/1106_bailey.html)
Essentially the problem boils down to either:
There are too many DBBs being allocated (or they are too large), and/or
The DBBs are not being cleared up quickly enough.
It is very important to verify that the volume and rate of DBB allocations are expected or optimal. If you would like to determine who is allocating DBBs (problem #1), of what size, and when, you can run a DirectByteBuffer trace. Test the overhead of this trace in a test environment before running in production.
One common cause of excessive DBB allocations is the default WAS WebContainer channelwritetype value of async. In this mode, all writes to servlet response OutputStreams (e.g. static file downloads from the application or servlet/JSP responses) are sent to the network asynchronously. If the network and/or the end-user do not keep up with the rate of network writes, the response bytes are buffered in DBB native memory without limit. Even if the network and end-user do keep up, this behavior may simply create a large volume of DBBs that can build up in the tenured area. You may change channelwritetype to sync to avoid this behavior although servlet performance may suffer, particularly for end-users on WANs.
If you would like to clear up DBBs more often (problem #2), there are a few options:
Specifying MaxDirectMemorySize will force the DBB code to run System.gc() when the sum of outstanding DBB native memory would be more than $bytes. This option may have performance implications. When using this option with IBM Java, ensure that -Xdisableexplicitgc is not used. The optimal value of $bytes should be determined through testing. The larger the value, the more infrequent the System.gcs will be but the longer each tenured collection will be. For example, start with -XX:MaxDirectMemorySize=1024m and gather throughput, response time, and verbosegc garbage collection overhead numbers and compare to a baseline. Double and halve this value and determine which direction is better and then do a binary search for the optimal value.
Explicitly call System.gc. This is generally not recommended. When DBB native memory is freed, the resident process size may not be reduced immediately because small allocations may go onto a malloc free list rather than back to the operating system. So while you may not see an immediate drop in RSS, the free blocks of memory would be available for future allocations so it could help to "stall" the problem. For example, Java Surgery can inject a call to System.gc into a running process: https://www.ibm.com/developerworks/community/groups/service/html/communityview?communityUuid=7d3dc078-131f-404c-8b4d-68b3b9ddd07a
In most cases, something like -XX:MaxDirectMemorySize=1024m (and ensuring -Xdisableexplicitgc is not set) is a reasonable solution to the problem.
A system dump or HPROF dump may be loaded in the IBM Memory Analyzer Tool, and the IBM Extensions for Memory Analyzer DirectByteBuffer plugin may be run to show how much of the DBB native memory is available for garbage collection. For example:
=> Sum DirectByteBuffer capacity available for GC: 1875748912 (1.74 GB)
=> Sum DirectByteBuffer capacity not available for GC: 72416640 (69.06 MB)
There is an experimental technique called Java surgery which uses the Java Late Attach API (http://docs.oracle.com/javase/6/docs/technotes/guides/attach/index.html) to inject a JAR into a running process and then execute various diagnostics: https://www.ibm.com/developerworks/community/groups/service/html/communityview?communityUuid=7d3dc078-131f-404c-8b4d-68b3b9ddd07a
This was designed initially for Windows because it does not usually have a simple way of requesting a thread dump like `kill -3` on Linux. Java Surgery has an option with IBM Java to run the com.ibm.jvm.Dump.JavaDump() API to request a thread dump (Oracle Java does not have an equivalent API, although Java Surgery does generally work on Oracle Java):
$ java -jar surgery.jar -pid 16715 -command JavaDump
Another excerpt from our WebSphere Application Server Performance Cookbook, due for external publication sometime in the near future, on determining the health of a JVM. This may or may not look like the final publication.
"A common question is: how does one determine how efficiently the JVM is performing, and what metrics point to a JVM that is in, or heading toward, distress?
%CPU utilization
Depending on the environment, the number of JVMs, and any redundancy, continuous availability, and/or high availability requirements, the threshold for %CPU utilization varies. For HA/CA, business-critical environments the threshold can be as low as 50% CPU utilization. For non-critical applications the threshold could be as high as 95%. One needs to analyze both the NFRs and SLAs of the application in order to determine appropriate thresholds that indicate a potential health issue with the JVM.
Amount of time spent in GC
This metric, gleaned from the verbose GC or PMI metrics, is a general indicator of how efficiently the application is utilizing memory and how quickly the garbage collector can complete its tasks. The more time spent in GC, the more CPU the application will use, which can in turn degrade application response time. A general rule of thumb is that time spent in GC below 8% is a marker of a healthy application environment. If the time spent in GC goes over 8%, it is probably time either to tune the JVM or to start looking at capacity planning to grow the environment.
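The 8% rule of thumb is simple arithmetic over a sample window. The class and the pause numbers below are illustrative, not from any real verbosegc log; the calculation itself is just total GC pause time divided by wall-clock interval.

```java
public class GcOverhead {
    // Percent of wall-clock time spent in GC over a sample interval,
    // computed from summed verbosegc pause times. Values over ~8%
    // suggest JVM tuning or capacity planning work.
    static double gcOverheadPercent(long gcPauseMillis, long intervalMillis) {
        return 100.0 * gcPauseMillis / intervalMillis;
    }

    public static void main(String[] args) {
        // e.g. 4.5 seconds of GC pauses observed in a 60-second window
        double pct = gcOverheadPercent(4500, 60000);
        System.out.printf("GC overhead: %.1f%% -> %s%n", pct,
                pct > 8.0 ? "investigate" : "healthy");
    }
}
```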
%heap utilization after a full GC
The low water mark after a full GC indicates whether the heap is able to reclaim memory or not. If the low water mark continues to rise over time after full GCs, the application could be the victim of a memory leak. Heap dumps should be able to identify the culprit so the application can be corrected to eliminate the leak. Unfortunately, if the application cannot be fixed, the only way to recover from a memory leak is through a controlled restart of the JVM. In a clustered environment this is generally not a problem if the JVM's users can be quiesced to another JVM before restarting it; otherwise, in-flight transactions will be affected when the JVM is stopped abruptly.
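As a sketch of the trend check described above, the hypothetical helper below flags a steadily rising floor across post-full-GC heap readings. The sample numbers are invented; real values would come from verbosegc or PMI.

```java
public class HeapTrend {
    // Heap-used readings taken immediately after each full GC (the low
    // water mark). A floor that rises on every full GC is a classic
    // memory-leak signal; require a few samples before calling a trend.
    static boolean steadilyRising(long[] usedAfterFullGc) {
        for (int i = 1; i < usedAfterFullGc.length; i++) {
            if (usedAfterFullGc[i] <= usedAfterFullGc[i - 1]) {
                return false; // floor dropped or held: heap is reclaiming
            }
        }
        return usedAfterFullGc.length >= 3;
    }

    public static void main(String[] args) {
        long[] mb = {410, 455, 498, 540, 587}; // hypothetical post-full-GC MB
        System.out.println(steadilyRising(mb) ? "possible leak" : "stable");
    }
}
```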
Application response time
Deteriorating (i.e. increasing) response time is often an indication of poor health.
Once you have determined that the application is not healthy follow the appropriate MustGather and open a PMR with IBM Support."
I don't normally get involved with User Interface (UI) performance, but here is a good article that describes some tips and techniques to make your UI appear snappier. The few times I have been involved with UI performance issues I've used IBM Page Detailer, a free download that also comes with some IBM Research papers and links covering various performance improvements.
If you do any work around Java Server Faces (JSF) you'll find IBM Page Detailer pretty handy.
I've been involved in some performance testing of various application deployment tools that happen to be open source. The one common theme I have run into is buggy software. Apparently open source tools are following the continuous delivery strategy, pushing out new builds frequently. However, testing seems to be sorely lacking. At least once a week some part of my deployment fails because of a recent update. Something that worked last week quit working this week.
In addition, dependencies appear to be tricky for the open source owners. The number one bug has been that a new version has somehow messed up its dependencies and refuses to run.
I would really like to see some comments on how other people manage these bugs. I would be pretty upset if my application deployment ground to a halt because of a buggy release.
It is being reported in the news that there is another OpenSSL bug around the SSL handshake, allowing an attacker to force the handshake to use a less secure encryption method. This is interesting because a couple of months back I was troubleshooting a problem in two supposedly identical prod/test environments: test was negotiating TLSv1.2 but the prod environment was negotiating SSLv3. We eventually solved that problem by configuring the prod environment to use only TLSv1.2, but I am still curious why this was showing up in the prod environment. It may be time to circle back and take a closer look.
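When chasing that kind of prod/test negotiation difference, one quick check is what protocols each JVM will offer by default, independent of the network. The class name below is illustrative; the calls are standard JSSE APIs. If SSLv3 appears on one "identical" environment but not the other, the difference comes from JVM or endpoint configuration rather than the wire.

```java
import javax.net.ssl.SSLContext;
import java.util.Arrays;

public class TlsProtocols {
    // Protocols this JVM will offer by default in the SSL/TLS handshake.
    static String[] enabledProtocols() throws Exception {
        return SSLContext.getDefault().getDefaultSSLParameters().getProtocols();
    }

    public static void main(String[] args) throws Exception {
        // Compare this output between the prod and test JVMs; a stray
        // SSLv3 entry here explains a downgraded negotiation.
        System.out.println(Arrays.toString(enabledProtocols()));
    }
}
```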