Introduction: On-the-fly flexibility
In today's on demand world, flexibility is a key requirement for any platform. IBM® POWER5 and POWER5+ processor-based systems running the IBM AIX 5L™ operating system are equipped with capabilities such as dynamic logical partitioning (DLPAR), IBM Micro-Partitioning technology, and simultaneous multithreading (SMT). These capabilities allow dynamic realignment of platforms, based on enterprise needs. Additional processors can be added on the fly, and the amount of physical memory dedicated to a partition can be changed while the applications keep running. Gone are the days when such configuration changes required a system to be rebooted.
Interestingly, these hardware innovations do not require the software stack to do anything special. First and foremost, there is no negative impact; existing applications continue to run unchanged. For applications that are based on Java™ technologies, the benefit of Java being DLPAR-aware is implicit. You add a processor to a partition, and Java can allocate threads to run on it. When garbage collection (GC) occurs, the parallel phases automatically use the additional processor to speed up the collection process.
But today's enterprise applications need to play an active role in how they take advantage of changes in physical resources, especially as these changes affect their scaling. The concept of low-overhead monitoring APIs has been steadily gaining momentum, and though Java is not yet at a stage where the write-once, run-anywhere approach can be taken for such monitoring, it is now possible to write pure Java classes to query platform-specific features.
This article introduces the Management Dynamic LPAR Extensions that are provided with IBM Software Developer Kit (SDK) for Java 5.0. These extensions allow applications to register themselves in order to be notified when partition resources change. Also, the extensions allow the applications to manipulate the Java heap size, which takes us a step beyond simple monitoring.
This article is targeted toward architects and developers who have either already added virtualization capabilities to their applications or are planning to do so. We start with a quick recap of how this kind of monitoring could be achieved in the past. Then, the extensions are introduced, followed by a discussion of how and why you might use them in your applications. We hope that the article gives you enough background and reason to start adding these capabilities to your software today.
Background: The evolution of virtualization
This section of the article talks about how virtualization evolved in the AIX® operating environment and how IBM middleware, including the IBM implementation of Java 5.0, has evolved accordingly.
Virtualization in AIX 5L
The IBM System p5™ (formerly pSeries®) line of systems first introduced the server partitioning facility with AIX 5L Version 5.1. Server partitioning allows dividing a system into isolated computing domains called logical partitions (LPARs). AIX 5L Version 5.2 went a step further by permitting processors, memory, and I/O slots to be added to or removed from a running partition. This feature is referred to as DLPAR. The latest IBM AIX release, AIX 5L Version 5.3, further enhances the partitioning feature by allowing partial processor assignment and sharing of I/O devices among partitions.
These capabilities are enabled by a firmware module called the IBM POWER Hypervisor. As a layered architecture, POWER Hypervisor is situated between the operating system and the hardware, essentially presenting a virtual view of the hardware to the operating system. More information on POWER Hypervisor and AIX virtualization can be found in the Resources section of the article.
Software and middleware virtualization
As mentioned before, the hardware innovations are available to applications without any need for code or configuration changes. Most applications are able to leverage DLPAR resources without any modification. For example, when memory or processors are added to a partition, the operating system scheduler can use this excess capacity to dispatch more threads. But enterprise applications, including middleware, can make intelligent decisions based on the changed configuration. This leads to a true on demand solution: one that can adapt to changing needs.
Most components of the IBM enterprise middleware are capable of sensing and reacting to resource changes at run time. As an example, IBM DB2® Information Management software can detect processor and memory alteration and dynamically change query parallelism and bufferpool size, respectively. Similarly, the IBM WebSphere® Extended Deployment product changes the way it balances a load under new resource availability. Details about the virtualization capabilities of these products are available in the Resources section of the article.
Java and Java-based applications
Java technology is also part of the IBM middleware stack that can react to resource changes. The simplest example is the addition of a processor to a DLPAR. Java versions 1.4.2 and later launch an additional garbage collection helper thread, so that subsequent GC cycles are faster, a result of the highly parallel nature of the GC implementation with Java on the IBM System p™ platform. Also, for multithreaded applications, existing threads can immediately be dispatched to the newly added processors.
Although Java itself adapts to the changed resources, until Java 5, it was not straightforward for a Java application to become aware of the changed environment. The options were either to write a signal handler and deal with the notification below the Java layer, or to poll for changes. Though there were several applications that used one or the other mechanism, a clean Java API was sorely missed. Starting with IBM SDK for Java 5.0, new capabilities have been added; they are collectively called the Management Dynamic Logical Partitioning Extensions. As you will soon see, detecting changes in DLPAR configuration is now easier, cleaner, and lots of fun.
Introducing the extensions
Figure 1 shows the Management Dynamic Logical Partitioning Extensions. You can refer to the MXBean and java.lang.management package documentation, as well as the Javadoc for the classes below, to learn more. This article discusses only the information relevant to the DLPAR extensions.
Figure 1. Management Dynamic Logical Partitioning Extensions
Note that not all attributes and operations are shown. Also, Figure 1 uses the letter C in a green circle to denote classes, and the letter I in a blue circle to denote interfaces. Though the figure also shows two Java *Impl classes, they should not be accessed directly; they are part of the figure only to illustrate the relationship between the NotificationEmitter classes. The three types of notifications being delivered, *NotificationInfo, are simple classes and have been omitted for brevity.
As the figure depicts, the extensions can be divided into two categories: OperatingSystemMXBean and MemoryMXBean. Both of these managed beans encapsulate vendor-specific and platform-specific capabilities, but they are obtained by calling the vendor-neutral API java.lang.management.ManagementFactory.get*MXBean(). Both beans also implement the NotificationEmitter interface, though for our purposes, only the OperatingSystemMXBean notifications are useful. The example provided with this article, VirtualizationDemo.java, demonstrates how to instantiate and use these beans.
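As a minimal sketch of the lookup step, the beans can be obtained through the vendor-neutral factory calls mentioned above. This example uses only standard java.lang.management types; on the IBM SDK for Java 5.0, the returned objects additionally carry the DLPAR extensions. The class name BeanLookup is ours, for illustration only:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.OperatingSystemMXBean;

public class BeanLookup {
    public static void main(String[] args) {
        // Vendor-neutral lookups; on the IBM SDK, the objects returned here
        // also implement the DLPAR extensions and NotificationEmitter.
        OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
        MemoryMXBean mem = ManagementFactory.getMemoryMXBean();

        System.out.println("Available processors: " + os.getAvailableProcessors());
        System.out.println("Heap in use: "
                + mem.getHeapMemoryUsage().getUsed() + " bytes");
    }
}
```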
Let's take a closer look at what these managed beans, also called MXBeans, provide.
OperatingSystemMXBean and NotificationEmitter
OperatingSystemMXBean provides information about three key attributes of any LPAR: the number of available processors, the processing capacity, and the amount of physical memory. The previous section briefly touched upon the significance of each of these attributes and how they can be dynamically altered. OperatingSystemMXBean allows users to query the current attribute values.
More significantly, the NotificationEmitter interface allows a program to be notified when any of the above attributes changes. This enables a program to tailor its behavior based on the new capabilities.
The advantage of NotificationEmitter is that the application no longer needs to poll or deal with low-level details in order to be notified. It also becomes quite easy to capture the logic of handling resource changes in a central place. The example program accompanying this article is quite trivial, but it demonstrates one of the techniques that can be used for reacting to resource changes: a callback can be passed to the emitter, which invokes it whenever a notification is delivered.
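A sketch of such a callback registration, using only the standard javax.management types. The ReconfigListener class name is ours, and the instanceof guard stands in for the cast that an IBM SDK application would perform:

```java
import java.lang.management.ManagementFactory;
import javax.management.Notification;
import javax.management.NotificationEmitter;
import javax.management.NotificationListener;

public class ReconfigListener implements NotificationListener {
    volatile String lastType; // most recent reconfiguration event type

    public void handleNotification(Notification n, Object handback) {
        lastType = n.getType();
        System.out.println("Reconfiguration event: " + lastType);
    }

    public static void main(String[] args) {
        ReconfigListener listener = new ReconfigListener();
        Object osBean = ManagementFactory.getOperatingSystemMXBean();
        // On the IBM SDK for Java 5.0, this bean implements NotificationEmitter.
        if (osBean instanceof NotificationEmitter) {
            ((NotificationEmitter) osBean)
                    .addNotificationListener(listener, null, null);
        }
    }
}
```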
MemoryMXBean and Java heap APIs
At first glance, at least one of the attributes discussed before seems to be ineffectual. As Java heap size is defined during startup, how will changes in the amount of total physical memory be of significance to a Java program? This is where MemoryMXBean and the new Java heap-size parameters come in.
First, a quick recap of how Java heap size is controlled. Normally, an application can specify one or both of two limits for the Java heap during startup. The first is the Initial Heap Size, specified with -Xms. This is the minimum heap size that the Java application will always have. The second parameter is the Maximum Heap Size, specified with -Xmx. As the name suggests, this is the upper limit for the heap. Between these two limits, the heap can be expanded or shrunk based on the application heap usage, but heap management is done by the Java Virtual Machine (JVM), not by the application.
Starting with Java 5, the Maximum Heap Size value (-Xmx) is now considered a hard limit, one that cannot be exceeded, as it might have been before. However, a new, soft limit is introduced, which can be specified using the command-line parameter -Xsoftmx. This is akin to the soft and hard ulimit values used in UNIX® and Linux® operating systems. Like its predecessor, -Xsoftmx places a limit on the size to which the Java heap can grow. However, unlike -Xmx, the value of -Xsoftmx can be modified at run time, using the setMaxHeapSize() method provided by MemoryMXBean. The value of -Xmx is now the limit that cannot be exceeded by a call to setMaxHeapSize(), and it cannot be changed during the lifetime of the process. The meaning of the startup heap size, specified by -Xms, remains as before; it is the least size to which the Java heap can shrink, and it cannot be modified during the process lifetime.
The advantage of such an approach is illustrated through the provided example, VirtualizationDemo.java. We force the soft limit to remain at one fourth of the physical memory available to the partition. For example, if the partition contains 4GB of memory, the soft limit is set to 1GB. If the physical memory then falls to 2GB, the soft limit is changed to 512MB.
Note that if you try to reduce the soft limit, it is taken as a hint by the GC. If the heap is full and cannot be shrunk to the level you want, the hint might be ignored. Similarly, if the soft limit is changed to a higher value, the change is noticeable only if GC heuristics require the heap to be expanded.
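The quarter-of-physical-memory policy used by the demo can be sketched as follows. The SoftLimitPolicy class and its targetSoftLimit() helper are hypothetical names of ours; setMaxHeapSize() is the extension method described above, looked up reflectively here so the sketch compiles and degrades gracefully on SDKs that lack the extension:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.reflect.Method;

public class SoftLimitPolicy {
    // Policy from the demo: keep the soft limit at one fourth of physical memory.
    static long targetSoftLimit(long physicalMemoryBytes) {
        return physicalMemoryBytes / 4;
    }

    // Apply the new limit via setMaxHeapSize(long) when the running SDK
    // provides it; returns false on SDKs without the DLPAR extension.
    static boolean applySoftLimit(long physicalMemoryBytes) {
        MemoryMXBean bean = ManagementFactory.getMemoryMXBean();
        try {
            Method m = bean.getClass().getMethod("setMaxHeapSize", long.class);
            m.invoke(bean, targetSoftLimit(physicalMemoryBytes));
            return true;
        } catch (Exception e) {
            return false; // extension not available on this SDK
        }
    }
}
```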
Virtualization in the real world
Given these simple and elegant APIs, it is now trivially easy to detect and react to any changes in the LPAR configuration. The next logical question is: Why would you care? This section talks about situations where enterprise applications can benefit from changes to specific resources.
Think processor count, think parallelism. For most applications that use listener threads or object caches, it is possible to tweak the size of the thread pool during startup. With the new APIs, this size can easily become dynamically reconfigurable. The processor count is sometimes referred to as the virtual processor count. In either case, it is the number of schedulable entities to which AIX 5L can dispatch threads, similar to the number of processors in a traditional, nonpartitioned environment.
For example, a Web server can start 100 additional threads per newly detected processor, immediately allowing the server to scale in terms of incoming connections. The system administrator can then monitor the usage of the Web server and divert additional processors to the Web servers when more traffic is detected. This can just as easily be automated, based on the detected traffic patterns or time of day.
Note that the Web server must react to the changed processor count dynamically. If the Web server continues to run with the same number of threads in the pool, the connections themselves are potentially faster, but the connection count is still restricted to the value specified during startup. If you try to specify a large connection count during startup, the server can be overloaded by excessive connections when an insufficient number of processors is available. A well-written Web server scales to more connections when the number of processors increases and scales down when processors are removed from the partition. This approach benefits most applications that use pooling of scalable resources.
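The 100-threads-per-processor policy above can be sketched with a standard ThreadPoolExecutor. The ScalablePool class and its resize() hook are illustrative assumptions of ours; in a real server, resize() would be called from the processor-change notification handler:

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class ScalablePool {
    static final int THREADS_PER_PROCESSOR = 100; // policy from the example above

    final ThreadPoolExecutor pool;

    ScalablePool(int processors) {
        int size = THREADS_PER_PROCESSOR * processors;
        pool = new ThreadPoolExecutor(size, size, 60, TimeUnit.SECONDS,
                new LinkedBlockingQueue<Runnable>());
    }

    // Invoked when the available-processors notification arrives.
    void resize(int processors) {
        int size = THREADS_PER_PROCESSOR * processors;
        // Order the two calls so that max >= core holds at every step.
        if (size > pool.getMaximumPoolSize()) {
            pool.setMaximumPoolSize(size);
            pool.setCorePoolSize(size);
        } else {
            pool.setCorePoolSize(size);
            pool.setMaximumPoolSize(size);
        }
    }
}
```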
The processing capacity dictates how much processing power is at your disposal. For CPU-intensive applications, the processing capacity can be used as a hint for prioritizing tasks based on available capacity.
A simple example can be a batch job (or more accurately, a task that can be scheduled to run in the background). Applications that contain both batch jobs and a user interface can decide to suspend batch jobs if the processing capacity falls below a threshold to ensure that the user experience is not adversely affected. The batch jobs and the user interfaces can be controlled by different processes, but a monitoring application can easily automate the scheduling of these processes based on changes to the processing capacity.
The key here is that the application reacts to the changed capacity without manual intervention. You can have two different partitions -- one running the batch job and the other running the user interface -- and through Micro-Partitioning technology, you can decide how much CPU capacity to grant to each of these partitions. With the monitoring capabilities, you now also have the possibility of deploying both tasks (batch job and user interface) on a single partition, and these in turn can react to resource changes in an intelligent manner.
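A minimal sketch of such a capacity-driven scheduler follows. The BatchScheduler class and its 100 percent threshold are illustrative assumptions, with capacity expressed in the percent units the demo prints (90 means 0.90 of a processor):

```java
public class BatchScheduler {
    // Hypothetical threshold: suspend batch work below one full processor
    // of entitled capacity (100 in the percent units the demo prints).
    static final int CAPACITY_THRESHOLD = 100;

    private volatile boolean batchSuspended = false;

    // Called from the processing-capacity-change notification handler.
    void onCapacityChange(int capacityPercent) {
        batchSuspended = capacityPercent < CAPACITY_THRESHOLD;
    }

    boolean isBatchSuspended() {
        return batchSuspended;
    }
}
```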
A previous section (MemoryMXBean and Java heap APIs) discussed how the Java heap size can be altered at run time through the extensions. In the world of static heaps, it is quite common for application testers to do sizing and arrive at a maximum heap size that guarantees against out-of-memory (OOM) exceptions. Applications can also monitor heap occupancy at run time and actively stop the growth of the heap (for example, by rejecting new requests) when a threshold is reached. This is a prudent approach; the alternative, growing until the application crashes, is not appreciated much in the real world, as you are no doubt aware.
In the on demand world, the soft limit replaces the maximum heap size for two reasons. First, it ensures that the application never pages the Java heap; if the application starts paging the heap, performance is severely impacted. Second, it allows the application to monitor and guard against unbounded growth by dynamically limiting the heap size. This is especially useful during testing, but it can also be added as protection against memory leaks.
Think of a scenario where an application is running with -Xmx3g (that is, a Java heap with a maximum size of 3GB) on a partition that has only 2GB of physical memory. As the Java heap expands, it eventually exceeds 2GB, and subsequent garbage collection cycles cause paging. The performance impact is worsened by the fact that larger heap occupancy results in longer GC times.
Now, contrast this with an application that uses the extensions API to cap the soft limit at 75 percent of available physical memory. Paging is effectively ruled out, and as the application is actively controlling the soft limit, it can also control the size to which the heap can grow. If an OOM occurs even after the application limits the growth, it clearly points to a memory leak. Note that the OOM would have occurred anyway if a memory leak was present, but given that Java spends a lot of time doing GC just before it eventually gives up and throws an OOM, the last thing you need is paging when an OOM is about to occur.
Walkthrough for the demo
The code provided with this article, VirtualizationDemo.java (see the Download section), demonstrates how the Management Dynamic LPAR Extensions API works. The code itself is quite trivial; the showStatus() method in VirtualizationDemo.java is called, which registers the callbacks for both the operating system and memory notifications. When either of these notifications is received, the handleNotification method of the OSBeanListener class gets invoked. Based on the type of notification, you either print the modified data or take additional action.
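A listener in the spirit of the demo's OSBeanListener might dispatch on the notification type strings that the demo prints. The DlparListener class is our sketch, not the demo's actual code; the three type strings are those shown in the demo output, and on a processor change we simply re-query the platform rather than assuming a payload format:

```java
import javax.management.Notification;
import javax.management.NotificationListener;

public class DlparListener implements NotificationListener {
    volatile int processors = Runtime.getRuntime().availableProcessors();
    volatile String lastEvent = "";

    public void handleNotification(Notification n, Object handback) {
        String type = n.getType();
        if ("com.ibm.management.available.processors.change".equals(type)) {
            // Re-query the platform for the new processor count.
            processors = Runtime.getRuntime().availableProcessors();
            lastEvent = "processors";
        } else if ("com.ibm.management.processing.capacity.change".equals(type)) {
            lastEvent = "capacity"; // e.g., reprioritize batch work
        } else if ("com.ibm.management.total.physical.memory.change".equals(type)) {
            lastEvent = "memory";   // e.g., adjust the heap soft limit
        }
    }
}
```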
When compiled with IBM SDK for Java 5.0 (no arguments are needed for the compilation) and started as shown below, the demo prints a banner with the current settings:
$ java com.ibm.demo.VirtualizationDemo
---------------------------------------------
Available Processors: 8
Processing Capacity: 90
Total Physical Memory: 29.9 GB
Java Heap: 4.00 MB (Minimum), 64.00 MB (Soft Limit), 64.00 MB (Maximum)
---------------------------------------------
Hit Enter anytime to terminate the demo.
>>> On standby for dynamic reconfiguration event ...
Is the information being reported accurately? You can verify that with the AIX 5L
lparstat command, as shown here:
$ lparstat
System configuration: type=Shared mode=Uncapped smt=On lcpu=8 mem=30656 psize=2 ent=0.90

%user  %sys  %wait  %idle  physc  %entc  lbusy       vcsw  phint
-----  ----  -----  -----  -----  -----  -----  ---------  -----
  0.0   0.0    0.0  100.0   0.00    0.1    0.0  166830695   1252
Available Processors (8) in the VirtualizationDemo output corresponds to lcpu=8, Processing Capacity (90%) is related to ent=0.90, and Total Physical Memory (29.9 GB) is equivalent to mem=30656 in the lparstat output. Also note that SMT is currently on, as shown by smt=On.
After printing the banner, the demo prints "On standby for dynamic reconfiguration event". But it is important to understand that the program is not polling. The demo simply calls System.in.read(), which returns for any input on stdin. Real-world applications do not suspend or poll; the notification happens asynchronously.
Now, let’s examine the demo behavior when the three resource types are altered.
Processor count change
Let’s remove one virtual processor by using the IBM Hardware Management Console (HMC) user interface. The demo prints the following:
...
>>> On standby for dynamic reconfiguration event ...
=====================================================
=== Dynamic Reconfiguration Notification received ===
=====================================================
Notification Type: com.ibm.management.available.processors.change
Time : Tue Mar 28 19:11:37 CST 2006
Sequence Number : 0
New Available Processors: 7
>>> On standby for dynamic reconfiguration event ...
=====================================================
=== Dynamic Reconfiguration Notification received ===
=====================================================
Notification Type: com.ibm.management.available.processors.change
Time : Tue Mar 28 19:12:11 CST 2006
Sequence Number : 1
New Available Processors: 6
...
There are two processor change notifications. But you only removed one processor, so why is an additional notification being received? The reason is that SMT is currently switched on. When SMT is enabled, AIX 5L sees two logical processors for each virtual processor. Therefore, when one virtual processor is removed, it translates into a removal of two logical processors. If SMT is off, only one removal notification is raised. Information about the SMT feature is available in the Resources section.
Processing capacity change
Now, let’s add 0.65 (or 65 percent) processing capacity. Processing capacity is also called entitled processor capacity. You'll see the following result:
...
>>> On standby for dynamic reconfiguration event ...
=====================================================
=== Dynamic Reconfiguration Notification received ===
=====================================================
Notification Type: com.ibm.management.processing.capacity.change
Time : Tue Mar 28 19:31:08 CST 2006
Sequence Number : 2
New Processing Capacity: 155%
...
Note that previously, we had 0.90 entitled processor capacity; adding 0.65 makes it 1.55 entitled capacity. You can add or remove entitled processor capacity in increments of .10 of a processor.
Physical memory change
Along with the notification for physical memory change, let’s also examine how the heap soft limit can be modified at run time. To see the new heap features in action, we start the demo with an initial heap of 256MB and a maximum heap size of 3GB. We are not explicitly specifying the soft limit for Java heap, so it will default to the maximum heap size. The command-line parameters, as well as the resulting banner, look as follows:
$ java -ms256m -mx3g com.ibm.demo.VirtualizationDemo
---------------------------------------------
Available Processors: 6
Processing Capacity: 110
Total Physical Memory: 29.06 GB
Java Heap: 256.00 MB (Minimum), 3.00 GB (Soft Limit), 3.00 GB (Maximum)
---------------------------------------------
Hit Enter anytime to terminate the demo.
>>> On standby for dynamic reconfiguration event ...
=====================================================
=== Dynamic Reconfiguration Notification received ===
=====================================================
Notification Type: com.ibm.management.total.physical.memory.change
Time : Wed Apr 19 02:15:10 CDT 2006
Sequence Number : 0
New Physical Memory: 3.06 GB
Changing maximum heap size to 784.00 MB
Java Heap: 256.00 MB (Minimum), 784.00 MB (Soft Limit), 3.00 GB (Maximum)
>>> On standby for dynamic reconfiguration event ...
The partition initially had more than 29GB of physical memory available. While the demo was running, we removed 26GB of physical memory from the partition. The demo not only detected that the physical memory changed, but it reacted to the change by modifying the soft heap size appropriately. The behavior here is explained in detail in the MemoryMXBean and Java Heap APIs section.
This illustrates how easy it is to detect and react to changes in partition configuration. By making simple changes to your application infrastructure, all of these virtualization capabilities can be exploited.
Summary: Virtualization is real
As hardware capabilities increase, applications are becoming smarter in their ability to identify and exploit these new capabilities. Until now, attempts to exploit the platform features required system-level code, but with the introduction of the Management Dynamic LPAR Extensions API, the capabilities can be captured by any Java application.
Virtualization is no longer just a concept. With the support added by IBM in its middleware and platform, the on demand world is very much here. This is the edge that the current generation of applications needs against the competition.
We hope that this article has illustrated how easy it is to take your software to the next level.
Resources
- These Web sites provide useful references to supplement the information contained in this document:
- IBM System i5 Information Center
- IBM System p5 Information Center
- IBM Publications Center
- IBM Redbooks: Look for the title Advanced POWER Virtualization on IBM p5 Servers: Introduction and Basic Configuration (SG24-7940).
- The IBM 32-bit SDK for AIX, Java 2 Technology Edition, Version 5 User Guide: This document briefly mentions DLPAR support; also see the documentation accompanying the SDK for more information.
- "DB2 and Dynamic Logical Partitioning" (developerWorks, February 2005): Read this article to learn more about the dynamic behavior of DB2 Version 8.1 when AIX 5L memory and CPU resources change.
- Java Specification Request (JSR) 174, "Monitoring and Management Specification for the Java Virtual Machine": A specification of APIs for monitoring and management of the Java virtual machine.
- Overview of Monitoring and Management: This site features support for Java development and a description of MXBeans.