Best practices for Java and IBM WebSphere Application Server (WAS) on IBM POWER9

An IBM Cloud Private case study


In this article, we discuss best practices for achieving the best performance from applications running in the Liberty profile of IBM® WebSphere® Application Server (WAS) on IBM Power® System S9xx and L922 systems (based on the recent IBM POWER9™ processor technology). These best practices should be applicable to most Java™ applications, even those running outside WAS. We use the deployment of a particular application, Acme Air, running in IBM Cloud Private as a case study to demonstrate the benefits and the application of these best practices. For completeness, and to reflect the growing importance of IBM Cloud Private, we also discuss some techniques we used to tune the IBM Cloud Private environment for POWER9.

Applications that run on optimized hardware can effectively utilize the underlying resources of the system. As an example, microservice-based applications built on WebSphere Liberty, which is included as part of IBM Cloud Private, delivered 1.86 times the per-core performance, 43% lower solution costs, and 1.66 times better price-performance on IBM Power L922 compared to Intel® Xeon® Gold 6130.

Using microservices in cloud-native applications has several benefits, especially when running on optimized hardware. We demonstrate those benefits through several techniques applied on POWER9 hardware, and we discuss the best practices we employed when running a Java microservices application on POWER9. These techniques can help you get the maximum performance out of Java microservices running on IBM Cloud Private.

General best practices for WAS Liberty and Java

This section provides a set of application performance guidelines for Java- and Liberty-based workloads. We will refer to Acme Air as a case study in their usage.

Tuning core SMT level for performance

Transactional Java-based applications typically benefit from multiple threads to produce higher throughput. The Acme Air workload is no exception: it is an online flight reservation system that must serve a large number of web API calls (a measure of transactions or throughput) per day.

Each processor core in the Power S9xx and L922 servers supports up to eight simultaneous multithreading (SMT) threads in hardware (AC922 cores support up to four). On Linux, each SMT thread is represented as a virtual processor (vCPU), whereas on IBM AIX®, each is represented as a logical processor. In other words, each POWER9 processor core supports running up to eight vCPUs on Linux or eight logical processors on AIX. Regardless of the terminology, the core SMT level is an option that customers can choose at system boot or change dynamically on an active system. The total number of virtual or logical processors in a system or partition depends on the core SMT level chosen. The following table shows the relationships:

Core SMT level (mode) | Number of SMT threads | Number of vCPUs or logical CPUs per core
ST (single thread)    | 1                     | 1
SMT2                  | 2                     | 2
SMT4                  | 4                     | 4
SMT8                  | 8                     | 8

For example, a system with four processor cores running in the SMT4 mode will have 16 virtual or logical processors while one with the same number of processor cores running in the SMT8 mode will have 32 virtual or logical processors.

Large workloads using many threads on many-core systems face extra challenges with respect to concurrency and scaling. In such cases, steps can be taken to decrease contention on shared resources and reduce overhead. However, for Java workloads on POWER9, we strongly recommend SMT8 mode as the default for running the system. We have evaluated a substantial set of workloads to identify the performance benefit going from the SMT4 mode to the SMT8 mode on the system. The following table shows some of them:

Workload                  | SMT8 performance improvement over SMT4 baseline
SPECjbb2015 max-jOPS      | 24.5%
SPECjbb2015 critical-jOPS | 37.6%
DayTrader7 throughput     | 35%

More details of DayTrader7 workload throughput on a six-core partition of a Power S924 server are shown in Figure 1.

Figure 1. DayTrader7 performance

Most applications do benefit from SMT, but some do not scale with an increased number of vCPUs or logical CPUs on an SMT-enabled system. One way to address such a scalability issue is to change to a lower SMT mode with fewer vCPUs or logical CPUs. This can usually be done dynamically without rebooting, and there are generally few concerns about adverse performance effects from the resulting change in system topology.
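On Linux, the SMT mode can be inspected and changed on the fly with the ppc64_cpu utility from the powerpc-utils package. A minimal sketch of lowering the mode:

# Show the current SMT mode
ppc64_cpu --smt
# Drop to SMT4 dynamically; the extra vCPUs go offline
ppc64_cpu --smt=4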

On the other hand, when the SMT level (mode) is increased, we strongly recommend doing it during a maintenance window or reboot rather than dynamically, unless you are aware of the following possible side effects and have taken the necessary care to address them:

  • Certain Linux kernel levels cannot handle the increased number of vCPUs gracefully. We have observed kernel crashes (rare though they are), or newly added vCPUs that are never actually used (one way to verify is shown in the sketch after this list).
  • An Ubuntu cgroup hotplug issue can result in newly added vCPUs not being used by containers or KVM guests.
  • AIX does not number the added logical CPUs contiguously by core. For example, if core 0 has logical CPUs 0-3 in SMT4 mode, after a dynamic change to SMT8 it has logical CPUs 0-3 and 64-67 instead of the usually expected 0-7. This has subtle performance implications if you are using a resource set or binding to a specific set of logical CPUs.
  • If the number of memory pools in use changed, the distribution of different types of pages might become unbalanced among the pools, possibly leading to performance degradation.
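On Linux, one way to verify that newly added vCPUs actually came online, and to see how they are numbered per core, is ppc64_cpu --info, which lists each core's hardware threads (an asterisk marks an online thread):

ppc64_cpu --info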

Transparent huge pages

You can enable large page support on systems that support it by starting Java with the -Xlp option. On certain processors, the JVM now starts with large pages enabled by default. Large page usage is primarily intended to improve performance for applications that allocate a great deal of memory and frequently access that memory. The improvement comes from a reduced number of misses in the translation lookaside buffer (TLB): with large pages, each TLB entry maps a larger range of virtual storage. Large page support must be available and enabled in the kernel for Java to use large pages.

Refer to Configuring large page memory allocation for more details.
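As an illustrative sketch of explicit large page setup on Linux (the page count is arbitrary and app.jar is a hypothetical application):

# Reserve explicit huge pages in the kernel, then verify in /proc/meminfo
echo 512 > /proc/sys/vm/nr_hugepages
grep Huge /proc/meminfo
# Start the JVM with large pages enabled
java -Xlp -jar app.jar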

Another option is to run with transparent huge pages (THP) enabled in the kernel, and in this case, the kernel uses large pages even though the user has not configured large pages explicitly. Transparent huge pages generally improve the throughput performance of Java applications, but it is possible that for certain non-Java applications the feature is not beneficial, or even harmful in terms of performance. Because this kernel feature can only be enabled system-wide, care must be taken to set the transparent huge pages switch to "madvise" instead of enabling unconditionally (using "always" for the huge pages switch) for all applications on the system. This allows the kernel to selectively use transparent huge pages only in beneficial cases.
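A quick sketch of checking and setting the THP policy on Linux (the currently active value appears in brackets):

cat /sys/kernel/mm/transparent_hugepage/enabled
# Restrict THP to applications that request it through madvise()
echo madvise > /sys/kernel/mm/transparent_hugepage/enabled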

Figure 2 shows the number of page faults without large pages and with large pages (using them only in Liberty with the -Xlp option). There is a big drop in the number of page faults when large pages are enabled on POWER9.

Figure 2. Page faults

Liberty thread pool tuning

One of the many consequences of running on the cloud is that there are usually a greater number of layers involved when compared with running on a bare metal machine. The layers could refer to the virtualization of different resources (processor, storage, and so on), or they could simply be that completing a task may involve more network hops across a large farm of cloud machines, because the location of the machine having the resource (say, the database) is more uncertain than it is in a more controlled on-premises environment. Regardless, the presence of more layers invariably leads to performance overheads and so the latency associated with each task could be significantly higher when running on the cloud. Depending on application design, higher latency environments can require many more application threads to fully use the available CPU resources, as threads may spend time blocked on remote task execution.

Starting in Liberty 18.0.0.1, the default thread pool autonomics were enhanced to perform better in cloud (high-latency) scenarios, removing the need to manually tune the Liberty thread pool settings. You can find many more details of these changes for better out-of-the-box performance in cloud scenarios at: https://developer.ibm.com/wasdev/docs/was-liberty-threading-and-why-you-probably-dont-need-to-tune-it/.

If customers have significant variety in their workloads and deployment environments, the latencies those workloads experience can vary widely, making manual tuning in all cases a challenge. This is the main reason we recommend the default Liberty thread pool autonomics: they are designed to adapt transparently, ensuring that customers do not suffer sub-optimal performance because manual tuning that worked well in one case did not work well in another. If a customer's objective is to extract every last percentage point of performance and they are willing to invest the time to manually tune each deployment, their results will likely be at least as good as (and possibly better than) those achieved by the thread pool autonomics. But we consider the cost of such tuning significant enough to be impractical for most customers; in those cases, it is better to rely on the autonomics to get nearly optimal performance in all deployments at minimal operational cost.

Note that the Liberty thread pool size is unrelated to the number of garbage collection (GC) threads employed by the JVM. Because GC is a stop-the-world process, the JVM applies as many threads as it considers optimal (usually the number of logical CPUs) to parallelize GC activities and shorten the GC pause duration.

IBM Cloud Private application details

The IBM Cloud Private infrastructure shown in Table 1 is configured for running the Acme Air workload.

Table 1. IBM Cloud Private infrastructure
Node type    | OS/Kernel                                     | Hardware
Management   | RHEL-7.3 (Maipo), 3.10.0-514.el7.ppc64le      | 16 cores, IBM 8286-42A at 3.8 GHz
Master/Proxy | RHEL-7.3 (Maipo), 3.10.0-514.el7.ppc64le      | 20 cores, PowerNV 8001-22C at 3.4 GHz
Worker 2     | POWER9 PowerVM LPAR, 3.10.0-693.el7.ppc64le   | 8 cores, IBM 9008-22L at 2.8 GHz
Worker 3     | Skylake KVM guest, 3.10.0-693.21.1.el7.x86_64 | 16 cores, Intel Xeon Gold 6140 at 2.3 GHz

We used the following software stack to deploy the microservices:

  • WebSphere Application Server 2018.2.0.0 (wlp-1.0.20.20180208-0700)
  • java.version = 1.8.0_161
  • java.runtime = Java SE Runtime Environment (8.0.5.10 - pxl6480sr5fp10-20180214_01 (SR5 FP10))
  • OS = Linux (3.10.0-693.el7.ppc64le; ppc64le) (en_US)

At a high level, the workload is made up of five Java-based microservices and three MongoDB databases. These services are based on WAS Liberty Docker images. Figure 3 shows a view of the microservice interactions within the IBM Cloud Private environment, where we used JMeter as the workload driver; it sends transactions through a proxy server that is part of the IBM Cloud Private cluster. To saturate the CPU on the worker node, the microservices were scaled up to a total of 20 JVM instances (four JVM instances per microservice; see the scaling sketch below).

Figure 3. Logic of Acme Air

You can find more details about the Acme Air workload at:
https://github.com/blueperf
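To reach the 20-instance configuration, each microservice deployment can be scaled to four replicas with kubectl. A sketch, using acmeair-bookingservice as an illustrative deployment name:

# Scale one of the Acme Air microservices to four JVM instances
kubectl scale deployment acmeair-bookingservice --replicas=4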

Tunings for the IBM Cloud Private application

The following sections describe the tunings we used for the IBM Cloud Private application.

Network adapter tuning

The mpstat reports captured during the workload run showed that all inbound traffic was arriving through a single interrupt request (IRQ) queue and being routed to a single CPU, driving it to 100% utilization. Figure 4 and Figure 5 show a single CPU spending 99% of its time in SoftIRQ processing for the interrupts of the network adapter used for data communication. This confirms that the NIC was delivering all of its interrupts to one receive (RX) queue instead of distributing them across the available RX queues.

Figure 4. mpstat report
Figure 5. nmon report

The NIC was handling all the interrupts in a single RX queue, and whichever CPU happened to be serving that queue was overwhelmed (99% busy). In case the RX queue issue was somehow related to an intervening device, we first moved the adapter to a different PCI slot so that it would connect directly to a CPU socket rather than being switched through an intermediate device. Although that change is beneficial for network communication in general, it did not resolve the high SoftIRQ load. To resolve the issue, we manually made all cores (0-63) available to all of the adapter's interrupts. For example, if the adapter has eight queues and the first queue's affinity (/proc/irq/72/smp_affinity_list) is set to a single core, use the following command to allow that queue to use all the cores:
echo 0-63 > /proc/irq/72/smp_affinity_list

This change allowed us to distribute the SoftIRQ load across all available CPU cores.
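To apply the same affinity to every queue of the adapter in one pass, you can loop over the adapter's IRQ numbers. A sketch, assuming the interface is named eth0 (substitute the adapter name shown in /proc/interrupts):

# Make all 64 CPUs (0-63) eligible to service each of the adapter's IRQs
for irq in $(awk -F: '/eth0/ {gsub(/ /,"",$1); print $1}' /proc/interrupts); do
    echo 0-63 > /proc/irq/$irq/smp_affinity_list
done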

Figure 6 shows mpstat after the change.

Figure 6. IRQ affinity

Hardware prefetcher

The Data Stream Control Register (DSCR) controls the aggressiveness of hardware memory prefetching for loads and stores. For sequential data access patterns, hardware prefetching can improve performance by reducing the impact of cache-miss latency: it prefetches data cache lines from L2, L3, and main memory into the L1 data cache for quick access. Acme Air performance improves slightly (1-2%) when the hardware prefetcher is turned off. Acme Air's data access can be considered random, and we believe this is why the prefetcher does not improve performance. This is in fact fairly typical of Java applications, so it is worth checking on a case-by-case basis whether the hardware prefetcher should be enabled (to ensure that cases where the prefetcher helps are not missed).
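On Linux, the prefetcher can be controlled through the DSCR with the ppc64_cpu utility. A minimal sketch (the value 1 disables hardware prefetching; 0 restores the processor default):

# Show the current DSCR value
ppc64_cpu --dscr
# Disable the hardware prefetcher system-wide
ppc64_cpu --dscr=1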

CPU frequency

For performance measurements, we want the highest CPU frequency, and because the Acme Air workload is CPU bound, we set the frequency to the performance level through the Advanced System Management (ASM) interface. Figure 7 shows the setting used.

Figure 7. CPU frequency option

To confirm that we were using the highest CPU frequency during the workload run, we used the following command from the ASM interface:
getclockspeed pu.ex pu_coreclock mhz -all
p9n.ex k0:n0:s0:p00:c1 3137
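The frequency can also be sanity-checked from the Linux partition itself; ppc64_cpu samples the clock on the cores and reports the minimum, maximum, and average frequency:

ppc64_cpu --frequency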

JDK settings

We enabled verbose GC logging and collected the logs for each run along with the server logs. We added the option -Xverbosegclog:./logs/verbosegc.%seq.log,20,10000 to the Liberty JVM arguments so that the verbose GC log appears in the same location as the other server logs. The verbose GC output showed that the booking service JVM was spending more than 15% of its processor time doing GC during a test period.

That level of GC activity was excessive, and it led us to try running with a larger heap on all the Liberty instances. This reduced the CPU spent on GC, freeing it for application work. By default, each JVM starts as many GC threads as there are hardware threads, and when a JVM performs a GC operation, all of its GC threads are busy. This works well on a small system with a low CPU count, but when many JVMs run in a single OS image, GC thread contention can occur. Ideally, the total number of GC threads across all JVMs should not exceed four times the number of hardware threads. With eight cores at SMT8 (64 hardware threads) and 20 JVMs, we set the GC threads per JVM to eight (20 × 8 = 160 GC threads, within the 4 × 64 = 256 limit).

The following JVM options were used at run time:

# 2 GB maximum heap, to curb the excessive GC activity observed
-Xmx2g
# 1792 MB nursery (new area) within the heap
-Xmn1792m
# Cap GC parallelism at eight threads per JVM (20 JVMs sharing 64 hardware threads)
-Xgcthreads8
# Use large pages for the Java heap
-Xlp
# Reuse HTTP connections between the microservices
-Dhttp.keepalive=true
-Dhttp.maxConnections=700
# MicroProfile Rest Client endpoints for the downstream services
-Dcom.acmeair.client.CustomerClient/mp-rest/url=http://nginx1/customer
-Dcom.acmeair.client.FlightClient/mp-rest/url=http://nginx1/flight
# Rotating verbose GC logs written alongside the server logs
-Xverbosegclog:./logs/verbosegc.%seq.log,20,10000
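In Liberty, options like these typically live in the server's jvm.options file, one option per line. A minimal sketch of adding an option, assuming a default install path and server name (adjust both for your environment):

# /opt/ibm/wlp and defaultServer are illustrative; substitute your paths
echo "-Xgcthreads8" >> /opt/ibm/wlp/usr/servers/defaultServer/jvm.options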

Ingress controller

The default settings of the ingress controller in IBM Cloud Private work for most development activities; however, for a workload with high network traffic, we need to adjust them. The ingress controller of IBM Cloud Private is based on NGINX. Refer to:
https://www.nginx.com/blog/introducing-nginx-kubernetes-ingress-controller/

To handle the JMeter traffic, we needed to allow more worker connections and increase the timeout values for the IBM Cloud Private proxy server through the ingress controller. Figure 8 shows the settings that were added.

Figure 8. Ingress controller
{
  "apiVersion": "v1",
  "kind": "ConfigMap",
  "metadata": {
    "name": "nginx-load-balancer-conf",
    "namespace": "kube-system",
    "resourceVersion": "5261",
    "annotations": {
      "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"v1\",\"data\":
      {\"disable-access-log\":\"true\"},\"kind\":\"ConfigMap\",\"metadata\":{\"annotations\":{},
      \"name\":\"nginx-load-balancer-conf\",\"namespace\":\"kube-system\"}}\n"
    }
  },
  "data": {
    "body-size": "0",
    "disable-access-log": "true",
    "keepalive_requests": "10000",
    "max-worker-connections": "163840",
    "upstream-keepalive-connections": "1000",
    "worker-processes": "10",
    "worker_rlimit_nofile": "81920"
  }
}
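After editing the ConfigMap, reapply it so that the ingress controller picks up the new values. A sketch, assuming the JSON above has been saved as nginx-load-balancer-conf.json (an illustrative file name):

kubectl apply -f nginx-load-balancer-conf.json -n kube-system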

When deploying the Acme Air microservices within IBM Cloud Private, we also need to deploy ingress services that are used for this application. Refer to the following URL for ingress service configuration: https://www.ibm.com/support/knowledgecenter/en/SS5PWC/front_end_config_cfc_task.html

Because the ingress service exposes the application to external requests, we need to change its configuration to handle more requests. The following settings were used:

ingress.kubernetes.io/rewrite-target: /
ingress.kubernetes.io/ssl-redirect: "false"
ingress.kubernetes.io/connection-proxy-header

Conclusion

In this article, we set out some general best practices for getting the most out of your WAS or Java application running on the new POWER9 systems, and we applied them, along with a number of practices specific to this environment, to get great performance from an application running on IBM Cloud Private. As the data we present here and elsewhere shows, the combination of IBM Cloud Private, WAS Liberty, and POWER9 systems offers great performance with attractive price points and impressive density.
