PowerVM configuration and scaling with WebSphere Portal

This article describes a series of performance benchmarks that demonstrate the scaling properties of an IBM PowerVM® virtual machine running IBM WebSphere® Portal on the IBM AIX® operating system. Various PowerVM configuration settings are used to demonstrate how performance changes when CPU entitlements and virtual CPUs are changed.


Hunter Presnall (presnall@us.ibm.com), Portal Performance Team Lead, IBM

Hunter Presnall is the WebSphere Portal Performance team lead. He has over 14 years of performance testing and analysis experience. He has been with IBM Lotus for five years and has previously worked on IBM Connections and IBM SmartCloud.



Laura Yen (lyen@us.ibm.com), Portal Performance Analyst, IBM  

Laura Yen is a performance engineer on the WebSphere Portal Performance team in Research Triangle Park, North Carolina. She is responsible for all Portal benchmarks on AIX and contributes to the WebSphere Portal tuning guides for each Portal release.



09 May 2013


Summary

This article describes a series of performance benchmarks that demonstrate the scaling properties of a PowerVM virtual machine (VM) running WebSphere Portal on AIX. It briefly describes PowerVM CPU entitlements and how virtual CPUs are allocated. Then, this article documents the WebSphere Portal benchmark environment. Finally, the results show how WebSphere Portal scales when either the CPU entitlement changes or the number of virtual CPUs (vCPUs) changes. Uncapped and capped configurations are also compared, along with simple startup time measurements for various configurations.


PowerVM concepts

The following sections describe the main PowerVM concepts used in these benchmarks.

CPU entitlement

One of PowerVM's unique features is the ability to give a VM an exact slice of CPU time. In PowerVM, this slice is called a CPU entitlement. Implicit in this concept is that an entitlement can be for less than one CPU. A VM with a fractional entitlement is termed a micropartition. One question this article will answer is whether micropartitions scale linearly; that is, is there any overhead running with fractional CPUs?

Virtual CPUs

In addition to specifying the CPU entitlement, a VM must also specify the number of vCPUs it will use. These vCPUs are what the VM operating system sees, regardless of the underlying CPU entitlement.

A VM can have up to ten times as many vCPUs as its entitlement, rounded down; vCPUs must be specified as whole numbers. For example, a VM with an entitlement of 1 can have 10 vCPUs, while a VM with a 0.25 entitlement can only have 2 vCPUs. In addition, a VM must have at least 1 vCPU for each whole CPU of entitlement. For example, it is not possible to have an entitlement of 2 and only 1 vCPU.
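
To make these rules concrete, the following minimal Python sketch computes the vCPU counts allowed for a given entitlement. The function name is made up for illustration and is not part of any PowerVM tooling.

Listing 1. Allowed vCPU counts for a given entitlement

import math

def valid_vcpu_range(entitlement):
    """Allowed vCPU counts for a shared-processor VM, per the rules above:
    at least one vCPU per whole CPU of entitlement, and at most ten times
    the entitlement, rounded down."""
    min_vcpus = max(1, math.ceil(entitlement))
    max_vcpus = math.floor(entitlement * 10)
    return range(min_vcpus, max_vcpus + 1)

print(list(valid_vcpu_range(1.0)))    # 1 through 10 vCPUs
print(list(valid_vcpu_range(0.25)))   # [1, 2]
print(2 in valid_vcpu_range(2.0), 1 in valid_vcpu_range(2.0))  # True False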

The interesting question here is whether a multithreaded workload benefits more from having PowerVM manage threads or from having the operating system manage them. PowerVM manages a multithreaded workload by scheduling additional vCPUs for the VM; the AIX operating system handles multiple threads by context switching between the CPU threads it can see.

Capped compared to uncapped

Another PowerVM feature is the ability for VMs to use more CPU processing time when needed. If one VM is not using its CPU for a given scheduling interval, other VMs can use this CPU time to do more work. To use additional processor cycles, a VM must be in uncapped mode, which means it can consume up to as many CPUs as it has vCPUs. For example, a VM with a 0.25 entitlement and 2 vCPUs in uncapped mode could use up to 2 CPUs worth of processing; if the VM had only 1 vCPU, it could use only 1 CPU. In capped mode, the same VM would never use more than its 0.25 entitlement, regardless of its vCPU count. A separate set of benchmarks was run to measure how VMs with 2 vCPUs scale with different entitlements and uncapped settings.

The extra CPUs used by uncapped virtual machines can come either from other shared-CPU VMs or from dedicated-CPU VMs that allow processor sharing. For dedicated-CPU VMs, this is called donating mode.
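
The capped and uncapped ceilings can be summarized in a short sketch. The helper below is illustrative only; it simply restates the rule described above.

Listing 2. Maximum CPU capacity for capped and uncapped VMs

def max_cpu_capacity(entitlement, vcpus, capped):
    """Maximum CPU the VM can consume in a scheduling interval: a capped VM
    never exceeds its entitlement; an uncapped VM can grow up to its vCPU
    count when spare cycles are available in the shared pool."""
    return entitlement if capped else float(vcpus)

# The example from the text: 0.25 entitlement with 2 vCPUs
print(max_cpu_capacity(0.25, 2, capped=False))  # 2.0 CPUs at most
print(max_cpu_capacity(0.25, 1, capped=False))  # 1.0 CPU at most
print(max_cpu_capacity(0.25, 2, capped=True))   # 0.25 CPU, regardless of vCPUs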

Configuration in the HMC

All the above settings are managed in the Hardware Management Console (HMC) under the Manage Profiles item for the VM. After opening the VM's profile, all CPU-related settings are in the Processors tab.

In dedicated mode, the VM cannot have fractional entitlements. The number of vCPUs will always be the same as the entitlement.

In Figure 1, the VM is using dedicated CPUs with no donating. It has an entitlement of 2 and will present 2 CPUs to AIX.

Figure 1. HMC configuration for a dedicated CPU VM

In shared mode, both the CPU entitlement and the number of vCPUs must be specified. If the VM should run uncapped, the uncapped option must also be selected.

In Figure 2, the VM is using shared CPUs. It is set with an entitlement of 0.5, has 2 vCPUs and runs in capped mode.

Figure 2. HMC configuration for a shared CPU VM
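
The same settings can also be applied through the HMC command line with chsyscfg. The sketch below simply assembles such a command for the configuration shown in Figure 2; the profile attribute names are assumptions based on common HMC releases and the system and partition names are placeholders, so treat it as an illustration rather than a recipe.

Listing 3. Assembling an HMC profile change for the Figure 2 configuration

# Illustrative only: verify attribute names with "lssyscfg -r prof" on your
# HMC before changing anything. Managed system and LPAR names are placeholders.
managed_system = "Power730-SN0000000"
attributes = {
    "name": "default_profile",
    "lpar_name": "portal_vm",
    "proc_mode": "shared",
    "min_proc_units": "0.25",
    "desired_proc_units": "0.5",   # the 0.5 entitlement from Figure 2
    "max_proc_units": "2.0",
    "min_procs": "1",
    "desired_procs": "2",          # the 2 vCPUs from Figure 2
    "max_procs": "2",
    "sharing_mode": "cap",         # "uncap" (plus uncap_weight) for uncapped mode
}
attribute_string = ",".join(f"{key}={value}" for key, value in attributes.items())
print(f'chsyscfg -r prof -m {managed_system} -i "{attribute_string}"')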

WebSphere Portal benchmark

To answer the questions posed above, a set of measurements will be run on WebSphere Portal 8.0.0.1.

Configuration

For this benchmark, a simple set of test portlets was installed. These portlets make no requests to external databases, files, or other systems. Tuning was applied as specified in the Portal 8.0 Tuning Guide to ensure that database access by WebSphere Portal is minimal after warmup. This means that the benchmark is limited only by the CPU resources of the WebSphere Portal server.

To simulate a realistic customer application, the test portlets were installed on a set of several hundred portal pages in a 2-level hierarchy. Each user can see 26 pages. Users are stored in a separate Lightweight Directory Access Protocol (LDAP) server that is not running on the system under test.

The system under test is an IBM Power Systems™ 730 model with 16 CPUs running at 3.55 GHz. This system has 128 GB of memory. This system runs the following VMs:

  • WebSphere Portal server
    • Various CPU configurations
    • 16 GB of memory
    • WebSphere Portal 8.0.0.1 and IBM HTTP Server
    • AIX 7.1 TL1 SP5
  • WebSphere Portal database server
    • 1 dedicated CPU
    • 8 GB memory
    • IBM DB2® 9.7
    • AIX 7.1 TL1 SP5
  • VIO server
    • 1 dedicated CPU
    • 4 GB memory
    • VIOS 2.2.1.4

Both the WebSphere Portal server and the DB2 server are connected to the same virtual network, run by the Virtual I/O Server (VIOS). Traffic between Portal and the database never leaves the system. The VIOS also exposes a storage area network (SAN) logical unit (LUN) to each VM as a vSCSI disk.

The WebSphere Portal server also runs IBM HTTP Server. This server is configured to cache static content in a memory cache. At steady state, the majority of the requests being made to WebSphere Portal are for dynamic pages, not static content in portlet WAR files.

For all measurements, the same WebSphere Portal VM was used. The configuration was changed by altering the profile in the HMC as detailed above, which ensured that the system being tested was otherwise identical. Note that the system was tuned to achieve maximum throughput on the largest configuration tested. Some settings might not be ideal for smaller VM configurations, but this was deemed acceptable given the goals of these benchmarks.

Measurement scenario

The above configuration is used to run a measurement workload with IBM Rational® Performance Tester. This workload consists of five separate paths through the site. The first path is unauthenticated and only visits public pages. In paths two and three, the users log in and access either pages that all users can see or pages specific to the groups the user is a member of. In paths four and five, users access pages that contain "application" portlets that drive more CPU utilization by simulating a shopping cart application and other business processes. These paths are differentiated like paths two and three: one contains applications that all users can see; the other, only applications applicable to the user's groups.

In Rational Performance Tester, this workload is ramped up to increasing transaction levels by increasing the number of virtual users. Each level is run for 15 minutes to ensure that sufficient transactions are measured. The measurement is stopped when the average response time for any transaction exceeds 1 second at a given throughput level. Peak throughput is recorded as the previous level; that is, the last level at which all average response times stayed below the threshold.
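
This stopping rule can be expressed compactly. The sketch below is a minimal illustration of it; the input format is hypothetical and is not a Rational Performance Tester API.

Listing 4. Determining peak throughput from the measured load levels

RESPONSE_TIME_LIMIT = 1.0  # seconds, average per transaction

def peak_throughput(levels):
    """levels: (throughput, {transaction: avg_response_time}) tuples in
    order of increasing load. Returns the last throughput level at which
    every transaction stayed under the response-time limit."""
    peak = None
    for throughput, response_times in levels:
        if any(rt > RESPONSE_TIME_LIMIT for rt in response_times.values()):
            break              # first failing level ends the measurement
        peak = throughput      # this level passed; remember it
    return peak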

The Rational Performance Tester agents were all run on IBM System x® servers running Red Hat Enterprise Linux® 6.3. The agent systems are connected to the system under test by a dedicated gigabit network used only for benchmark traffic.


Capped results

The first set of measurements was done on various capped configurations. This means that the CPU resources given to the VM are static for the duration of each benchmark and each measurement can be compared to the others for scaling purposes.

The configurations tested varied the entitlement from 0.25 to 3. Separately, the number of vCPUs was also varied between 1 and 3, where possible. This generates the following test matrix:

Table 1. Test matrix for capped CPU configurations (X = measured combination)

CPU entitlement | 1 vCPU | 2 vCPUs | 3 vCPUs
0.25            |   X    |   X     |   X
0.5             |   X    |   X     |   X
0.75            |   X    |   X     |   X
1               |   X    |   X     |   X
1.5             |        |   X     |   X
2               |        |   X     |   X
3               |        |         |   X

Again, the entitlement can never exceed the number of vCPUs, but the number of vCPUs can be up to 10 times the entitlement (rounded down).
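
The matrix in Table 1 can be generated from the minimum-vCPU rule. The short Python sketch below is illustrative only; it prints an X for each entitlement and vCPU combination that was measured.

Listing 5. Generating the Table 1 test matrix

import math

ENTITLEMENTS = [0.25, 0.5, 0.75, 1, 1.5, 2, 3]
VCPU_COUNTS = [1, 2, 3]

for entitlement in ENTITLEMENTS:
    # A combination is measured only if there is at least one vCPU per
    # whole CPU of entitlement, up to the 3 vCPUs used in this benchmark.
    row = ["X" if vcpus >= math.ceil(entitlement) else "-" for vcpus in VCPU_COUNTS]
    print(f"{entitlement:>5}  " + "  ".join(row))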

Running these tests gives the following results:

Figure 3. Capped CPU results

First, looking across the results as a whole, WebSphere Portal scales nearly linearly with respect to the entitlement. This indicates that there is minimal overhead when PowerVM is scheduling fractional CPU resources on micropartitions.

Within each entitlement level however, there is a penalty for allocating more vCPUs. PowerVM assigns each vCPU to a separate real CPU, so the performance degradation when moving from 1 to 2 vCPUs might be due to L2 cache misses as threads move between multiple real CPUs.

However, running with fewer vCPUs is not always better, because it limits how much additional capacity the VM can use when uncapped.


Uncapped results

A second set of measurements was done using 2-CPU VMs. The baseline measurement is a dedicated configuration with an entitlement of 2. This was compared to uncapped configurations that varied the entitlement while keeping the number of vCPUs fixed at 2.

Because all the configurations run with 2 vCPUs, they can all potentially use 2 real CPUs worth of resources. Thus, they should all have the same throughput.

Running these tests gives the following results:

Figure 4. Uncapped CPU results

First, note that a dedicated CPU VM and a shared CPU VM have the same performance. This indicates that the PowerVM scheduling overhead difference is minimal when moving to a shared CPU virtual machine.

Similarly, an uncapped VM with an entitlement of 1.5 performs as expected. Because there is no other significant processing on the system and a large number of free CPUs, PowerVM can easily give this configuration 2 CPUs worth of processing at peak load.

On the other hand, the 1 entitlement configuration did not perform as expected. It appears that PowerVM does not give this configuration 2 CPUs even when it could use the resources. This might be because, when uncapped, this VM can consume double its entitlement (2 vCPUs against an entitlement of 1). More investigation is needed here, but it is recommended that uncapped configurations that greatly increase the effective entitlement be tested prior to deployment.


WebSphere Portal startup

Another interesting item to measure is the time it takes WebSphere Portal to start. This is not a significant issue in production systems, where WebSphere Portal is expected to be started only occasionally. However, on development or test systems, which frequently have low entitlements, it is useful to know whether WebSphere Portal startup takes significantly longer with fewer CPU resources.

The time to start WebSphere Portal was measured using the UNIX 'time' command. Startup directly after a VM restart was measured first; this is a "cold" startup, where there is nothing in the processor caches or the operating system's file caches. After measuring cold startup, WebSphere Portal was stopped and then started again. This second time is the "warm" startup time, which should be faster because the operating system's caches are already warmed up.
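
The measurement itself is simple enough to script. The sketch below wraps the startup command in the same way the UNIX 'time' command does; the installation path and server name are assumptions based on a default WebSphere Portal installation on AIX, so adjust them for your environment.

Listing 6. Timing a WebSphere Portal startup

import subprocess
import time

# Assumed default profile location and server name; change as needed.
START_COMMAND = ["/usr/IBM/WebSphere/wp_profile/bin/startServer.sh",
                 "WebSphere_Portal"]

start = time.monotonic()
subprocess.run(START_COMMAND, check=True)  # blocks until the server has started
elapsed = time.monotonic() - start
print(f"WebSphere Portal started in {elapsed:.0f} seconds")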

Running these tests gives the following results:

Figure 5. WebSphere Portal startup time

As expected, WebSphere Portal startup time improves nearly linearly as the VM is given more CPU resources. Warm startup times are faster than cold startup times, also as expected.

These results indicate that systems that need few resources on average would still benefit from more resources while WebSphere Portal is starting. For development and test machines, uncapped allocations are ideal: many such VMs can fit in a small processor pool yet still get 1 or more CPUs when needed to start WebSphere Portal quickly.


Conclusions

The capacity of WebSphere Portal running on PowerVM scales linearly with the CPU entitlement. Adding more virtual CPUs than the entitlement requires, however, carries a performance penalty and should be avoided if the extra capacity is not needed.

One consideration is whether a VM occasionally needs more than its given CPU entitlement. In that case, adding additional vCPUs and setting the configuration to uncapped allows the VM to use more resources when needed. Note that this particular benchmark shows that uncapped configurations might scale better when the uncapped maximum is not significantly larger than the base entitlement.

Uncapped allocations also benefit WebSphere Portal startup times as they allow smaller VMs to access the higher CPU resources needed to start WebSphere Portal as quickly as possible.

Finally, note that the results presented here are the ideal case because the system tested has many free CPUs and there is no competition for resources. The overhead of switching workloads between shared processors will be greater when the whole system is in use. PowerVM overhead might also be higher when many VMs are requesting extra CPU cycles from other idle VMs.

