Over the past few weeks, we've published several articles and Performance Perspectives columns describing how we at Lotus have been incorporating Rational testing and analysis tools into our development and test environments. These tools (such as IBM Rational Performance Tester) provide us with a new set of technologies to ensure the quality and scalability of all our products. We now use IBM Rational Performance Tester to develop load and scalability workloads that simulate end users executing focused activity in many of Lotus's product offerings.
For those of you unfamiliar with Rational tools, IBM Rational Performance Tester consists of several integrated components, each with a specific purpose in the performance testing workflow:
Figure 1. IBM Rational Performance Tester and components
These components perform the following:
- Rational Robot provides script recording and editing.
- Rational TestManager provides test scheduling, test reporting, and test management.
- Rational Test Agent provides concurrent execution of test scripts on multiple Windows and UNIX machines.
With Rational Robot, we can record client-server protocol interactions in a script in Robot's C-like Virtual User (VU) language. This script, when executed, appears to the servers under test as a real user using the actual client application. In TestManager, we can define and execute a workload (or suite in Rational terminology) to run multiple concurrent virtual testers, each running one or more test scripts recorded previously in Robot.
One of the major challenges in developing workloads is planning and utilizing test scripts. These are the programs that simulate a user running the application under test. The scalability of the test script often determines the size and complexity of the user workload we can run in our tests. Also, the same test script may scale differently depending on the platform on which it's running. IBM Rational Performance Tester allows us to run workloads on different platforms because its Rational Test Agent component can distribute test scripts to Windows 2000, AIX, Linux, HP-UX, and Solaris machines for execution. (Note, however, that TestManager, which controls overall workload execution and reporting, can only run on Windows 2000, Windows XP, and Windows NT.)
This article describes a study we performed to compare the scalability of Robot test scripts on Windows 2000 and Linux, using identical hardware, software, and workload (suite) definitions. The application we tested was IBM Lotus QuickPlace, Lotus's Web tool for team collaboration. In this study, we found that Robot test scripts scaled significantly better on Linux, both in memory usage per simulated user and in the total number of virtual users possible with a given configuration.
This article assumes that you have some familiarity with performing system analysis using Server.Load, IBM Rational Performance Tester, or another enterprise-level load and scalability tool. QuickPlace experience will also be helpful.
When developing our QuickPlace workload, we focused on CPU-intensive activities. Our primary concern wasn't with trying to make our test environment as close as possible to an actual customer environment. Instead, we wanted some indication of how the two different platforms (Windows 2000 and Linux) affected the scalability of our test scripts. Therefore, the data we derived (discussed later in this column) may or may not correspond with results seen in a real QuickPlace environment.
Our study required three separate computers: one local computer to run the TestManager console, one to run a Test Agent instance on Windows 2000, and one to run a Test Agent instance on Linux. All three machines were IBM xSeries computers with one 1.2 GHz CPU, 1.3 GB RAM, and 36 GB hard disk. Two of these machines were running Windows 2000 with Service Pack 3. The third ran Red Hat Linux 7.2. On both systems, we tweaked configuration settings to optimize performance, as instructed by the Rational documentation. Other than these optimizations, we used default settings throughout our study.
Our workload (or suite in Rational terminology) simulated a group of users performing the following tasks:
- Log in
- "Sleep" for 30 minutes (this allowed us to log in a large number of users)
- Open My Places
- View each page on My Place until the last page appears (the number of pages varied from 9 to 100 for a total of 1,250)
- Log out
As you can see, this is a focused, "stressful" workload, not intended to simulate typical QuickPlace user actions.
We conducted our tests with the same number of users running on both the Windows and Linux machines. After all simulated users had been running long enough to ensure a stable load on our test scripts, we used perfmon on Windows and top on Linux to collect the following metrics. (Top is a free command-line tool that displays ongoing system activity; one useful feature is its ability to display the memory usage of individual processes. You can also use the command ps -elf to capture memory statistics.)
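Because each virtual tester runs as its own process (named rtvuser, as described below), per-user memory can also be sampled directly from the process table. The following is a minimal sketch of that idea; the awk averaging is our illustration rather than part of the Rational tooling, and on a machine with no suite running it simply reports that no rtvuser processes were found:

```shell
# Sum the resident memory (RSS, reported in KB by ps) of every rtvuser
# process and print the per-process average in MB.
ps -eo comm=,rss= | awk '$1 == "rtvuser" { n++; kb += $2 }
    END { if (n) printf "%d procs, %.1f MB avg\n", n, kb / n / 1024
          else print "no rtvuser processes found" }'
```

Sampling this periodically during a run gives the same per-user numbers that watching top interactively does, but in a form that is easy to log.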
- Memory usage per user process
Each user ("virtual tester" in Rational terminology) ran as a separate process named rtvuser, which makes it easy to identify in perfmon or top. To plan our hardware usage, we needed to determine the resources consumed per virtual user. This also helps us determine how many virtual users we can run with one test script (see the following section).
- Maximum number of virtual users supported
We calculated this number by monitoring total CPU utilization, starting with 100 users per workload and increasing by increments of 100 until CPU utilization approached 100 percent for that platform.
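On Linux, the CPU-utilization check used in the ramp-up above can be scripted rather than read off top's display. This sketch (a modern-Linux convenience; it is not how the original study collected the figure) derives total utilization from two samples of the aggregate tick counters in /proc/stat:

```shell
# First four counters on the "cpu" line of /proc/stat are
# user, nice, system, and idle ticks since boot.
read -r _ u1 n1 s1 i1 _ < /proc/stat
sleep 1
read -r _ u2 n2 s2 i2 _ < /proc/stat
busy=$(( (u2 - u1) + (n2 - n1) + (s2 - s1) ))
total=$(( busy + (i2 - i1) ))
echo "CPU utilization: $(( 100 * busy / total ))%"
```

Wrapping this in a loop makes it straightforward to log utilization at each 100-user increment and spot when the agent machine approaches saturation.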
Note that throughout our tests, the QuickPlace server itself was not a performance bottleneck: it was not constrained by resources, nor did it produce errors. The following sections describe our results.
When determining memory per user, both perfmon and top measure what is referred to as the process image. This consists of both the non-shared (private) memory dedicated to the user process, plus the user process's portion of shared memory (which remains relatively level across user populations). However, to get an accurate indication of how memory usage compares between the Windows 2000 and Linux platforms, we focused on private memory usage. To calculate private memory usage, we measured total physical memory (and for comparison virtual memory) consumed for significant numbers of virtual testers:
| Number of virtual users | Total virtual memory allocated (MB) | Total physical memory allocated (MB) |
| --- | --- | --- |
We then extrapolated these numbers to determine the incremental memory consumed per additional virtual user for Linux and Windows 2000, as shown in the following graphic:
Figure 2. Windows 2000 vs Linux: Memory usage per virtual user
As you can see, our test script running on Linux consumed significantly less memory per user than when running on Windows 2000. The average simulated user on Windows 2000 required 3 MB of memory, while the identical user running on Linux needed less than half this amount, approximately 1.3 MB.
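The extrapolation behind these per-user figures amounts to fitting a line through the (user count, total memory) measurements; the slope of that line is the incremental memory per additional virtual user. A minimal sketch with invented round numbers chosen to echo the 3 MB/user Windows 2000 result (substitute your own perfmon or top measurements):

```shell
# Each input line is "<virtual users> <total private memory in MB>".
# The awk program computes a least-squares slope over the samples.
printf '%s\n' "100 300" "200 600" "300 900" |
awk '{ n++; sx += $1; sy += $2; sxx += $1 * $1; sxy += $1 * $2 }
     END { slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
           printf "%.1f MB per additional virtual user\n", slope }'
```

Using the slope rather than a simple total/users division discounts the fixed per-platform overhead that is present even at zero users.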
Again, bear in mind that these numbers represent private memory only and do not include shared memory. Therefore, if you conduct your own study using perfmon and/or top, your results may differ significantly if you do not subtract shared memory from your numbers.
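On current Linux kernels (2.6.14 and later; the Red Hat 7.2 system in this study predates this), the private portion of a process's footprint can be read directly from /proc/&lt;pid&gt;/smaps, which sidesteps the shared-memory pitfall entirely. A sketch, using the current shell's own PID as a stand-in for an rtvuser process:

```shell
# Sum the Private_Clean and Private_Dirty lines of a process's smaps
# file to get its non-shared resident memory in MB.
pid=$$   # stand-in PID; use an rtvuser PID in a real measurement
awk '/^Private_(Clean|Dirty):/ { kb += $2 }
     END { printf "private: %.1f MB\n", kb / 1024 }' "/proc/$pid/smaps"
```

This per-process view is finer-grained than the whole-system subtraction we performed, but both approaches converge on the same per-user increment.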
We also compared the maximum number of QuickPlace users we could run on Linux and Windows 2000 before reaching memory saturation:
Figure 3. Windows 2000 vs Linux: Maximum number of users
Our test script on Linux could support approximately 2,000 simulated users, while on Windows 2000 memory usage reached its maximum at 547 users. In fact, beyond 270 users on Windows 2000, all activity was paged to the hard disk, significantly impacting performance.
The following table summarizes our findings:
| | Windows 2000 | Linux |
| --- | --- | --- |
| Memory per user | 3 MB | 1.3 MB |
| Maximum number of users | 547 (above 270, all user activity required paging) | 2,000 |
| Recommended number of users | 270 | 600 to 1,000 |
Based on our observations, it appears that Linux can support considerably more simulated QuickPlace users than Windows 2000. This conforms with earlier, "anecdotal" feedback we had gathered indicating Linux scales better than Windows 2000 in certain applications. Therefore, we conclude that Linux is the preferred platform for Rational Test Agent where large numbers of simulated users are required.
Of course, these numbers only represent one study done with virtual users on one specific hardware configuration. They do not necessarily indicate exactly how these platforms would perform in a real QuickPlace environment. However, we feel confident our results are valid for testing purposes and can be useful to others planning to create IBM Rational Performance Tester workloads for QuickPlace scalability analysis.
For more information on using Rational Robot for load and scalability evaluation, see the article, "Using Rational Suite TestStudio to analyze a Domino application."
The article "Creating a Lotus Workplace Messaging test scenario with Rational Suite TestStudio" also provides additional information on this topic.
Yuriy Veytsman has been a Staff Software Engineer with IBM/Lotus since the late 1990s, working on testing projects for Domino Web Access (iNotes), Discovery Server, Lotus Team Workplace (QuickPlace), and Lotus Instant Messaging (Sametime), among other responsibilities. Previously, Yuriy developed a variety of software and hardware applications for companies throughout Europe.