Server Sizing Guide - IBM Rational Quality Manager Version 2.0

This article provides sales teams and customers guidance on how to specify the appropriate deployment architecture for IBM® Rational® Quality Manager Version 2.0. It provides specific results and analysis of performance testing that was executed on Rational Quality Manager V2.0, and describes how the results of those tests provide direction on effective hardware and deployment topologies.


Chuck Berry (cberry2@us.ibm.com), Advisory Software Engineer, System Verification Test, IBM Rational Software

Chuck Berry has worked in software development and quality engineering for 27 years. He has many years of software testing experience and, over the past 12 years with Rational and IBM, he has tested numerous products, including IBM Rational Robot, LoadTest, IBM Rational Quality Architect, IBM Rational TestManager, IBM Rational Functional Tester, IBM Rational ClearQuest test management, and IBM Rational Quality Manager. He has worked with IBM Rational quality management software for the past six years. Currently, Mr. Berry focuses on testing the performance of Rational Quality Manager.



08 April 2010


Introduction

This article provides sales teams and customers guidance on how to specify the appropriate deployment architecture for IBM® Rational® Quality Manager Version 2.0. It provides specific results and analysis of performance testing that was executed on IBM Rational Quality Manager V2.0, and describes how the results of those tests provide direction on effective hardware and deployment topologies.

This article does not provide an overview of Java™ 2 Platform, Enterprise Edition (J2EE) scaling, nor does it cover tuning parameters to be used in a large-scale deployment.


Disclaimer

Each Rational Quality Manager installation and configuration is unique. The performance data reported in this document are specific to the product software, test configuration, workload, and operating environment that were used. The reported data should not be relied upon, as performance data obtained in other environments or under different conditions may vary significantly.

This document is meant to be a living document and will continue to be regularly refined and updated to reflect the results of our ongoing test efforts, as well as updates and refinements to the product.


Server sizing suggestions

Table 1 details server sizing suggestions extrapolated from the test results provided in this document. The performance of any specific deployment is ultimately dependent on a large number of factors, including (but not limited to) the number of users, workload, number and mix of assets, network topology, tuning of hardware and software, presence of network traffic, and so on. Therefore, these suggestions may not be suitable for any deployment that deviates from the conditions and workloads described in this article. This testing attempts to simulate an actual workload, but no simulation can fully account for the complex behaviors of a team of end-users.

All of the following suggestions assume server-class machines (for example, IBM® System x® or better) with a fast RAID or fiber channel disk subsystem.

Table 1: Server sizing suggestions
Users / Asset load | Up to 200 users / less than 100 K | Up to 300 users / less than 250 K | Up to 500 users / less than 500 K

Application server
System           | IBM System x3250 M2 | IBM System x3250 M2 | IBM System x3250 M2
Total cores      | At least 2 | At least 2 | At least 4
Total processors | 1 | 1 | 1
CPU              | Pentium E5300 Dual Core 2.6 GHz, 2 MB cache | Xeon E3120 Dual Core 3.16 GHz, 6 MB cache | Xeon X3330 Quad Core 2.66 GHz, 6 MB cache
Total RAM        | 4 GB | 4 GB | 8 GB

Database server
System           | IBM System x3250 M2 | IBM System x3250 M2 | IBM System x3250 M2
Total cores      | At least 2 | At least 2 | At least 2
Total processors | 1 | 1 | 1
CPU              | Pentium E5300 Dual Core 2.6 GHz, 2 MB cache | Pentium E5300 Dual Core 2.6 GHz, 2 MB cache | Pentium E5300 Dual Core 2.6 GHz, 2 MB cache
Total RAM        | 4 GB | 4 GB | 4 GB
Total disk       | ≥ 100 GB | ≥ 200 GB | ≥ 400 GB

With the exception of our Rational Quality Manager server, all of the testing in this exercise was performed on 32-bit systems running Microsoft® Windows® Server 2003 Enterprise Edition. Less than 4 GB of memory was utilized on these 16 GB systems. It is also a good practice to isolate the data tier from the application tier; therefore, your deployment should place the application server and the database server on separate machines.
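As a convenience, the tiers in Table 1 can also be expressed programmatically. The following Python sketch is a hypothetical sizing helper (not part of Rational Quality Manager); the tier boundaries and hardware descriptions are copied directly from Table 1, and loads outside those ranges need their own capacity planning.

```python
# Hypothetical sizing helper based on Table 1 of this guide.
# Tier boundaries and hardware descriptions are taken from the table.

TIERS = [
    # (max_users, max_assets, application_server, database_server)
    (200, 100_000,
     "x3250 M2, 1x Pentium E5300 dual core 2.6 GHz, 4 GB RAM",
     "x3250 M2, 1x Pentium E5300 dual core 2.6 GHz, 4 GB RAM, >= 100 GB disk"),
    (300, 250_000,
     "x3250 M2, 1x Xeon E3120 dual core 3.16 GHz, 4 GB RAM",
     "x3250 M2, 1x Pentium E5300 dual core 2.6 GHz, 4 GB RAM, >= 200 GB disk"),
    (500, 500_000,
     "x3250 M2, 1x Xeon X3330 quad core 2.66 GHz, 8 GB RAM",
     "x3250 M2, 1x Pentium E5300 dual core 2.6 GHz, 4 GB RAM, >= 400 GB disk"),
]

def suggest_tier(users: int, assets: int):
    """Return (app_server, db_server) for the smallest tier that covers the load."""
    for max_users, max_assets, app, db in TIERS:
        if users <= max_users and assets <= max_assets:
            return app, db
    raise ValueError("Load exceeds the ranges covered by Table 1; size separately.")

if __name__ == "__main__":
    app, db = suggest_tier(users=250, assets=180_000)
    print("Application server:", app)
    print("Database server:   ", db)
```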


Repository size observations

The guidelines in Table 2 can be used to help determine the disk resources needed by Rational Quality Manager assets. For these tests:

  • The size on disk per 1,000 assets was determined by creating an empty repository on the database server machine, and then using custom data creation scripts to generate 1,000 of each type of asset.
  • The assets generated included fixed-size data (see the center column of Table 2).
  • The database used for this exercise was an IBM® DB2® Version 9.1 database.
Table 2: Asset sizing suggestions
Asset type | Size of data added to each asset | Total size per 1,000 assets
Test plan | 18 KB | 39 MB
Test script | 1 KB | 12 MB
Test case | 112 KB | 40 MB
Machine or virtual machine | n/a | 8 MB
Request | 2 KB | 18 MB
Reservation | 2 KB | 6 MB
Defect | 2 KB | 11 MB
Test environment | n/a | 7 MB

Rational Quality Manager compresses its assets, so the total repository size may be less than the sum obtained by multiplying each asset count by its per-asset size. The amount of compression depends on the compressibility of the data in each asset.
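As a worked example of applying Table 2, the following Python sketch (an illustrative calculator, not a tool shipped with the product) estimates disk usage from planned asset counts. The per-1,000-asset sizes come from Table 2; because of the compression described above, the result is an upper bound.

```python
# Estimate disk usage from Table 2 (MB per 1,000 assets).
# This is an upper bound: asset compression can reduce the actual size.

MB_PER_1000 = {
    "test plan": 39,
    "test script": 12,
    "test case": 40,
    "machine or virtual machine": 8,
    "request": 18,
    "reservation": 6,
    "defect": 11,
    "test environment": 7,
}

def estimate_repository_mb(asset_counts: dict) -> float:
    """Return the estimated repository size in MB for the given asset counts."""
    return sum(MB_PER_1000[kind] * count / 1000.0
               for kind, count in asset_counts.items())

# Example: a modest repository with a few thousand assets of each type.
planned = {"test plan": 500, "test script": 15_000, "test case": 15_000,
           "defect": 5_000, "reservation": 8_000}
print(f"Estimated size: {estimate_repository_mb(planned):,.0f} MB")
```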


Performance test goals

The goals of the performance testing executed to obtain data for this exercise were to:

  • Determine the level of user loads that could be driven on a standard platform.
  • Collect performance data metrics for a mixture of asset loads.

Asset scalability

The number of assets stored in a repository can affect performance significantly. With a given user load, some test scenarios returned (and processed) a larger number of search results as the number of assets grew.

Table 3 shows the asset mix in the repository at the beginning of each test run. Three different size repositories were used in this test: 50 K assets, 250 K assets and 500 K assets.

Table 3: Asset mix
Item type | 50 K | 250 K | 500 K
Test plan | 100 | 500 | 1,000
Test script | 3,000 | 15,000 | 30,000
Test case | 3,000 | 15,000 | 30,000
Test execution record | 4,000 | 20,000 | 40,000
Execution result | 20,000 | 100,000 | 200,000
Lab resources | 1,200 | 6,000 | 12,000
Test suites | 300 | 1,500 | 3,000
Request | 125 | 625 | 1,250
Reservation | 1,600 | 8,000 | 16,000
Work items | 14,197 | 70,985 | 141,970
Test environment | 200 | 1,000 | 2,000
Test phases | 700 | 3,500 | 7,000
Totals | 48,422 | 242,110 | 484,220
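Note that the 250 K and 500 K mixes in Table 3 are exact 5x and 10x multiples of the 50 K mix. The following Python sketch (an assumption about how such a mix could be scripted, not the actual data-creation scripts used in this test) derives the larger mixes from the base mix:

```python
# The 50 K asset mix from Table 3; the larger repositories in this test
# were simple multiples of it (5x for 250 K, 10x for 500 K).
BASE_MIX_50K = {
    "test plan": 100, "test script": 3_000, "test case": 3_000,
    "test execution record": 4_000, "execution result": 20_000,
    "lab resources": 1_200, "test suites": 300, "request": 125,
    "reservation": 1_600, "work items": 14_197,
    "test environment": 200, "test phases": 700,
}

def scaled_mix(factor: int) -> dict:
    """Return the asset mix scaled by an integer factor."""
    return {kind: count * factor for kind, count in BASE_MIX_50K.items()}

for factor in (1, 5, 10):
    mix = scaled_mix(factor)
    print(f"{factor:>2}x mix -> {sum(mix.values()):,} assets")
# Prints 48,422 / 242,110 / 484,220 assets, matching the Table 3 totals.
```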

User scalability

It is important to consider user scalability while holding the number of assets and the configuration constant. The tests were run with different numbers of users, starting at 100 and going up to 500. All users in the user load are considered active, meaning that they regularly perform operations from the workload described in the User workload simulation section.


Performance test configuration

Rational Quality Manager was tested with three different repository sizes: 50,000 assets, 250,000 assets, and 500,000 assets. Hereafter, these repositories are referred to as the 50K, 250K, and 500K repositories. A varying number of test runs was executed against each repository size to ascertain the behavior of the system under load and to validate the maximum user load. The hardware used for the application server and database server remained constant for all tests. The test environment consisted of an IBM® Rational® Performance Tester Version 8.1 workbench and two agent machines.

To reduce variability in the test results, the tests were executed on an isolated network. Basic tuning commonly used by Rational performance test teams was applied to the servers to improve file system, memory, and TCP/IP handling.

The test configuration also included a Rational Quality Manager Report Authoring server. It is a best practice to install the Report Authoring server on its own machine to remove the cost of report processing from the application server and the database server. The performance results in this guide reflect a deployment with the Rational Quality Manager Report Authoring server installed on its own machine. Figure 1 shows the configuration of the test environment used in this test.

Figure 1: Performance test topology
User load generation, application and database tiers

The following tables (Table 4 and Table 5) detail the configuration of the machines used in the topology shown in Figure 1. For each machine, the tables list:

  • The system model
  • Processor type
  • Number of processors and speed
  • Amount of RAM
  • Specific operating system version (including any service- or fix-pack levels).

Table 4 captures the machine characteristics of the system under test, and Table 5 captures the machine characteristics of the test environment.

Table 4: Machine characteristics of the system under test
Machine role | System model | Processor architecture | Processors (number/speed) | Memory | Operating system version
Rational Quality Manager server (IBM® WebSphere® Application Server V6.1.0.0 FP23) | IBM x3550 | IA32/HT EM64T | 2/3.0 GHz (Hyper-Threaded) | 16 GB | Windows 2008 Enterprise Server, x86-64 edition
IBM DB2 server V9.1 FP4 | IBM x336 | IA32/HT | 2/3.6 GHz | 4 GB | Windows 2003 Enterprise Server SP2
Rational Quality Manager Report Authoring server | IBM x3550 | IA32/HT EM64T | 2/3.0 GHz (Hyper-Threaded) | 16 GB* | Windows 2003 Enterprise Server SP2
License server | IBM® ThinkCentre M50 | IA32/HT EM64T | 1/2.9 GHz | 1 GB | Windows 2003 Enterprise Server SP1

*Although these machines have 16 GB of memory, Windows 2003 is limited in the amount of memory that it can address (and the Physical Address Extension patch was not used).

Table 5: Machine characteristics of the test environment
Machine role | System model | Processor architecture | Processors (number/speed) | Memory | Operating system version
Rational Performance Tester Workbench V8.1 | IBM x3550 | IA32/HT EM64T | 2/3.0 GHz (Hyper-Threaded) | 16 GB* | Windows 2003 Enterprise Server SP2
Rational Performance Tester Agent V8.1 | IBM® xSeries® 345 | IA32/HT | 2/3.2 GHz | 4 GB | Windows 2003 Enterprise Server SP2
Rational Performance Tester Agent V8.1 | IBM x3550 | IA32/HT EM64T | 2/3.0 GHz (Hyper-Threaded) | 16 GB* | Windows 2003 Enterprise Server SP2

*Although these machines have 16 GB of memory, 32-bit systems running Windows Server 2003 are limited in the amount of memory that can be addressed (the Physical Address Extension patch was not used; see http://msdn.microsoft.com/en-us/library/aa366778(VS.85).aspx#physical_memory_limits_windows_server_2003 for more information).


User workload simulation

Rational Quality Manager supports a variety of user roles, so the workload for this exercise simulates a series of role-based scenarios that attempt to mimic simple real-world user tasks. Each user simulated through Rational Performance Tester was assigned a user role based on the set of workloads shown in Table 6 and Table 7. Note that two different workloads were used in this test (with and without report use cases); a short sketch after Table 7 shows one way such a weighted mix can be expressed.

Table 6: User simulation with report activities
User role | Percentage of total users | Related transactions | Transaction percentage
Quality Engineering Manager | 8% | Create test plan | 1%
 | | Browse test plans and test cases | 15%
 | | Search defect | 10%
 | | View dashboard | 20%
 | | Run tester report | 26%
 | | Run Test Case By Creation State report | 28%
Test Lead | 15% | Modify test environment | 5%
 | | Test plan editing | 12%
 | | Test case creation | 2%
 | | Test script review | 32%
 | | Test execution | 21%
 | | Defect search | 10%
 | | Run Test Case By Date report | 18%
Tester | 60% | Defect modify | 5%
 | | Defect search | 15%
 | | Request lab machine | 2%
 | | Test execution record browsing | 15%
 | | Test case editing | 3%
 | | Test script editing | 5%
 | | Test script creation | 2%
 | | Defect creation | 8%
 | | Test execution | 45%
Table 7: User simulation without report activities
User role | Percentage of total users | Related transactions | Transaction percentage
Quality Engineering Manager | 8% | Create test plan | 1%
 | | Review test script | 30%
 | | Browse test plans and test cases | 26%
 | | Search defect | 23%
 | | View dashboard | 20%
Test Lead | 15% | Modify test environment | 8%
 | | Test plan editing | 15%
 | | Test case creation | 5%
 | | Test script review | 35%
 | | Test execution | 24%
 | | Defect search | 13%
Tester | 60% | Defect modify | 5%
 | | Defect search | 15%
 | | Request lab machine | 2%
 | | Test execution record browsing | 15%
 | | Test case editing | 3%
 | | Test script editing | 5%
 | | Test script creation | 2%
 | | Defect creation | 8%
 | | Test execution | 45%
Lab Manager | 2% | Assign request | 35%
 | | Search machine | 20%
 | | View reservations | 45%
Lab Admin | 10% | Reserve lab machine | 4%
 | | Modify machine | 10%
 | | Machine search | 69%
 | | Create lab machine | 2%
 | | Fulfill request | 15%
Dashboard Viewer | 5% | View dashboard | 100%
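To illustrate how a weighted workload such as Table 7 can be driven, the following Python sketch (a hypothetical stand-in, not the actual Rational Performance Tester schedule configuration) assigns simulated users to roles by the role percentages and then picks each user's next transaction by the transaction percentages. Only two roles are spelled out; the remaining roles follow the same pattern.

```python
import random

# Role weights and per-role transaction weights, abridged from Table 7.
ROLE_WEIGHTS = {"Tester": 60, "Test Lead": 15, "Lab Admin": 10,
                "Quality Engineering Manager": 8, "Dashboard Viewer": 5,
                "Lab Manager": 2}

TRANSACTION_WEIGHTS = {
    "Tester": {"Test execution": 45, "Defect search": 15,
               "Test execution record browsing": 15, "Defect creation": 8,
               "Defect modify": 5, "Test script editing": 5,
               "Test case editing": 3, "Test script creation": 2,
               "Request lab machine": 2},
    "Dashboard Viewer": {"View dashboard": 100},
    # Remaining roles omitted for brevity; they follow the same structure.
}

def assign_roles(num_users: int):
    """Partition the user load into roles according to the role percentages."""
    roles, weights = zip(*ROLE_WEIGHTS.items())
    return random.choices(roles, weights=weights, k=num_users)

def next_transaction(role: str) -> str:
    """Pick the user's next transaction according to its weight for that role."""
    txns = TRANSACTION_WEIGHTS.get(role, {"View dashboard": 100})
    names, weights = zip(*txns.items())
    return random.choices(names, weights=weights, k=1)[0]

users = assign_roles(300)
print(users.count("Tester"), "of 300 simulated users are Testers")
print("A Tester's next transaction:", next_transaction("Tester"))
```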

User workload simulation use case descriptions

Table 8 describes the specific simulated user workload scenarios used in the benchmark test runs, grouped by user role.

Table 8: User transaction descriptions
Quality Engineering Manager
  • Test plan create: user creates a test plan, and then adds a description, business objectives, test objectives, two test schedules, a test estimate, quality objectives, and entry and exit criteria.
  • Review test script: user searches for a test script, opens it, reviews various sections, and then closes the script.
  • View dashboard: user logs in, views the dashboard, refreshes the dashboard, and then logs out.
  • Browse test plans and test cases: user browses assets by selecting View Test Plans and then configuring View Builder for a name search, opens the test plan found, reviews various sections, and then closes it. The user then searches for a test case by name, opens the test case found, reviews various sections, and then closes it.
  • Defect search: user searches for a specific defect by number, reviews the defect (pause), and then closes it.
  • Run tester report: user runs a custom report called Tester Report.
  • Run Test Case By Creation State report: user runs a custom report called Test Case by Creation State.

Test Lead
  • Modify test environment: user lists all test environments, and then selects one of the environments and modifies it.
  • Test plan editing: user lists all test plans, opens a test plan from the query result for editing, adds a test case to the test plan, edits a few other sections of the test plan, and then saves the test plan.
  • Test case creation: user creates a test case by opening the Create Test Case page, enters data for a new test case, and then saves the test case.
  • Test script review: user searches for a test script by name, opens it, reviews it, and then closes it.
  • Test execution: user selects View Test Execution Records by originator, selects one of the resultant test execution records from the list, starts execution, enters a pass or fail verdict, reviews the results, sets points, and then saves.
  • Defect search: user searches for a specific defect by number, reviews the defect (pause), and then closes it.
  • Run Test Case By Date report: user runs a custom report called Test Case By Date State.

Tester
  • Defect modify: user searches for a specific defect by number, modifies it, and then saves it.
  • Defect search: user searches for a specific defect by number, reviews the defect (pause), and then closes it.
  • Request lab machine: user enters a request by opening the Create Request page, enters information about the request, and then submits the request.
  • Test execution record browsing: user browses test execution records by viewing My Test Execution Records, then selects a sequence of four test execution records and opens the most recent results.
  • Test case editing: user selects My Test Cases, opens a test case in the editor, adds a test script to the test case, and then saves the test case.
  • Test script editing: user selects My Test Scripts, selects a test script, opens it for editing, modifies it, and then saves it.
  • Test script creation: user creates a test script by opening the Create Test Script page, enters data for a new test script, and then saves the test script.
  • Defect creation: user creates a defect by opening the Create Defect page, enters data for a new defect, and then saves the defect.
  • Test execution: user selects My Test Execution Records only, selects a test execution record from the list for execution, starts execution, enters a pass or fail verdict, reviews the results, sets points, and then saves.

Lab Manager
  • Assign request
  • Machine search: user performs a search for a machine based on its attributes, opens the machine for review, and then closes it.
  • View reservations: user selects All Reservations, views all reservations, and then types text into a filter field. From the results, the user opens one reservation, reviews it, and then closes it.
  • Run Requests By State report: user runs a custom report called Requests by State.
  • Run Requirements Not Covered report: user runs a custom report called Requirements Not Covered.

Lab Admin
  • Reserve lab machine: user reserves a lab machine by opening the Find Lab Resource page, selects a machine, and then reserves it.
  • Modify machine: user performs a search for a machine based on its attributes, opens a machine for editing, modifies it, and then saves it.
  • Machine search: user performs a search for a machine based on its attributes, opens it for review, and then closes it.
  • Create lab machine: user creates a lab machine by opening the Create Machine page, enters information about the machine, and then saves the new machine.
  • Fulfill request: user performs a search for a request, and then modifies the request to fulfill it.

Dashboard Viewer
  • View dashboard: user logs in, views the dashboard, and then logs out. This user provides some login and logout behavior to the workload, which is controlled in Rational Performance Tester to occur four times per hour per user.

Performance execution guidelines

The performance test was designed to add users every seven seconds during a ramp-up phase. When the full user load was reached, a looped sequence of actions representing the workload in the user workload simulation tables shown previously (Table 6 and Table 7) was applied; looping prevents excessive login and logout of users.

A Dashboard Viewer role (not a real product role) was added to the workload so that the effects of login and logout were not eliminated completely. The rate of execution was controlled by the page think time in Rational Performance Tester to create a page rate of approximately 30 pages per hour (pph) per user.

Each performance test consisted of a ramp-up period followed by a period at full user load. The full-load period always lasted one hour, regardless of the number of users. All measurements reported in this article come from that one-hour, full-load period.
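For reference, the ramp-up duration and the approximate think time implied by the target page rate follow from simple arithmetic. The Python sketch below assumes the seven-second user arrival interval and the roughly 30 pages-per-hour-per-user rate described above, and ignores server response time:

```python
# Back-of-the-envelope numbers for the test schedule described above.

RAMP_INTERVAL_S = 7          # one new user every 7 seconds during ramp-up
TARGET_PAGES_PER_HOUR = 30   # approximate per-user page rate at full load

def ramp_up_minutes(users: int) -> float:
    """Time to bring all users online at one user per RAMP_INTERVAL_S seconds."""
    return users * RAMP_INTERVAL_S / 60.0

def think_time_seconds(pages_per_hour: float) -> float:
    """Approximate seconds between page requests needed to hit the target rate."""
    return 3600.0 / pages_per_hour

for load in (100, 200, 300, 500):
    print(f"{load} users: ramp-up ~{ramp_up_minutes(load):.0f} min, "
          f"~{load * TARGET_PAGES_PER_HOUR:,} pages/hour at full load")
print(f"Per-page think time: ~{think_time_seconds(TARGET_PAGES_PER_HOUR):.0f} s")
```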


Performance test results

Figure 2a depicts the improvement (decrease) in average response times between the Rational Quality Manager V1.0.1 and V2.0 releases with a 50K repository under a 100-user load. Figure 2b shows the same comparison with a 300-user load. These charts also show that running custom reports on a separate reporting server provides better response times than running the built-in reports. Overall, Rational Quality Manager V2.0 shows an improvement of 36% using built-in reports and 88% using custom reporting on a separate server.

Figure 2a: Comparison of Rational Quality Manager V1.0.1 and V2.0 response times with 100 users and a 50K repository (with reports)
Response times of 1.5, 1.1, and 0.8 seconds
Figure 2b: Comparison of Rational Quality Manager V1.0.1 and V2.0 response times with 300 users and a 50K repository (with reports)
Response times of 1.75, 1.25, and 0.75 seconds

Figures 2c and 2d show server response times for the largest data environments, the 250K and 500K repositories, across a range of user loads (100, 200, 300, and 500). These tests were run without reporting activities.

Figure 2c: Response times of the 250K repository
587.2, 643.2, 707.0, and 1,310.0 milliseconds
Figure 2d: Response times of the 500K repository
684.9, 744.9, 853.8, and 2,117.4 milliseconds

Table 9, following, shows a number of performance metrics for the different test runs.

All values in this table were collected during the full-user-load period of each run, so they represent server performance with all users active.

Table 9: Performance metrics
Performance metric | 100 users | 200 users | 300 users | 500 users

50K Rational Quality Manager asset repository *1
Average response time (ms) | 793.8 | 748.6 | 770.7 | –
Success rate (%) | 99.9 | 100 | 99.6 | –
Run time [h:m:s] | 0:59:53 | 0:59:59 | 0:59:49 | –
Pages per hour per user | 32 | 33 | 33 | –

50K Rational Quality Manager asset repository *2
Average response time (ms) | 1,063.7 | 1,016.6 | 1,293.1 | –
Success rate (%) | 99.7 | 99.8 | 99.8 | –
Run time [h:m:s] | 0:59:57 | 0:59:57 | 0:59:57 | –
Pages per hour per user | 33 | 33 | 33 | –

250K Rational Quality Manager asset repository
Average response time (ms) | 578.2 | 643.2 | 707 | 1,310
Success rate (%) | 98.5 | 97.3 | 94 | 99.9
Run time [h:m:s] | 0:59:57 | 0:59:57 | 0:59:55 | 0:59:57
Pages per hour per user | 33 | 33 | 32 | 33

500K Rational Quality Manager asset repository
Average response time (ms) | 684.9 | 744.9 | 853.8 | 2,117.4
Success rate (%) | 98.6 | 95.5 | 94.3 | 99.8
Run time [h:m:s] | 0:59:57 | 0:59:57 | 0:59:57 | 0:59:57
Pages per hour per user | 33 | 33 | 32 | 32

Note:

*1: This group included user simulation with custom reports.
*2: This group included user simulation with built-in reports.
Only the 50K tests were run with report activities; the 250K and 500K tests were not.


Server resource utilization levels

Processor and memory utilization were monitored on the Rational Quality Manager application and database servers during the test runs. Figures 3a, 3b, 3c, and 3d show the utilization data for different user loads (100, 200, 300, and 500) and asset levels (50K, 250K, and 500K).

The most interesting data points are the processor utilization numbers for the machine running the Rational Quality Manager application. Figure 3c shows the processor utilization for both the Rational Quality Manager application server and the IBM DB2 database server with the 500K repository, with one line for each machine and the x-axis showing the varying user loads.

In Figure 3c, the pink line represents the data collected for the test running on the Rational Quality Manager server with a 500K repository:

  • The first point on the line, at about 10%, represents the overall average processor usage on the Rational Quality Manager server when 100 active users are accessing the server.
  • The second point, at about 18%, represents the overall average processor usage when 200 active users are accessing the server.
  • The third point, at about 28%, represents the average processor usage when 300 active users are accessing the server.
  • The fourth point, at about 57%, represents the average processor usage when 500 active users are accessing the server.

Because the target for average CPU utilization is no more than 60%, all user loads within this test are acceptable for this hardware configuration. The utilization values for the lower asset levels were also acceptable.

The IBM DB2 server runs on a machine with less processing power than the Rational Quality Manager application server, yet at no point does its processor utilization rise to stressful levels.
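One simple way to apply the 60% guideline to monitoring data is sketched below in Python; the utilization values are the approximate averages read from Figure 3c for the 500K repository, and the threshold is the 60% target mentioned above.

```python
# Check measured average CPU utilization against the 60% headroom target.
# Values are the approximate averages from Figure 3c (500K repository).

CPU_TARGET_PCT = 60.0

avg_cpu_by_load = {100: 10.0, 200: 18.0, 300: 28.0, 500: 57.0}

for users, cpu in avg_cpu_by_load.items():
    headroom = CPU_TARGET_PCT - cpu
    status = "OK" if cpu <= CPU_TARGET_PCT else "over target"
    print(f"{users:>3} users: {cpu:4.1f}% CPU, {headroom:4.1f}% headroom ({status})")
```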

Figure 3a: Processor utilization – 50K Rational Quality Manager and database servers
CPU percentage of 8, 12, and 16
Figure 3b: Processor utilization – 250K Rational Quality Manager and database servers
CPU percentages of 7, 15, 22, and 50
Figure 3c: Processor utilization – 500K Rational Quality Manager and database servers
CPU percentages of 10, 18, 28, and 57
Figure 3d: Memory utilization – 50K, 250K, and 500K for Rational Quality Manager server
Server memory utilization between 150 and 450 MB

Legal notices

IBM, the IBM logo, and Rational are trademarks of IBM Corporation in the United States, other countries, or both. Java and all Java-based trademarks are trademarks of Sun Microsystems, Inc. in the United States, other countries, or both. Microsoft and Windows are either a registered trademark or trademark of Microsoft Corporation in the United States and/or other countries. Other company, product, or service names may be trademarks or service marks of others.

© Copyright 2010 IBM Corporation. IBM Corporation, Software Group, Route 100, Somers, NY 10589. Produced in the United States of America. All Rights Reserved. IBM, the IBM logo, and Rational, are trademarks of International Business Machines Corporation in the United States, other countries or both. Other company, product and service names may be trademarks or registered trademarks or service marks of others. References in this publication to IBM products or services do not imply that IBM intends to make them available in all countries in which IBM operates. All statements regarding IBM future direction or intent are subject to change or withdrawal without notice and represent goals and objectives only. ALL INFORMATION IS PROVIDED ON AN AS-IS BASIS, WITHOUT ANY WARRANTY OF ANY KIND. The IBM home page on the Internet can be found at ibm.com.

