The replay of production workloads in a test environment helps DBAs to identify potential robustness and performance issues related to changes in the DB2 environment. Examples of these changes are:
- Migration or upgrade of the DB2 engine
- Refresh of table and index statistics
- Installation of new security plugins
- Modification of an application server
The test system is usually not identical to the production system. For example, it may have a smaller amount of memory. In order to realistically simulate a workload from the production system on the test system, you need to scale the number of users and the amount of instance memory, as well as the values of memory-related parameters.
The variables that will be used to formalize scaling are defined in the next section. The article then discusses scaling the number of users and the instance memory size. Finally, it describes scaling the remaining memory-related parameters and presents a sample automation script.
The following seven variables are defined and will be used to formalize the scaling process in the following sections.
- Ip – The size of the INSTANCE_MEMORY parameter on the production system
- It – The size of the INSTANCE_MEMORY parameter on the test system
- Up – The number of users on the production system
- Ut – The number of users on the test system
- N – The total number of memory-related parameters
- Mi,p – The size of the i-th memory-related parameter on the production system
- Mi,t – The size of the i-th memory-related parameter on the test system
This section discusses relations between the number of users and instance memory on the production and test systems. In order to ensure consistent behavior, the amount of memory allocated to a given user should be the same on the production and test systems.
If It is fixed, then the maximum number of users on the test system is given by Ut = It × Up ÷ Ip.
Consider the following example. Suppose that Ip = 100 MB, Up = 10 users, and It = 70 MB. The maximum number of users on the test system is then Ut = 70 × 10 ÷ 100 = 7 users.
If you want to mimic a specific number of users on the test system, then you can calculate the test system’s instance memory size that is needed to achieve this task, using It = Ut × Ip ÷ Up.
For example, if you want to simulate five users on the test system (Ut = 5 users), and Ip = 100 MB and Up = 10 users, then It = 5 × 100 ÷ 10 = 50 MB.
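The two formulas above can be captured in a couple of lines of code. The following is a minimal Python sketch (the article's own automation is in Perl); the function names are illustrative, not part of the article's script.

```python
# Hypothetical helpers illustrating the two scaling formulas from the text.

def max_test_users(i_p, u_p, i_t):
    """Maximum number of users on the test system: Ut = It * Up / Ip."""
    return i_t * u_p / i_p

def required_test_memory(i_p, u_p, u_t):
    """Instance memory needed on the test system: It = Ut * Ip / Up."""
    return u_t * i_p / u_p

# The worked examples above:
print(max_test_users(100, 10, 70))       # 7.0 users
print(required_test_memory(100, 10, 5))  # 50.0 MB
```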
Once the number of users and instance memory size are set, you can determine the size of the remaining memory configuration parameters.
The remaining memory configuration parameters can be determined using one of the following approaches.
- Constrained approach – This preserves the same ratios of all memory-related configuration parameters’ values on the production and test systems. This approach can be used to mimic a production system with fixed memory-related parameter values, or a production system with uniform workload.
- Relaxed approach – This preserves the same ratios of some memory-related configuration parameters’ values on the production and test systems. This approach can be used to test a production system with memory-related parameters managed by the DB2 engine.
A constrained approach is ideal if all the memory configuration parameters on the production system are fixed. Preserving the ratios of the memory parameters' values will yield a realistic test system. However, if some of the parameters are set to AUTOMATIC and/or managed by Self-Tuning Memory Management (STMM), then the values of the parameters may fluctuate as the workload changes, which implies that fixing the values will not yield a realistic picture. An exception to this rule exists. If your workload is uniform, then the parameters' values are stable. This means that even if a parameter is set to AUTOMATIC, its value will likely not change. In this case the constrained approach will yield good results, even if some parameters are managed automatically.
If some of the memory parameters are managed automatically and are volatile, the relaxed approach may provide a more realistic solution. However, since the instance memory sizes on the test and production systems differ, the behavior of the automatic memory managers may differ between the systems, so the scaling results will not always be perfect.
A compromise technique, mixing the constrained and relaxed approaches, is to capture the sets of parameter values under different loads on your production system, and then run similar scaled loads on your test system with scaled parameter values for each set.
The rest of the section is structured as follows. First, additional details are provided on the implementation of the constrained and relaxed approaches. Then the automation sample script section describes a sample Perl script that automates these approaches.
In order to preserve the same ratios of all memory parameters on the test system, you should disable STMM and set AUTOMATIC values to fixed values, applying the same ratios for memory parameters on the test system as on the production system. The process can be summarized using the following algorithm.
Disable STMM on the test system by executing
    "UPDATE DB CFG USING self_tuning_mem OFF"
For i = 1 .. N
    Set Mi,t = It × Mi,p ÷ Ip
    Execute "UPDATE [DB|DBM] CFG USING i-th_config_keyword Mi,t"
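The constrained algorithm above can be sketched in a few lines. This is an illustrative Python version (the article ships a Perl script); the parameter names, sizes, and the dictionary layout are assumptions for the example, and the statements are printed rather than executed against DB2.

```python
# Hedged sketch of the constrained approach: scale every memory-related
# parameter by It/Ip and emit fixed-value UPDATE statements.
def constrained_updates(i_p, i_t, prod_params):
    """prod_params maps a config keyword to (scope, size on production),
    where scope is "DB" or "DBM". Sizes are in the parameter's native units."""
    stmts = ["UPDATE DB CFG USING self_tuning_mem OFF"]  # disable STMM first
    for keyword, (scope, m_p) in prod_params.items():
        m_t = m_p * i_t // i_p  # Mi,t = It * Mi,p / Ip, rounded down
        stmts.append(f"UPDATE {scope} CFG USING {keyword} {m_t}")
    return stmts

# Illustrative values only (Ip = 100, It = 70, two sample parameters):
for s in constrained_updates(100, 70, {"sortheap": ("DB", 256),
                                       "mon_heap_sz": ("DBM", 90)}):
    print(s)
```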
The relaxed approach is similar in nature to the constrained approach. However, you will allow the test system to manage memory parameters that were managed automatically on the production system. You need to enable STMM and initialize memory parameters with scaled values. If the values were managed automatically on the production system, then let them be managed automatically on the test system. Otherwise, you should manage them manually.
The process can be summarized as follows.
Enable STMM on the test system by executing
    "UPDATE DB CFG USING self_tuning_mem ON"
For i = 1 .. N
    Set Mi,t = It × Mi,p ÷ Ip
    If the i-th parameter was set to AUTOMATIC on the production machine Then
        Execute "UPDATE [DB|DBM] CFG USING i-th_config_keyword Mi,t AUTOMATIC"
    Else
        Execute "UPDATE [DB|DBM] CFG USING i-th_config_keyword Mi,t"
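The relaxed algorithm differs from the constrained one only in enabling STMM and appending AUTOMATIC for parameters that were automatic on production. A matching Python sketch, under the same illustrative assumptions as before (hypothetical parameter names and sizes, statements printed rather than executed):

```python
# Hedged sketch of the relaxed approach: scale each parameter by It/Ip and
# keep the AUTOMATIC setting for parameters that were automatic on production.
def relaxed_updates(i_p, i_t, prod_params):
    """prod_params maps a config keyword to (scope, size, was_automatic)."""
    stmts = ["UPDATE DB CFG USING self_tuning_mem ON"]  # enable STMM first
    for keyword, (scope, m_p, was_automatic) in prod_params.items():
        m_t = m_p * i_t // i_p  # Mi,t = It * Mi,p / Ip, rounded down
        suffix = " AUTOMATIC" if was_automatic else ""
        stmts.append(f"UPDATE {scope} CFG USING {keyword} {m_t}{suffix}")
    return stmts

for s in relaxed_updates(100, 70, {"sortheap": ("DB", 256, True),
                                   "mon_heap_sz": ("DBM", 90, False)}):
    print(s)
```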
A sample Perl script, included with this article (see the memscaling.zip download), automates the constrained and relaxed approach algorithms. The script calculates the scaling factor and outputs "update [DB|DBM] cfg" statements for the test system to a text file.
Once the data is captured, execute the script on any computer, providing the following options.
--approach_type or -a <c|r>
- The type of memory scaling approach: 'c', constrained, or 'r', relaxed.
--db_cfg or -db <file containing db config>
- The name of the file containing database configuration.
--dbm_cfg or -dbm <file containing dbm config>
- The name of the file containing database manager configuration.
--output or -o <output file>
- The name of the output configuration file.
--param or -p <file containing parameters of interest>
- The name of the file containing memory-related configuration parameters of interest being scaled.
--test_inst_mem_sz or -s <test instance memory size>
- The size of the memory available for the DB2 instance on the test system in 4K pages.
--help or -h
- Provides additional documentation.
In order to use the script, you need to store the production system's database and database manager configurations in text files. For example, you could use the following commands.
$ db2 GET DB CFG > db.cfg
$ db2 GET DBM CFG > dbm.cfg
The file containing the memory-related configuration parameters of interest should have the following format: one parameter per line (namely, the keyword that appears enclosed in parentheses in the output of "GET DB|DBM CFG"), in no particular order, as shown in the following example.
instance_memory
db_memory
mon_heap_sz
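Extracting those parenthesized keywords and their values from the captured configuration files is straightforward to script. The following is a minimal Python sketch (the article's tool is Perl); the sample line mimics the usual "GET DB CFG" layout but is illustrative, not captured from a real system.

```python
import re

# Match the parenthesized config keyword and everything after "=".
# Earlier parenthesized text such as "(4KB)" is skipped because it is
# not immediately followed by "=".
LINE_RE = re.compile(r"\(([A-Z0-9_]+)\)\s*=\s*(.+)$", re.IGNORECASE)

def parse_cfg(text):
    """Return {keyword_lowercase: value_string} for each config line.
    Values may be plain numbers or strings such as 'AUTOMATIC(163840)'."""
    params = {}
    for line in text.splitlines():
        m = LINE_RE.search(line)
        if m:
            params[m.group(1).lower()] = m.group(2).strip()
    return params

sample = "Sort list heap (4KB)                         (SORTHEAP) = 256"
print(parse_cfg(sample))  # {'sortheap': '256'}
```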
This article has described scaling of the number of users, DB2 instance memory, and memory-related configuration parameters on a test DB2 system, based on configuration of a production DB2 system. This information should help you to create a realistic test system that is scaled from production, improving analysis of changes in the test environment.
|Description||Name||Size||Download method|
|Sample Perl script automating the algorithms||memscaling.zip||4KB||HTTP|
- Visit the DB2 V9.7 Information Center to read a summary of DB2
- Read the Self-tuning memory overview in the DB2 V9.7 Information Center.
- Visit the DB2 V9.7 Information Center for a description of the get DB cfg command.
- Visit the DB2 V9.7 Information Center for a description of the get DBM cfg command.
- Participate in the discussion forum.
Enzo Cialini is a Senior Technical Staff Member in the IBM Information Management Software division at the IBM Toronto Laboratory. He is currently the Chief Quality Assurance Architect responsible for management and technical test strategy for DB2 LUW Engine (Warehouse and OLTP) and Customer Operational Profiling. Enzo joined IBM in 1991 and has been working on the DB2 development team since 1992. He has more than 20 years experience in software development, testing and support. Enzo is actively engaged with customers and the field in OLTP and Warehouse deployments, has authored The High Availability Guide for DB2 and has numerous patents and publications on DB2 and Integration.
Dr. Miranskyy is a Quality Assurance developer in the IBM Information Management division at the IBM Toronto Laboratory. His work and research interests are in the area of mitigating risk in software engineering, focusing on software quality assurance, program comprehension, software requirements, software architecture, and project risk management. Andriy received his Ph.D. in Applied Mathematics at the University of Western Ontario. He has 15 years of software engineering experience in information management and pharmaceutical industries. Andriy has numerous publications and patents in the field of software engineering.