What's new in InfoSphere Workload Replay for DB2 for z/OS v2.1

Improving test coverage by using production workload replays

InfoSphere® Optim™ Workload Replay for DB2® for z/OS® (Workload Replay) extends traditional database test coverage. Now you can capture production workloads and replay them in your test environment without setting up a complex client and middleware infrastructure. Version 2.1 of Workload Replay was released in October 2013, with key enhancements that we describe in this article.


Patrick Titzler (ptitzler@us.ibm.com), Technical Enablement, Data Server tools, IBM

Patrick currently leads technical enablement for InfoSphere Workload Replay for DB2 for z/OS and DB2 for Linux, UNIX, and Windows, guides customers in their product evaluation efforts, and supports best practice deployments.

Hassi Norlén (hnorlen@us.ibm.com), Information Developer, IBM

Hassi Norlén leads the information development for InfoSphere Workload Replay, and specializes in up-and-running documentation and user interface development by using the progressive disclosure methodology. Hassi is an information developer with InfoSphere Optim Data Management Solutions and is based in Washington, DC.

19 December 2013

Also available in Chinese


InfoSphere Workload Replay can capture local and remote DB2 for z/OS production workloads plus all the information that is needed to replay them:

  • The original application timing
  • Order of execution
  • Transaction boundaries
  • Isolation levels
  • Other SQL and application characteristics
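Conceptually, each captured SQL event carries this replay context with it. The following Python sketch illustrates the kind of per-statement metadata involved; the field names are hypothetical and illustrative only, not Workload Replay's actual capture format:

```python
from dataclasses import dataclass

# Hypothetical sketch of the per-statement context a replay must preserve.
# Field names are illustrative, not the product's actual format.
@dataclass
class CapturedStatement:
    sequence: int          # original order of execution
    start_offset_ms: int   # timing relative to the start of the workload
    transaction_id: str    # groups statements within transaction boundaries
    isolation_level: str   # e.g. "CS", "RR", "RS", "UR"
    sql_text: str          # the statement or stored procedure call

stmt = CapturedStatement(
    sequence=1,
    start_offset_ms=0,
    transaction_id="txn-001",
    isolation_level="CS",
    sql_text="SELECT 1 FROM SYSIBM.SYSDUMMY1",
)
```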

Built-in reporting tools identify accuracy and performance differences between replayed workloads, which reduces the effort that is required to analyze how changes in the z/OS environment might impact critical workloads.

Figure 1. Capturing and replaying DB2 for z/OS workloads
Diagram of actions to capture and replay DB2 for z/OS workloads

Key themes of this release include improved performance, greater deployment flexibility, and more fine-grained control of the capture and replay process.

64-bit support

Version 2.1 of InfoSphere Workload Replay takes advantage of the Guardium V9 GPU 50 64-bit support, which increases scalability and workload processing capacity. When you deploy InfoSphere Workload Replay for the first time, the 64-bit version is installed automatically. If you plan to upgrade from version 1.1 to version 2.1, you can continue to run on the 32-bit version of Guardium or upgrade to 64-bit.

Export and import workloads

In an enterprise environment, production and test environments are likely isolated from each other for security reasons. Likewise, environments might be geographically distributed with only limited network connectivity between servers. Previously, with a centralized Workload Replay v1.1 server deployment, it was neither possible nor practical to capture and replay workloads in a decentralized environment.

Figure 2. Centralized server deployment cannot be used in decentralized environments
Diagram of managing isolated capture and replay environments

To allow for capturing and replaying the same workload in separated environments, Workload Replay v2.1 introduces workload export and import. You can use this new feature to securely transfer captured workloads between servers in decentralized deployments.

Limited or no network connectivity might exist between the source and the target Workload Replay servers. In those cases, intermediary storage areas on external FTP or secure copy (SCP) servers that are accessible to the source or the target server temporarily store the workload. You can move workloads between these storage areas manually by using physical media, such as hard disks or DVDs.
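When you move a workload file manually on physical media, it is prudent to verify that the copy arrived intact before you import it. This generic sketch uses Python's standard hashlib module to compare checksums on both sides; it is a general-purpose technique, not a feature of the Workload Replay CLI:

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute a SHA-256 digest of a file, reading in chunks so that
    multi-gigabyte workload files never need to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Compute the digest on the source side before copying, then again on the
# target-side staging server; matching digests confirm an intact transfer.
```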

Figure 3. Transferring captured workloads in decentralized deployments
Diagram of transferring captured workloads between isolated environments

Export a workload

Assume that you captured a production workload. You review it by using the capture report, and now you want to replay the workload in a test environment that is not connected to the production environment. The captured workloads are displayed in the Workload Replay web console.

Figure 4. Listing captured workloads in the source server
Screen capture of a list of captured workloads in the source server

Before a workload can be moved to intermediary storage, Workload Replay packages the workload with the required information from the source environment. The user ID that you use to export the workload must have the Can Export Workload privilege on the source subsystem to invoke the export task in the Workload Replay web console.

Figure 5. Creating a workload file on the source Workload Replay server
Screen capture of creating a workload file on the source Workload Replay server

During workload export the required information is assembled in a self-contained, encrypted workload file. You can then transfer the encrypted file from the source Workload Replay server to any accessible FTP or SCP server.

Figure 6. Identifying workloads to upload to external storage
Screen capture of how to identify workloads to upload to external storage

Use the upload workload file CLI command to upload an encrypted workload file to an FTP or SCP server.

Figure 7. Uploading a workload file to an FTP server from the source
Screen capture of how to upload a workload file to an FTP server from the source in the CLI console

When the workload file is uploaded successfully to intermediary storage, remove the encrypted file from the source Workload Replay server by deleting the exported workload in the web console.

Figure 8. Removing an exported workload file from the source Workload Replay server
Screen capture of how to remove an exported workload file on the source Workload Replay server

When network connectivity is nonexistent or slow between the target server and the intermediary storage, move the workload file to an FTP or SCP server that the target Workload Replay server can access.

Import a workload

To import a captured workload into a target Workload Replay server, use the CLI console to download the encrypted workload file to the Workload Replay server, and then import the workload.

Issue the download workload file command in the CLI console and provide the required information. Include the connectivity information for the FTP or SCP server where the workload file is located and the workload file name.

Figure 9. Downloading a workload file from an FTP server onto the target server
Screen capture of how to download a workload file from an FTP server

The download operation stores the encrypted workload file in an internal working directory on the target Workload Replay server.

You can display a list of imported workload files with the show workload files CLI command.

Figure 10. Identifying workload files to import
Screen capture of how to identify workload files to import from the CLI console

Import the workload file to the target server with the Import task in the SQL Workloads tab of the target’s Workload Replay web console. You can give the imported workload a different name, for example, if a workload with the original name already exists on the target server.

Figure 11. Importing a workload file
Screen capture of how to import a workload file

The import operation decrypts and unpacks the encrypted workload file and associates the workload with a database connection. The user ID that you use to import the workload must have the Can Import Workload privilege for the associated subsystem.

Figure 12. Reviewing workload tasks for an imported workload
Screen capture of how to review workload tasks for an imported workload

After a captured workload is successfully imported, create a capture report or transform the imported workload in preparation for replaying the workload on the target Workload Replay server.

The workload file remains on the server after the workload is imported. To free up storage on the target server, you can delete the downloaded workload file with the delete workload file CLI command.

Figure 13. Deleting an imported workload file
Screen capture of how to delete an imported workload file from the CLI console

Consider using the export and import feature to temporarily or permanently archive captured workloads in a secure location outside the Workload Replay server. This way, you can establish a workload library that can be shared as needed.

Capture LOB and XML data

As a workload is captured, the Workload Replay server collects information about the host variables and parameters that are used in each executed SQL statement or stored procedure call. If the application manipulates large object (LOB) or XML data, large amounts of data are streamed by S-TAP to the Workload Replay server and stored there.

In certain application scenarios, the return code and result count (rows that are returned or modified) of the SQL statements (or stored procedure calls) might not depend on the actual input LOB or XML values. Because the Workload Replay report analysis does not compare data values in result sets, you might choose not to capture the actual input values and instead use generated dummy data. Dummy data eliminates the need to transfer and store the original LOB and XML data on the server.

Let’s look at two examples.

  1. An application periodically inserts 0.5 KB CLOB data that contains item descriptions. The DB2 return code for the INSERT operation is identical, irrespective of whether the CLOB reads "Feel the incredibly soft touch…" or "123456789012345678901234567890…". If the results of subsequent SQL executions do not depend on the actual description text, capture only the length information of the description and reduce the amount of data to stream, store, and process.
  2. Another application might call a stored procedure that processes an incoming XML document and, depending on its content, returns one or more rows. The stored procedure's business logic might return different result set sizes if you call the stored procedure with generated data. In this case, replaying the workload with generated data is problematic because the replay results are potentially impacted and the Workload Replay comparison reports are not accurate.
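The length-preserving dummy data in the first example can be sketched as follows. The generation scheme here (cycling the digits 1 through 0) is an assumption for illustration; the product's actual generator may differ:

```python
from itertools import cycle, islice

def dummy_clob(original: str) -> str:
    """Generate placeholder CLOB data with the same length as the original
    value by cycling the digits 1-9 and 0. Illustrative sketch only; this
    is an assumed scheme, not Workload Replay's actual generator."""
    return "".join(islice(cycle("1234567890"), len(original)))

description = "Feel the incredibly soft touch of our new microfiber towels."
generated = dummy_clob(description)
# Only the length is preserved; the content carries no real information.
assert len(generated) == len(description)
```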

Generally speaking, determine whether the following criteria apply to your environment:

  • Is the accuracy of the replay results impacted if generated LOB or XML data is used during replay?
  • Does the replay environment need to contain the exact same data as the source environment after a workload is replayed?
  • Can the existing (hardware and network) infrastructure support the following?
    • Timely streaming of large amounts of data between the capture environment and the Workload Replay server (during capture) and between the Workload Replay server and the target environment (during replay).
    • Storage of large amounts of data on the Workload Replay server.

If the use of generated LOB and XML data during workload replay is unacceptable, configure capture to record actual data values (up to a maximum of 32 KB; beyond that value, only length information is collected).
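The 32 KB cut-off can be pictured with the following sketch. The helper function and the record layout are hypothetical, used only to make the rule concrete; the real capture component implements this internally:

```python
# Values larger than 32 KB are recorded by length only (per the article).
MAX_CAPTURED_BYTES = 32 * 1024

def capture_lob_value(value: bytes) -> dict:
    """Return what gets recorded for a LOB/XML input value: the actual
    bytes up to 32 KB, otherwise only its length. Hypothetical helper
    for illustration, not a Workload Replay API."""
    if len(value) <= MAX_CAPTURED_BYTES:
        return {"value": value, "length": len(value)}
    return {"value": None, "length": len(value)}

small = capture_lob_value(b"x" * 512)          # captured in full
large = capture_lob_value(b"x" * (64 * 1024))  # length only
```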

Figure 14. Defining whether XML and LOB data is captured
Screen capture of how to define whether XML and LOB data is captured

LOB and XML data values are currently not displayed in the capture, transform, or comparison reports.

Filter captured SQL when replaying workloads

While a workload replay is in progress, extraneous SQL activity can occur on shared subsystems. For example, a performance monitor might run in the background or other applications might execute. Version 2.1 of InfoSphere Workload Replay introduces filters for you to limit the SQL execution information that is collected during workload replay.

If you anticipate SQL activity on the replay system that is irrelevant to your evaluation, consider defining a replay capture filter.

Because filters do not restrict the SQL that executes while a replay is in progress, this extraneous SQL might still impact the behavior (and performance) of the workload that is replayed.

If no filters are defined (or you only use less restrictive filters than during workload capture), new SQL might show in the workload comparison report. Likewise, if more restrictive filters are defined for workload replay, the comparison report might report SQL as missing.
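The effect of filter restrictiveness on the comparison report can be illustrated with a simple set comparison. This is a conceptual sketch of the "new" versus "missing" classification, not the report's actual algorithm:

```python
def compare_sql(captured: set, replayed: set) -> dict:
    """Classify statements the way a comparison report might: SQL seen only
    during replay shows up as 'new', SQL seen only during capture shows up
    as 'missing'. Conceptual illustration only."""
    return {
        "new": replayed - captured,      # e.g. unfiltered monitor SQL
        "missing": captured - replayed,  # e.g. filtered out on replay
        "matched": captured & replayed,
    }

captured = {"SELECT * FROM ORDERS", "UPDATE STOCK SET QTY = QTY - 1"}
# A background monitor's query slipped past a less restrictive replay filter,
# and a more restrictive filter dropped the UPDATE from the replay capture:
replayed = {"SELECT * FROM ORDERS", "SELECT 1 FROM SYSIBM.SYSDUMMY1"}
result = compare_sql(captured, replayed)
```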

Define replay capture filters in the replay wizard of the Workload Replay web console.

Figure 15. Reviewing workload replay options
Screen capture of reviewing workload replay options

You can retrieve the original capture filter and modify it as necessary to account for differences in your replay environment.

Figure 16. Defining replay capture filters
Screen capture of defining replay capture filters

Keep in mind that some filters incur higher S-TAP overhead. For example, schema-level filters require more parsing of the SQL statements to determine whether they reference the specified database object.

Map static SQL collection IDs when you transform captured workloads

If your workload takes advantage of static SQL, the associated packages (also called application packages) must be present in your source environment. Starting with version 2.1, InfoSphere Workload Replay supports collection mapping to allow for accurate workload replay of statically executed SQL in environments where packages are in different collections.
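Conceptually, collection mapping is a lookup that rewrites a captured collection ID before the corresponding package is executed on the target. The sketch below uses hypothetical collection names and an assumed pass-through behavior for unmapped collections, purely for illustration:

```python
# Hypothetical mapping from captured (source) collection IDs to the
# collections that hold the same packages on the target subsystem.
COLLECTION_MAP = {
    "PRODCOLL": "TESTCOLL",
    "BATCHCOLL": "TESTBATCH",
}

def map_collection(captured_collection: str) -> str:
    """Return the target collection for a captured collection ID; unmapped
    collections pass through unchanged (assumed behavior, for illustration)."""
    return COLLECTION_MAP.get(captured_collection, captured_collection)
```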

You can map captured collection information in the workload transformation wizard of the Workload Replay web console.

Figure 17. Reviewing workload transformation options
Reviewing workload transformation options

When you map collection IDs, keep in mind that the replay user ID (or user IDs) must be authorized to execute packages in the mapped collection.

Figure 18. Defining collection ID mappings
Screen capture of defining collection ID mappings

For more information about these new features in InfoSphere Workload Replay for DB2 for z/OS, see the New features and enhancements document that is listed in Resources.


Conclusion

The key features in version 2.1 of InfoSphere Workload Replay for DB2 on z/OS are a blend of performance improvements, SQL capture and replay enhancements, and support for decentralized server deployments. With these features, you can use an even broader range of workloads to quickly analyze in a pre-production environment what impact any changes in your DB2 for z/OS environment might have on applications that run in production.





