IBM® WebSphere® Service Registry and Repository (hereafter called WSRR) lets you differentiate different runtime environments within an organization, and maintain web services and metadata appropriately for those different target runtime environments. For example, one registry can be used when building and developing web services, another when testing those web services internally,
and a third when the services are deployed to production and used by customers. The registry in which services are first discovered, developed, and governed is called the governance registry.
The registry to which the final production version of the service is deployed is called the runtime registry, and any interim registries used for testing are termed staging registries.
To benefit from this article, you should have a working knowledge of WSRR. If you would like more general information about WSRR before continuing, see the WSRR information portal or the WebSphere Service Registry and Repository developer resources page.
The mechanism that WSRR provides for this propagation of service data and metadata from a governance registry to different target runtime registries is termed promotion, and is a well-established feature of WSRR. Data is always promoted from the governance registry to each runtime registry, and not from runtime registry to runtime registry, as shown below:
Figure 1. Promotion overview
A topology for governance, staging, and production registries is entirely configurable within your WSRR configuration profile. The style most appropriate will depend on your particular environment. The Governance Enablement Profile (GEP) provides a reference profile that you can adapt as needed. For more information on target registry environments, see the IBM developerWorks article WebSphere Service Registry and Repository topologies explained.
Synchronous and manual promotion
WSRR provides two different options for delivering promoted data to a target registry:
Synchronous promotion means that WSRR automatically populates the target runtime registry. In this case, once the promotion for the target object has been triggered, the runtime is populated with the object and all other objects related to it. If there is any error during the promotion process, then the target object will not be transitioned to the promoted state.
Manual promotion involves triggering the promotion of a target object, which causes a zip file to be created on the system hosting the governance registry. This file contains details of the promoted object and all other objects related to it. The location and name of this zip file are configurable in the promotion properties. As soon as this zip file has been created, the target object on the governance registry will be transitioned to the promoted state. However, the promotion on the target runtime will not be completed until the zip file has been manually copied to the runtime registry and then imported through the WSRR UI.
For more information on how to use the GEP to promote a simple service, see Tutorials for the governance enablement profile (GEP) in the WSRR information center.
Preparing for WSRR promotion
When loading WSDL and XML schemas into WSRR, various logical objects will be derived from them, including Port Types, Service Bindings, and Service Interfaces. A number of additional correlated objects are also created in this process and used to group together various logical objects. For example, the ServicePort correlated object connects the WSDLPort to the name, namespace, and version of the service from which the port is derived.
You can see that it is not the size of the WSDL file being loaded that determines how complex its promotion will be, but rather the number of objects that depend on the objects being promoted. For example, a simple WSDL may generate 60 dependent objects, whilst a very complex one might generate over 40,000.
To promote a WSDL you should manage it with a Service Version object, as described in Registering the WSDL for a service in the WSRR information center. The Service Version encapsulates all of the information relating to a service within a governed collection, which represents a single defined version of the service. The collection associated with the Service Version includes information such as the Service Level Definition and any Service Level Agreements.
A Service Version object can be managed through various life cycle states defined in your profile. Promotion is triggered by transitioning the Service Version to a specific life cycle state that has been identified in the WSRR promotion properties configuration. For example, this promotion point may be the life cycle state that represents approval for deployment to the target environment.
The Promotion properties configuration specifies details of the promotion, such as whether it will be manual or synchronous, and, if it is synchronous, which system the data will be promoted to. When the promotion is complete, the target runtime registry will be populated with the various objects managed by the Service Version, as shown below:
Figure 2. Promotion process
Promotion performance considerations
Along with migration, promotion is one of the most resource-intensive WSRR operations, and it can place significant loads on:
- WSRR governance and runtime servers
- WSRR governance and runtime databases
- The network between WSRR and its database servers
If you expect to promote or re-promote data often, then each of these areas should be as efficient as possible. Optimizing them from the point of view of promotion will also improve overall WSRR performance. The rest of this article covers these three areas.
WSRR governance and runtime servers
A number of factors can improve promotion performance on the WSRR governance and runtime servers. Some of these apply to the governance registry, some to the runtime registry, and some to both. The sections below describe each of these factors.
Optimized synchronous and manual promotion (governance server)
Prior to WSRR V8.0, synchronous and manual promotion used a temporary database during both the export and import phases of promotion. Starting with WSRR V8.0, this temporary database has been removed from the synchronous promotion process, making it much more efficient. This improved efficiency is also provided as an option for manual promotion in subsequent WSRR V8.0 fix packs.
This new technique has dramatically improved promotion times. For example, one test showed an 85% improvement in synchronous promotion time. Actual performance improvements will vary depending on the nature of the WSRR data. Starting with WSRR V8.0, this new optimized synchronous promotion replaces the temporary database transfer. The optimized manual promotion, on the other hand, is delivered through Fix Packs subsequent to V8.0, and therefore does not replace the default use of the temporary database during manual promotion, but instead makes it available as an option.
The optimized synchronous and manual promotions are available on earlier releases of WSRR through product updates. A summary of the rollout for these earlier releases is shown in Figure 4 below. For these earlier releases, optimized promotion does not become the default after the product is upgraded, but can be specified with a new promotion type option in the promotion properties configuration. For synchronous promotion, the option is sync-optimized, and for manual promotion, it is manual-optimized. Here is an example of how to specify the sync-optimized promotion in the promotion properties configuration:
Specifying sync-optimized promotion in the promotion properties configuration
<environments>
    <environment name="http://www.ibm.com/xmlns/prod/serviceregistry/6/1/GovernanceProfileTaxonomy#Staging">
        <promotion>
            <type>sync-optimized</type>
        </promotion>
        <servers>
            <server name="fred.bloggs.com" port="2809"/>
        </servers>
        <security enabled="true">
            <wsrrUser>aWsrrUser</wsrrUser>
            <wsrrPassword>(DES)xxxxxxx==</wsrrPassword>
        </security>
    </environment>
    ...
</environments>
To revert to the original promotion method in these updated versions of V6.3 to V7.5, change the promotion type back to sync. Note that:
- In V8, synchronous promotion is optimized regardless of whether the promotion type is specified as sync or sync-optimized.
- In V8.0, manual promotion defaults to promotion based on a temporary Derby database, but you can change it to optimized promotion by specifying the promotion type as manual-optimized in the promotion properties.
- The sync-optimized option is not available in WSRR V6.2 or earlier, and there are currently no plans to provide it.
- The manual-optimized option is not available in WSRR V6.3 or earlier, and there are currently no plans to provide it.
Minimal synchronous and manual promotion (governance server)
In addition to optimized promotion, another new promotion type called minimal promotion has been introduced through WSRR V8.0 fix packs. Minimal promotion can be much more efficient when a governed collection that is being promoted contains supporting objects, which are objects related to an object in the collection being promoted but not actually a member of that collection. During a minimal promotion, a supporting object is marked as belonging to a governed collection so its governance state cannot be changed, and it will not be incorporated into other governed collections that relate to it on the runtime system. Figure 3 shows how the xsd supporting object has been promoted, but the rest of the governed collection that it belongs to has not:
Figure 3. Minimal promotion
If a supporting object's original governed collection happens to be promoted at a later date, the existing object on the runtime will be automatically incorporated back into its governed collection. When its governed collection is promoted, the marked object is simply replaced by the incoming version and it just goes back to being a normal governed object.
When using WSRR V8.0, if minimal promotion is specified, the promotion is also optimized, because the old promotion style using a temporary database was removed starting with WSRR V8.0. For V8.0, here are the manual and synchronous promotion options:
- sync -- Optimized promotion with no temporary database. In V8, it can also be specified as sync-optimized
- sync-minimal or sync-optimized-minimal -- Optimized promotion which has also been minimized
- manual -- Manual promotion that uses a temporary database
- manual-minimal -- Manual promotion using a temporary database which has also been minimized
- manual-optimized -- Optimized manual promotion
- manual-optimized-minimal -- Optimized manual promotion that has also been minimized
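To select one of these options, set the promotion type in the promotion properties configuration. Here is a sketch, modeled on the earlier sync-optimized example, that selects optimized, minimized manual promotion; the environment name is a placeholder taken from that earlier sample:

```xml
<environment name="http://www.ibm.com/xmlns/prod/serviceregistry/6/1/GovernanceProfileTaxonomy#Staging">
    <promotion>
        <type>manual-optimized-minimal</type>
    </promotion>
</environment>
```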
The minimal synchronous and manual promotions are also being made available on earlier releases of WSRR through product updates. A detailed summary of the rollout for these earlier releases is shown in Figure 4 below.
For WSRR V7.0 and 7.5 with the relevant fix pack applied, the synchronous promotion types listed above are slightly different, since they still retain the original promotion option using a temporary database:
- sync -- Synchronous promotion using a temporary database
- sync-minimal -- Synchronous promotion using a temporary database that has also been minimized
- sync-optimized -- Synchronous promotion with no temporary database (much faster than sync)
- sync-optimized-minimal -- Synchronous promotion with no temporary database that has also been minimized
The benefits of minimal promotion depend on the nature of the dependencies on separate governed collections.
Optimized and minimal promotion rollout
This table summarizes the rollout of optimized and minimal promotion, and what is available in which version of WSRR:
Figure 4. Optimized and minimal promotion rollout
Optimized and minimal promotion option recommendations
Optimized promotion will enhance the performance of all promotions, so it is strongly recommended that you use it wherever possible. For WSRR V8, the recommended promotion types are therefore sync and manual-optimized. You may also choose to minimize these by specifying the promotion type as sync-minimal or manual-optimized-minimal.
For WSRR V7.0 and V7.5, the recommended promotion types are sync-optimized and manual-optimized. You can also minimize these by specifying the promotion type as sync-optimized-minimal or manual-optimized-minimal.
The benefits of minimal promotion depend on the dependencies within the data being promoted and on how that data is organised into governed collections. If promotions take a long time, or you see many more objects than you expect being promoted, then minimal promotion may reduce the amount of data promoted. There is no problem running with minimal promotion enabled, but the benefits will be seen only when the objects being promoted refer to supporting objects in other governed collections, which may in turn depend on additional governed collections.
Use 64-bit WebSphere Application Server (governance and runtime servers)
A 64-bit application server is strongly recommended for non-test environments, because memory available to the application server can be increased over the 1560 MB limit of a 32-bit server. You may not need to exceed this limit for smaller registries, but adopting a 64-bit server from the outset makes it easier to configure more memory to the WebSphere Application Server JVM in the future should it become necessary.
Increase WebSphere Application Server JVM limits (governance and runtime servers)
When WSRR is installed, the memory limit for the WebSphere Application Server JVM is set to 1024 MB. For promotion of large services, this limit may be too low and impair performance. Therefore it is recommended to check how much of this memory is being used by WebSphere Application Server. For example, you can use the following command on an AIX machine:
Checking amount of memory being used by WebSphere Application Server on AIX
# svmon -P <pid> -O summary=basic,unit=MB
Unit: MB
-------------------------------------------------------------------------------
     Pid Command          Inuse      Pin     Pgsp  Virtual
  598240 java           2445.38     32.3        0  2207.06
#
Here, <pid> is the process ID of the WebSphere Application Server process. In this example, that process is using 2445 MB of memory.
If the memory being used is close to or greater than the JVM limit, the JVM maximum should be increased (which requires a server restart). For more information, see Tuning the IBM virtual machine for Java in the WebSphere Application Server information center. When changing the maximum JVM limit, it is a good rule to also change the initial JVM limit so that it is 25% of the new maximum. You should also check the settings for both the governance and runtime registries, although from a promotion point of view, the memory requirements will usually be greater on the governance machine.
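As a worked example of the 25% rule above (the 4096 MB maximum is an assumed value, not a recommendation):

```shell
# Assumed new JVM maximum heap, in MB
max_heap_mb=4096

# Rule of thumb from the text: initial heap = 25% of the maximum
initial_heap_mb=$((max_heap_mb / 4))

echo "$initial_heap_mb"   # prints 1024
```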
Timeout settings (governance server)
Promotion of significant amounts of data can sometimes fail because default timeout limits are exceeded. To make the promotion process as smooth as possible, it is recommended that you check the following timeout values before attempting a promotion, and increase them if needed:
- Transaction Service total transaction lifetime timeout (defaults to 120 seconds)
- Transaction Service maximum transaction timeout (defaults to 300 seconds)
- ORB service Request timeout (defaults to 180 seconds)
The WSRR information center recommends increasing the total transaction lifetime timeout to 1200 seconds and the maximum transaction timeout to 3000 seconds. For large promotions, these values may still not be enough and may need to be further increased. More information on preventing timeouts is available in the WSRR and WebSphere Application Server information centers:
- Configuring transaction properties for an application server
- Object Request Broker service settings
- Preventing WSRR timeouts
If promotion is being used via a web service client application, then the WebSphere Application Server client timeout should also be checked.
Set the Java property com.ibm.websphere.webservices.http.SocketTimeout on the client command line to the required timeout value in seconds. Here is an example where this timeout is set to 60 minutes when running the Java application MyTester.Main:
Setting the timeout on a web service client application
java -Dcom.ibm.websphere.webservices.http.SocketTimeout=3600 MyTester.Main
Scheduled tasks (governance and runtime servers)
WSRR scheduled tasks can have an impact on promotion performance if they happen to run at the same time as the promotion. These tasks in particular may have an impact:
- Text search Scheduler
- CleanseDatabase Scheduler
- Subscription Notifier Plug-in Scheduler
- AnalyticsDatabase Scheduler
If you believe a promotion is likely to be resource intensive, then ensure that it does not occur at the same time as these scheduled tasks. To do this, make sure that the WSRR administrator has scheduled the tasks using the cron calendar format rather than an interval format. The cron format specifies the precise time at which a task will run, whereas the interval format configures a task to run at a specified time after the last invocation of the task, or whenever the interval parameter is reset. The cron format therefore makes it much easier to tell when a scheduled task will run.
Disable modifiers and validators (runtime server)
On the runtime server, it is not essential to run validation, because the data has already been validated on the governance server. Therefore you can disable validators on runtime registries altogether, or at least disable them before promotion and re-enable them afterwards.
The use of modifiers is similarly not required, because correlated objects will already have been created on the governance server. So you can also disable these during promotion. Changing both of these settings requires the assistance of a WSRR administrator if the promotion is being triggered by an end user.
Disable activity logging (runtime server)
Disabling activity logging on the runtime server, at least during promotion, will improve the promotion performance. But do not disable activity logging on the governance server.
Additional tips to improve promotion performance (governance and runtime servers)
- Make sure that the WebSphere Application Server messaging engine is running. If WSRR data has been moved across databases, it is possible that the WSRR messaging engine may not start correctly, which can impact performance, particularly when JMS notifications are being used.
- If you are using the IBM Java SDK V6 SR 6 or earlier, there may be a benefit in changing the JVM page size to 64K. To do so, specify the option -Xlp64k in the JVM generic arguments in the WebSphere Administrative Console.
- Disable traces -- the overhead of tracing can be significant so it should always be turned off when not required.
- Changing the garbage collection algorithm to use the generational version (gencon) has no major benefit for promotion, so it is recommended not to change this setting from the default.
WSRR governance and runtime databases
A full treatment of database organisation and tuning is beyond the scope of this article, but a few factors can greatly improve WSRR promotion performance.
Check database fix pack level
It is important to ensure that you have a recent DB2 fix pack, because there have been problems loading large WSDL files with earlier DB2 fix pack levels. To check whether you are using the latest DB2 fix pack, see the Table of DB2 fix packs.
Check that JDBC drivers are current
Ensure the JDBC drivers for the database on the WSRR server are up to date. The drivers can be copied from the java directory under the DB2 product install root on the database server to the configured directory on the WSRR server.
Recommended WSRR database settings
One step that is easy to miss is setting the database options recommended in the WSRR information center. For example, for DB2, ensure that DB2_SKIPINSERTED and DB2_SKIPDELETED are set.
You can check these settings with the db2set command. For DB2 V9.7, for example, the settings for DB2_SKIPINSERTED and DB2_SKIPDELETED should both be OFF:
Output of the db2set command
db2set
DB2_CREATE_DB_ON_PATH=YES
DB2_SKIPINSERTED=OFF
DB2_INLIST_TO_NLJN=YES
DB2_SKIPDELETED=OFF
DB2PROCESSORS=0,1
DB2INSTOWNER=SRPERFC
DB2PORTRANGE=60000:60003
DB2INSTPROF=C:\PROGRAMDATA\IBM\DB2\DB2COPY1
DB2COMM=TCPIP
For DB2 V9.7, it is also important to check that the currently committed configuration parameter is set to ON. Use the following command:
db2 update db cfg for <database> using cur_commit ON
To check the setting of this parameter on Windows:
Verifying cur_commit setting on Windows
db2 get db cfg for WSRR80 | find /I "cur_commit"
 Currently Committed                     (CUR_COMMIT) = ON
On Linux and Unix, use the grep command instead of find:
Verifying cur_commit setting on Linux and Unix
$ db2 get db cfg for WSRRTEST | grep -i cur_commit
 Currently Committed                     (CUR_COMMIT) = ON
Ensure that the DB2PROCESSORS registry variable is set correctly on Windows
On Windows database platforms, check that you have configured the right number of system processors to be available to DB2. After an install of DB2, it is possible that only two cores per system were made available to DB2. So for example, if you have an eight-core system, DB2 will only use 25% of the available processing power. Use the db2set command to set DB2PROCESSORS. For example, on an eight-core system dedicated to DB2 databases, you might decide to allow DB2 processes on all eight cores:
Setting the DB2PROCESSORS registry variable (Windows only)
db2set DB2PROCESSORS=0,1,2,3,4,5,6,7
Restart DB2 for the change to take effect. Ensure that your DB2 license covers the number of processors you want to dedicate to DB2 before increasing the value of DB2PROCESSORS.
Ulimit settings on Linux and Unix
If you are running the database on a Linux or Unix server, check that the default ulimit settings for the database user are set appropriately. Default ulimit settings for the database owner can seriously impair database performance. If the server is dedicated to a single database instance, then set the data, stack, and memory settings as high as possible. For example:
Checking ulimit settings on Linux and Unix
$ ulimit -a
time(seconds)        unlimited
file(blocks)         unlimited
data(kbytes)         unlimited
stack(kbytes)        4194304
memory(kbytes)       unlimited
coredump(blocks)     unlimited
nofiles(descriptors) 2000
$
Use DB2 RUNSTATS
Execute the RUNSTATS command regularly after loading, changing, or deleting moderate amounts of data. Use RUNSTATS when a database is first created just after loading the profile. If you do not, DB2 will generally not use the indexes to access data from the database, which is likely to slow down performance. For more information, see RUNSTATS command in the WSRR information center.
Re-execute RUNSTATS at regular intervals, particularly after a medium to large amount of data is added to or removed from the database. Doing so helps to ensure that data access continues to be optimised. You can invoke RUNSTATS either from the DB2 command line or from an API call. If you invoke RUNSTATS programmatically, do not do so too frequently. For example, if you execute it repeatedly when loading a series of WSDL files, it will slow down performance. The best time to invoke RUNSTATS is after loading or deleting a batch of data. For promotion, it is important that both the governance and runtime databases have RUNSTATS executed regularly, and if the runtime database is new, it should also have RUNSTATS executed on it just after loading the profile.
Different WSRR versions have different recommended RUNSTATS commands. Check the commands in the WSRR information center, particularly after migrating from one version to another:
- Recommended V6.3 RUNSTATS commands
- Recommended V7.0 RUNSTATS commands
- Recommended V7.5 RUNSTATS commands
- Recommended V8.0 RUNSTATS commands
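The exact commands differ by version, so prefer the commands documented for your WSRR version in the links above. As an illustrative sketch only, a typical DB2 RUNSTATS invocation follows this general pattern, where <database>, <schema>, and the table name are placeholders:

```
db2 connect to <database>
db2 "runstats on table <schema>.statement with distribution and detailed indexes all"
db2 terminate
```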
Use DB2 REORG
It is recommended that you run the DB2 REORG command occasionally on tables and indexes to help reduce fragmentation. It does not need to be run as regularly as RUNSTATS, but it is a good idea to run it when a large amount of data has been added to or removed from the database. Otherwise database inserts will use empty space where it is available, which may not be physically close to associated data in the table.
The DB2 REORGCHK command can also help you to determine whether a REORG command is likely to have a benefit. The commands for executing REORG on key WSRR V6.3, V7.0, V7.5, and V8.0 database tables are shown below. Replace <schema> with your actual WSRR schema name:
Reorg commands for WSRR V6.3 DB2 databases
db2 connect to <database>
db2 reorg table <schema>.w_statement
db2 reorg table <schema>.w_uri
db2 reorg table <schema>.w_obj_lit_plain
db2 reorg indexes all for table <schema>.w_statement
db2 reorg indexes all for table <schema>.w_uri
db2 reorg indexes all for table <schema>.w_obj_lit_plain
db2 terminate
Reorg commands for WSRR V7.0 DB2 databases
db2 connect to <database>
db2 reorg table <schema>.w_dbversion
db2 reorg table <schema>.w_artifact_blob
db2 reorg table <schema>.w_statement
db2 reorg table <schema>.w_uri
db2 reorg table <schema>.w_obj_lit_plain
db2 reorg table <schema>.w_mod_lock
db2 reorg table <schema>.w_schema_rev
db2 reorg table <schema>.w_obj_lit_any
db2 reorg table <schema>.w_lit_float
db2 reorg table <schema>.w_lit_double
db2 reorg indexes all for table <schema>.w_dbversion
db2 reorg indexes all for table <schema>.w_artifact_blob
db2 reorg indexes all for table <schema>.w_statement
db2 reorg indexes all for table <schema>.w_uri
db2 reorg indexes all for table <schema>.w_obj_lit_plain
db2 reorg indexes all for table <schema>.w_mod_lock
db2 reorg indexes all for table <schema>.w_schema_rev
db2 reorg indexes all for table <schema>.w_obj_lit_any
db2 reorg indexes all for table <schema>.w_lit_float
db2 reorg indexes all for table <schema>.w_lit_double
db2 terminate
Reorg commands for WSRR V7.5 and V8.0 DB2 databases
db2 connect to <database>
db2 reorg table <schema>.statement
db2 reorg table <schema>.subject
db2 reorg table <schema>.predicate
db2 reorg table <schema>.object
db2 reorg table <schema>.graph
db2 reorg table <schema>.blobdata
db2 reorg indexes all for table <schema>.statement
db2 reorg indexes all for table <schema>.subject
db2 reorg indexes all for table <schema>.predicate
db2 reorg indexes all for table <schema>.object
db2 reorg indexes all for table <schema>.graph
db2 reorg indexes all for table <schema>.blobdata
db2 terminate
For promotion, it is recommended that you run REORG on the governance and runtime databases if large amounts of data have been added to or removed from them since the last REORG, or since the databases were created. For more information, see Determining when to reorganize tables and indexes in the DB2 information center.
Decouple the database, transaction logs, and operating system (governance and runtime databases)
It is recommended that:
- Production WSRR databases are hosted on different disks from the operating system.
- Database transaction logs are hosted on different disks from both the operating system and the database.
Separating databases and transaction logs in this way can significantly improve promotion performance.
The WSRR profile creation mechanism does not permit the locations of the transaction logs and database to be customised before the database scripts are created. However, the database scripts built by the profile management tool can be edited to customise the transaction log and database locations before they are run to create the database.
Alternatively, a DB2 database can have its database and transaction log paths moved after the database has been created. WebSphere Application Server must be restarted, and moving the database to a different drive requires the database to be rebuilt from a backup. To move the database and transaction logs:
- Shut down WebSphere Application Server.
- Shut down the WSRR database.
- Back up the database. For example:
db2 backup database <database> to <backup-directory>.
- Drop the database.
- Restore the database to a different drive. For example:
db2 restore database <database> from <backup-directory> TO <new-database-path>.
- Move the transaction logs to a different drive. For example:
db2 update db cfg for <database> using NEWLOGPATH <new-log-path>.
- Restart the WSRR database.
- Restart WebSphere Application Server.
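The database steps above can be sketched as a single command sequence. This is an outline only; <database> and the path placeholders must be replaced with your actual values, and the WebSphere Application Server stop and restart steps still apply around it:

```
db2 backup database <database> to <backup-directory>
db2 drop database <database>
db2 restore database <database> from <backup-directory> to <new-database-path>
db2 update db cfg for <database> using NEWLOGPATH <new-log-path>
```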
Use RAID disk arrays
In addition to locating the operating system, database, and transaction logs on different disks, using RAID 0 disk arrays can also improve database performance. It is recommended that both the transaction logs and database be hosted on separate RAID disk arrays. Using RAID 0 means that the physical I/O to the drives can be executed concurrently.
To optimise DB2 performance it is important to know the stripe size of the RAID array. The stripe size is the minimum allocation on the RAID array, and is equal to the segment size times the number of disks in the RAID array, where the segment size is the size of data written onto one disk before proceeding to the next disk in the stripe set. For example, if the segment size is 128 KB and there are three disks in the RAID array, the stripe size would be 384 KB. Your system administrator should be able to tell you the stripe size for your RAID array.
After you determine the stripe size, you should then consider the extent sizes of the WSRR table spaces. An extent is a block of storage within a DB2 table space container, and an extent size is the number of pages of data that will be written to a container in the database before writing to the next container.
It is recommended that you set the extent size of the WSRR table spaces to match the stripe size of a RAID array, expressed as a number of table space pages. For example, if a RAID array with a stripe size of 384 KB hosts a table space with a page size of 32768 bytes, the extent size for that table space should be set to 12. The extent size can be set only when the table spaces are created. For existing WSRR databases, reconfiguring the extent size involves creating a new database with the correct extent size and then using either the db2move command or the WSRR migration feature to populate the new database.
For WSRR V7.5 and 8.0, there are four table spaces whose extent sizes should be considered: ATHSTMTTS, ATHDATATS, WSRRTS, and WSRRTEMP. By default, the database creation scripts create each of these table spaces as shown below:
Figure 5. Default WSRR page size and extent size settings
You can see that the same extent size is used for all the table spaces, but the page size varies. To optimise DB2 extents to match a RAID array, set EXTENTSIZE as shown below:
Formula for EXTENTSIZE
EXTENTSIZE = stripe-size / page-size
For new WSRR databases, it is easiest to set the extent sizes of these table spaces by editing the script CREATEWSRRTABLESPACE.SQL before it is run. Here is an extract of a script that has been modified to change the extent size of the ATHSTMTTS table space to match our example RAID array:
Changing EXTENTSIZE in CREATEWSRRTABLESPACE.SQL
CREATE LARGE TABLESPACE ATHSTMTTS
    PAGESIZE 4K
    MANAGED BY AUTOMATIC STORAGE
    AUTORESIZE YES
    INITIALSIZE 100M
    INCREASESIZE 10 PERCENT
    MAXSIZE NONE
    EXTENTSIZE 96
    BUFFERPOOL ATHSTMTBP
    FILE SYSTEM CACHING;
The settings above are illustrative only, and different RAID arrays are likely to result in different optimal extent size settings.
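As a quick sanity check of the EXTENTSIZE formula, using the example values from earlier (a 128 KB segment size across three disks, giving a 384 KB stripe):

```shell
# Stripe size = segment size * number of disks (example: 128 KB * 3 = 384 KB)
stripe_kb=$((128 * 3))

# EXTENTSIZE = stripe size / page size, both in the same units.
# For a 4 KB page table space such as ATHSTMTTS:
echo $((stripe_kb / 4))    # prints 96, matching EXTENTSIZE 96 above

# For a 32 KB page table space:
echo $((stripe_kb / 32))   # prints 12
```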
When DB2 is performing sequential access to data, performance can be improved by fetching multiple consecutive pages into memory with a single I/O operation. The PREFETCHSIZE parameter specifies how much data is read in these conditions. It is recommended that you optimise PREFETCHSIZE for each table space when using RAID devices. Set this parameter according to the following formula:
Suggested formula for PREFETCHSIZE
PREFETCHSIZE = (stripe-size * number-of-RAID-devices * number-of-DB2-containers) / page-size
You can see that the value of PREFETCHSIZE is a multiple of the recommended setting for EXTENTSIZE. PREFETCHSIZE can be changed easily with the following DB2 command:
ALTER TABLESPACE <tablespace-name> PREFETCHSIZE n
where n is the new number of pages for the PREFETCHSIZE. It is not necessary to restart WSRR or the database when changing this parameter.
Database tuning for WSRR
A general treatment of database tuning is beyond the scope of this article. This section describes some basic database adjustments that improve WSRR promotion performance. Before making any changes to your database environment, consult your local DBA.
Establish the maximum memory for each WSRR database
The first thing to establish is how much of the database server is dedicated to WSRR and how many WSRR databases are being served from the system. For example, you might decide to dedicate a complete database server to one WSRR governance database and one runtime database. Ideally, the server would be dedicated to WSRR, but this may not be possible and the server may host other databases and applications. Either way, it is important to decide how much system memory can be dedicated to each WSRR database.
Consider a case where a server is dedicated only to the governance and runtime databases. In this case, you might decide to allow the governance registry to consume up to 60% of system memory and the runtime database up to 30% of system memory.
Establish the number of statements within a single load transaction
It is important to establish the complexity of your data in terms of an upper limit on the number of SQL statements within a single transaction. To do so, first select a WSDL that is one of the more complex ones you are likely to populate the registry with. Then estimate the number of statements within a transaction by loading the service onto a test system and using one of the following techniques:
- Use a WebSphere Application Server trace and use the trace string com.ibm.athene.query*=all.
- Use db2diag.
- Analyse DB2 snapshots just before and after loading your typical WSRR artifact.
With trace and db2diag it may not be easy to work out the total number of statements, so you may find DB2 snapshots easier. To make the calculation with snapshots, create a snapshot before loading the WSDL, load the WSDL, and then create another snapshot afterwards, as shown below:
Using DB2 snapshots
db2 get snapshot for database on WSRR75L > db2_wsrr75l_snapshot_pre_load.txt

. . . Load WSDL . . .

db2 get snapshot for database on WSRR75L > db2_wsrr75l_snapshot_post_load.txt
You can then scan both files for Rows inserted and subtract the corresponding values from each other, for example:
Determining number of rows inserted
find "Rows inserted" db2_wsrr75l_snapshot_pre.txt

---------- DB2_WSRR75L_SNAPSHOT_PRE.TXT
Rows inserted = 1

find "Rows inserted" db2_wsrr75l_snapshot_post.txt

---------- DB2_WSRR75L_SNAPSHOT_POST.TXT
Rows inserted = 395700
In this case, 395699 rows were inserted. This number is purely an example and you should check your own data to see what the value is for your environment. Ensure there is minimal other database activity when you do this calculation.
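If you run this comparison regularly, the extraction and subtraction can be scripted. The helper below is a sketch only; the rows_inserted function name is an invention for this example, and the snapshot filenames in the usage comment are those from the example above:

```shell
# Print the "Rows inserted" count from a DB2 snapshot file.
rows_inserted() {
  awk -F'= *' '/Rows inserted/ { print $2 }' "$1"
}

# Usage, with the snapshot files created earlier:
#   pre=$(rows_inserted db2_wsrr75l_snapshot_pre_load.txt)
#   post=$(rows_inserted db2_wsrr75l_snapshot_post_load.txt)
#   echo "statements in load transaction (approx): $((post - pre))"
```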
Run the AUTOCONFIGURE utility on the database
Having established the maximum memory for each WSRR database and estimated a typical number of statements within a write transaction, the next step in tuning database performance is to use the DB2 AUTOCONFIGURE command, giving it the options you have chosen for maximum system memory and statements within a transaction. Run it first in report mode, in which it tells you the settings it recommends without making any changes. If you agree with the recommendations, rerun it and apply the changes.
Consider the earlier example where a maximum of 60% of system memory was dedicated to the governance registry and 30% to the runtime registry. To run AUTOCONFIGURE in report only mode you should first connect to the database and then issue the command as follows:
Running the DB2 AUTOCONFIGURE command
db2 autoconfigure using mem_percent 60 workload_type simple num_stmts 500000 admin_priority performance is_populated no apply none > db2_changes.txt
The APPLY NONE clause tells AUTOCONFIGURE not to make any actual changes. The recommendations from the utility are saved to a file, so you can inspect it and see what DB2 will change when the configuration is applied. In this example, the database is assumed to be empty. If your database is populated, then use is_populated yes. Once you have reviewed the changes, save a copy of the current database configuration, for example:
Save a copy of the DB2 database config
db2 get db cfg for WSRR80 > WSRR80_config.txt
You should then take a DB2 backup of the database before rerunning in apply mode, in case you later decide to revert to the original settings. You can now rerun AUTOCONFIGURE to apply the changes with a command such as:
Running the DB2 AUTOCONFIGURE command
db2 autoconfigure using mem_percent 60 workload_type simple num_stmts 500000 admin_priority performance is_populated no apply db and dbm
After executing this command, restart the database and check the INSTANCE_MEMORY variable to see how AUTOCONFIGURE has changed the maximum memory available. For example:
Verifying new instance memory
db2 get db manager config | grep INSTANCE_MEMORY
By comparing the setting of this variable to the system memory, you can verify that the correct percentage has been applied.
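That comparison is simple integer arithmetic, as the sketch below shows. Both input figures are hard-coded, illustrative values; in practice, instance_pages would come from the INSTANCE_MEMORY value reported by DB2 (which is expressed in 4 KB pages), and total_kb from your operating system (for example, MemTotal in /proc/meminfo on Linux):

```shell
# Illustrative values only: substitute the figures from your own system.
instance_pages=1843200    # INSTANCE_MEMORY, in 4 KB pages
total_kb=12288000         # total system memory in KB (12 GB here)
percent=$(( instance_pages * 4 * 100 / total_kb ))
echo "INSTANCE_MEMORY is ${percent}% of system memory"
```

With these illustrative figures, INSTANCE_MEMORY works out to 60% of system memory, matching the mem_percent value given to AUTOCONFIGURE.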
After using DB2 AUTOCONFIGURE, you may want to make further tuning adjustments. Make them at this point, so that your customised choices do not get reset by AUTOCONFIGURE. Consider the following tuning parameters:
The setting of DBHEAP, which is the maximum amount of DB2 database heap memory, may be restricted on new databases built from the scripts generated by WSRR. It is recommended that you set it to AUTOMATIC:
Setting DBHEAP to AUTOMATIC
db2 update db cfg for WSRR80 using DBHEAP AUTOMATIC
Increasing LOGBUFSZ can dramatically reduce the amount of data that is written out to transaction logs while complex transactions, such as those involved in promotion, are in progress. Increase it if you anticipate a complex promotion operation with a resulting large number of database statements executed within a single transaction. For example:
db2 update db cfg for WSRR80 using LOGBUFSZ 32000
LOGFILSIZ determines the size of primary and secondary transaction log files in pages. By default it may be quite small (such as 8 MB per file), which can result in large numbers of transaction logs being written to concurrently, creating an I/O bottleneck. You can increase it as shown below:
db2 update db cfg for WSRR80 using LOGFILSIZ 32000
With a 4 KB page size, each transaction log would be 125 MB.
You should also look at the number of configured primary and secondary log files, which are specified with the DB2 LOGPRIMARY and LOGSECOND parameters. Primary log files are pre-allocated on the file system and secondary log files are created when they are needed. With larger transaction log files, it might therefore be prudent to not allocate too many primary log files. On the other hand there, is a performance penalty when secondary log files are newly allocated, so you do not want to reduce the number of primary logs too much. The ideal is to have the primary log space sufficient for day-to-day running, with the secondary log space large enough to support usage peaks when required. You can change the setting for the number of primary and secondary logs as shown below:
Setting number of primary and secondary log files
db2 update db cfg for WSRR80 using LOGPRIMARY 20
db2 update db cfg for WSRR80 using LOGSECOND 230
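It is worth sanity-checking the total log space that such settings imply. With the illustrative values above (LOGFILSIZ of 32000 pages, a 4 KB page size, 20 primary and 230 secondary logs), the arithmetic is:

```shell
# Illustrative log sizing arithmetic for the example settings above.
logfilsiz=32000                            # pages per log file
per_file_mb=$(( logfilsiz * 4 / 1024 ))    # 4 KB pages -> 125 MB per file
primary=20
secondary=230
echo "per log file:        ${per_file_mb} MB"
echo "primary log space:   $(( primary * per_file_mb )) MB"
echo "secondary log space: $(( secondary * per_file_mb )) MB"
```

So the primary logs would pre-allocate 2500 MB on disk, with up to a further 28750 MB available in secondary logs for usage peaks.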
One other area that you may want to consider is disabling automatic DB2 maintenance tasks, which can help prevent a performance impact at inconvenient times, as shown below. If you disable automatic maintenance, make sure that you have your own processes in place to run regular RUNSTATS and occasional REORG commands.
Disabling automatic maintenance
db2 update db cfg for WSRR80 using AUTO_MAINT OFF
Disable file system caching
You can get a modest performance gain by disabling file system caching on DB2 WSRR tablespaces, but only when DB2 bufferpools are allowed to grow automatically. This will not be a problem unless bufferpool tuning has been changed, because the profile management tool and the database scripts enable automatic bufferpool sizing by default. Here are the commands to disable file system caching on WSRR V7.5 and V8.0 tablespaces:
Commands to disable file system caching
db2 connect to <database>
db2 alter tablespace ATHSTMTTS no file system caching
db2 alter tablespace ATHDATATS no file system caching
db2 alter tablespace ATHTEMPTS no file system caching
db2 alter tablespace WSRRTS no file system caching
db2 alter tablespace WSRRTEMP no file system caching
db2 terminate
The network between WSRR and its database servers
Promotion of a medium-sized service could easily result in a total of 100 MB of data being transferred between the governance and runtime registries and their associated databases. Although by today's standards this isn't an unusually large amount of data, bottlenecks in network performance can affect the promotion time. Bandwidth, latency, and congestion are all significant factors in network performance, and all three should be investigated, particularly when the state of the network is not well known. Ideally, the database should be connected to the WSRR server by a high-speed, low-latency network. But that is not always realistic, and the database may need to be located remotely, or hosted on a limited-bandwidth, shared network.
Bandwidth, latency, and congestion
Bandwidth is the network capacity -- the rate at which data can be pushed across it. It is usually measured in millions of bits per second (Mbs), and can be thought of as the width of the network pipeline.
Latency is the time it takes for one piece of data to traverse the network. It is usually measured in milliseconds (ms), and can be thought of as the length of the network pipeline.
The difference between bandwidth and latency is illustrated in Figure 6:
Figure 6. Bandwidth, latency, and the network pipeline
Congestion is the effect of heavy network usage, and can cause delays in propagating data, loss of packets resulting in packet retransmission, and inability to make connections. Packet loss is usually measured as a percentage. TCP can normally handle losses up to 0.1% with little direct impact, but when losses exceed 0.1% the effect can be more severe.
For healthy WSRR promotion performance, the network should have high bandwidth, low latency, and minimal packet loss.
Measuring network bandwidth
It is a good idea to measure the bandwidth achievable over the network connection between WSRR and the database. Although the connection may offer a theoretical maximum speed, such as 1 Gbs or 10 Gbs, the actual bandwidth may be significantly lower due to congestion, network losses, and hardware problems.
One technique to measure bandwidth is to copy a large file across the network and then calculate the bandwidth from the file size and transfer time. However, this technique is likely to significantly underestimate the achievable throughput because of disk I/O overhead at both ends, protocol overheads, and network congestion resulting in packet loss and retransmission, none of which is visible at the application level during a file copy. Disk I/O overheads can be mitigated by using FTP to read from and write to non-file-system devices, such as /dev/zero and /dev/null.
A better way to measure bandwidth is to use a tool like Test TCP (TTCP), which measures the transmission of data between two systems, with one configured as a receiver and one as a transmitter. The data copy is performed memory-to-memory between the systems, and therefore avoids the impact of disk I/O. TTCP was written by Terry Slattery and Mike Muuss in the early days of TCP/IP, and the original source code is now freely available in the public domain. To use TTCP, start it on the target with the -r and -s options:
Running TTCP on the target
ttcp -r -s -f m
The -f m option means that throughput measurements are displayed in Mbs. Next, run TTCP on the transmitting host with the -t and -s options:
Running TTCP on the sender
ttcp -t -s -n 50000 fred.bloggs.com
Here the command is used with -n 50000 to indicate that 50000 packets are to be sent to the receiver. When the data is received, the receiver displays a summary like this:
TTCP receiver output
ttcp -r -s -f m
ttcp-r: buflen=8192, nbuf=2048, align=16384/0, port=5001  tcp
ttcp-r: socket
ttcp-r: accept from ww.xx.yy.zz
ttcp-r: 409600000 bytes in 0.50 real seconds = 6252.60 Mbit/sec +++
ttcp-r: 50001 I/O calls, msec/call = 0.01, calls/sec = 100043.62
ttcp-r: 0.0user 0.3sys 0:00real 77%
So in this test, which was executed over a 10 Gbs link, you can see that 409 MB of data (50000 * 8192-byte packets) was transferred between the two systems in 0.50 seconds, and that the effective bandwidth was 6252 Mbs. If your effective bandwidth is low or significantly less than the total network bandwidth, consult your network administrator.
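The reported bandwidth can be cross-checked from the raw numbers in the output above. The sketch below assumes TTCP reports binary megabits (1048576 bits); the elapsed time is scaled by 100 to keep the arithmetic in integers:

```shell
# Cross-check of the TTCP bandwidth figure, using the values reported above.
bytes=409600000    # 50000 buffers * 8192 bytes
time_x100=50       # 0.50 seconds, scaled by 100 for integer arithmetic
mbit=$(( bytes * 8 * 100 / time_x100 / 1048576 ))
echo "effective bandwidth: approximately ${mbit} Mbs"
```

This gives roughly 6250 Mbs; the slightly higher figure TTCP itself reports comes from the real elapsed time being fractionally under 0.50 seconds.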
Measuring network latency
Most WSRR instances and their associated database servers are closely networked together with a resultant low latency, usually under 1 ms. It is recommended that the systems be near each other, but proximity is not always possible, and some WSRR servers are in a different location than their database servers. In this situation, network latency is even more important. You can measure network latency between two systems using the ping command, which measures the time taken to send a small packet of data across the network and receive a reply. Ping uses the ICMP protocol and doesn't incur the overhead of the TCP protocol. An example:
Using ping to measure network latency
ping news.bbc.co.uk

Pinging newswww.bbc.net.uk [126.96.36.199] with 32 bytes of data:
Reply from 188.8.131.52: bytes=32 time=36ms TTL=56
Reply from 184.108.40.206: bytes=32 time=19ms TTL=56
Reply from 220.127.116.11: bytes=32 time=20ms TTL=56
Reply from 18.104.22.168: bytes=32 time=21ms TTL=56

Ping statistics for 22.214.171.124:
    Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milliseconds:
    Minimum = 19ms, Maximum = 36ms, Average = 24ms
In this case you can see that the average round-trip time was 24 ms, indicating a network latency of 12 ms. This example was measured across a domestic broadband connection. Latencies between machines on the same subnet should be under 1 ms. On some systems, ping only resolves round-trip times to the nearest millisecond, preventing accurate measurement of sub-1 ms round-trip times. In this situation, use a high-resolution open-source ping utility. If you consistently see latencies over 1 ms on a local network, consult your network administrator.
The ping utility also displays the packet loss for the ICMP packets sent across the network; you can see the packet loss statistics at the end of the example above (Lost = 0). However, this measurement is based only on the packets transferred to and from the ping utility, and does not account for any other packets traversing the network. On some systems, ping also supports an option to write ICMP packets onto the network as fast as possible -- a so-called flood ping -- which again gives you the loss statistics at the end. Use it with caution because it can impact general network performance: run it only for a few seconds and only under test conditions.
You can also use the netstat utility to see how many packets have been dropped on a network. For example, here is the output from netstat -D on AIX:
Using netstat to report packet drops (AIX)
# netstat -D
Source                         Ipkts     Opkts     Idrops    Odrops
-------------------------------------------------------------------------------
ent_dev0                       26820670  16079610  0         0
---------------------------------------------------------------
Devices Total                  26820670  16079610  0         0
-------------------------------------------------------------------------------
ent_dd0                        26820670  16079610  0         0
---------------------------------------------------------------
Drivers Total                  26820670  16079610  0         0
-------------------------------------------------------------------------------
ent_dmx0                       26820664  N/A       6         N/A
---------------------------------------------------------------
Demuxer Total                  26820664  N/A       6         N/A
-------------------------------------------------------------------------------
IP                             50335154  44013816  1110616   147099
IPv6                           1473      1473      0         0
TCP                            43110957  40986657  6235      0
UDP                            6105902   2045877   5022268   0
---------------------------------------------------------------
Protocols Total                99552013  87046350  6139119   147099
-------------------------------------------------------------------------------
en_if0                         26820664  16079528  0         0
lo_if0                         27970975  27975366  4652      0
---------------------------------------------------------------
Net IF Total                   54791639  44054894  4652      0
-------------------------------------------------------------------------------
NFS/RPC Client                 23        N/A       0         N/A
NFS/RPC Server                 0         N/A       0         N/A
NFS Client                     14949     N/A       8         N/A
NFS Server                     0         N/A       0         N/A
---------------------------------------------------------------
NFS/RPC Total                  N/A       14977     8         0
-------------------------------------------------------------------------------
(Note: N/A -> Not Applicable)
#
On Windows, the netstat -s command can be used instead.
On AIX, the netstat -Zs -p tcp command can be used to reset the protocol statistics before executing promotion activity.
If packet drops are consistently in excess of 0.1%, discuss the issue with your network administrator.
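As a rough guide, the drop rate can be computed from the input-packet and input-drop counters. The sketch below uses the TCP line from the AIX netstat output above, with the result scaled so that it is expressed in units of 0.0001%:

```shell
# Illustrative drop-rate calculation using the TCP counters shown above.
ipkts=43110957     # TCP input packets, from netstat -D
idrops=6235        # TCP input drops
drop_x10000=$(( idrops * 1000000 / ipkts ))   # percent, scaled by 10000
echo "TCP input drop rate: ${drop_x10000} x 0.0001%"
```

Here the rate works out to about 0.014%, comfortably below the 0.1% threshold mentioned earlier.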
Network bandwidth and latency
A high bandwidth but high latency network can result in many data packets being in flight over the network at the same time. In such situations, it is important for the operating system to buffer the data adequately on both ends of the connection, in order to maximise the number of packets in the pipeline. You may need to change the size of the TCP Send and Receive buffers. For more information, see your operating system documentation. For information on this topic on AIX, see TCP and UDP performance tuning in the AIX information center.
This article provided guidelines to help you maximise the speed and efficiency of the WSRR promotion process, and improve the overall performance of WSRR. It showed you how to check and improve WSRR, DB2 database, and network performance as needed.
The author would like to thank the following individuals for contributing to and helping to review this article: Tim Baldwin, Sukesh Chulliyote, John Colgrave, Oliver Dineen, Sarah Eggleston, Steve Groeger, Ian Heritage, Brian Hulse, Chris Jenkins, Kevin Marsh, Anna Pendrich, and Mark S. Taylor.
- WebSphere Service Registry and Repository resources
- WebSphere Service Registry and Repository information center
A single Web portal to all WebSphere Service Registry and Repository documentation, with conceptual, task, and reference information to help you install, configure, and use the product.
- WebSphere Service Registry and Repository developer resources page
Technical resources to help you use WebSphere Service Registry and Repository.
- WebSphere Service Registry and Repository product page
Product descriptions, product news, training information, support information, and more.
- WebSphere Service Registry and Repository requirements
Hardware and software requirements.
- Getting started with WebSphere Service Registry and Repository
This developerWorks article shows you how to populate WebSphere Service Registry and Repository with existing Web services information.
- WebSphere Service Registry and Repository topologies explained
This developerWorks article shows you how to configure WebSphere Service Registry and Repository using various topologies.
- YouTube channel: WebSphere Service Registry and Repository demos
These short video demos show you how to complete several key service governance tasks using WebSphere Service Registry and Repository.
- WebSphere Service Registry and Repository Information
This wiki provides an alternative portal for quick access to a wide variety of WebSphere Service Registry and Repository resources, and also makes it easy for you to give feedback on the product.
- IBM Redbook: WebSphere Service Registry and Repository Handbook
This IBM Redbook discusses the architecture and functions of Service Registry, along with sample integration scenarios that you can use to implement Service Registry in an SOA.
- WebSphere Service Registry and Repository support
A searchable database of support problems and their solutions, plus downloads, fixes, and problem tracking.
- Best practices in DB2 database storage
This developerWorks article describes best practices for DB2 database storage.
- The story of the Test TCP (TTCP) utility
A description of the TTCP utility written by Mike Muuss.
- WebSphere resources
- developerWorks WebSphere developer resources
Technical information and resources for developers who use WebSphere products. developerWorks WebSphere provides product downloads, how-to information, support resources, and a free technical library of more than 2000 technical articles, tutorials, best practices, IBM Redbooks, and online product manuals.
- developerWorks WebSphere application integration developer resources
How-to articles, downloads, tutorials, education, product info, and other resources to help you build WebSphere application integration and business integration solutions.
- Most popular WebSphere trial downloads
No-charge trial downloads for key WebSphere products.
- WebSphere forums
Product-specific forums where you can get answers to your technical questions and share your expertise with other WebSphere users.
- WebSphere on-demand demos
Download and watch these self-running demos, and learn how WebSphere products and technologies can help your company respond to the rapidly changing and increasingly complex business environment.
- WebSphere-related articles on developerWorks
Over 3000 edited and categorized articles on WebSphere and related technologies by top practitioners and consultants inside and outside IBM. Search for what you need.
- developerWorks WebSphere weekly newsletter
The developerWorks newsletter gives you the latest articles and information only on those topics that interest you. In addition to WebSphere, you can select from Java, Linux, Open source, Rational, SOA, Web services, and other topics. Subscribe now and design your custom mailing.
- WebSphere-related books from IBM Press
Convenient online ordering through Barnes & Noble.
- WebSphere-related events
Conferences, trade shows, Webcasts, and other events around the world of interest to WebSphere developers.
- developerWorks resources
- Trial downloads for IBM software products
No-charge trial downloads for selected IBM® DB2®, Lotus®, Rational®, Tivoli®, and WebSphere® products.
- developerWorks business process management developer resources
BPM how-to articles, downloads, tutorials, education, product info, and other resources to help you model, assemble, deploy, and manage business processes.
- developerWorks blogs
Join a conversation with developerWorks users and authors, and IBM editors and developers.
- developerWorks tech briefings
Free technical sessions by IBM experts to accelerate your learning curve and help you succeed in your most challenging software projects. Sessions range from one-hour virtual briefings to half-day and full-day live sessions in cities worldwide.
- developerWorks podcasts
Listen to interesting and offbeat interviews and discussions with software innovators.
- developerWorks on Twitter
Check out recent Twitter messages and URLs.
- IBM Education Assistant
A collection of multimedia educational modules that will help you better understand IBM software products and use them more effectively to meet your business requirements.