IBM WebSphere Portal Web Content Manager and DB2 Tuning Guide

Performance tuning the environment

Tuning a WebSphere Portal environment involves tuning and configuring the various systems and components of the environment. This section discusses some general concepts and details the specifics of the configuration used in my measurement environment. The tuning and configuration for the WebSphere Portal Web Content Management (WCM) AIX® Power4 measurement environment is based on the WebSphere Portal AIX Power4 environment detailed in the IBM WebSphere Portal Version 6.0 Tuning Guide. All differences in the environment used to measure WCM are explicitly mentioned in this chapter. The overall tuning and configuration approach to any WebSphere Portal environment includes:

  • Configuring the application server and the resources defined for that application server
  • Determining the cloning strategy for expanding or extending the environment
  • Tuning the database(s) and database server
  • Tuning the directory server and its database
  • Tuning the web server
  • Tuning the operating system and network
  • Tuning the WebSphere Portal services

When tuning your individual systems, it is important to begin with a baseline, monitor the performance metrics to determine if any parameters should be changed and, when a change is made, monitor the performance metrics to determine the effectiveness of the change.

Understanding the environment

WebSphere Portal V6.0 uses additional servers to provide its functionality. In my measurement environment, there is a web server, database server, and directory server in addition to the portal server itself. For maximum performance, these servers should reside on separate systems from the WebSphere Portal system. The primary benefit of having such a configuration is to avoid resource contention from multiple servers residing on a single system. Additional servers contending with the WebSphere Portal server for system resources impacts the system's achievable throughput. The configuration used for the measurements in this report had the IBM HTTP Server on the same system as WebSphere Portal.

Application Server tuning

There are many aspects to configuring and tuning an application server in WebSphere Application Server. I found that those presented here, and in the IBM WebSphere Portal Version 6.0 Tuning Guide, are critical to a correctly functioning and optimally performing WebSphere Portal in our laboratory environment. For more details on tuning a WebSphere Application Server, see the Tuning Section of the information center.

Table 1 shows settings that, based on my experience with the workloads described in this document, differed from those in the IBM WebSphere Portal Version 6.0 Tuning Guide for AIX on the Power4 platform:

Table 1. Application server parameters
  • Java™ Virtual Machine (JVM) heap size: 1792
    The JVM heap size is directly related to the amount of physical memory on the system. Never set the JVM heap size larger than the physical memory on the system. See "JVM max heap size limits" below.
  • JVM heap large page: -Xlp
    Used with the IBM JVM to allocate the heap using large pages. See "JVM heap large page" below.
  • kCluster and pCluster: -Xk30000 -Xp24000k,2400k
    Pre-allocates JVM heap space for class files, which are pinned in memory once loaded. See "kCluster and pCluster" below.

JVM max heap size limits

When setting the heap size for an application server, keep the following in mind: the system must have enough physical memory to hold all of its processes, plus the operating system. If more memory is allocated than is physically present, paging occurs, which can result in very poor performance.
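The arithmetic behind this rule can be sketched in a few lines. This is an illustrative check only, not an IBM tool; the function name and the 512 MB operating-system allowance are my own assumptions.

```python
def heap_fits(physical_mb, jvm_max_heaps_mb, os_reserve_mb=512):
    """Rough check that all JVM heaps plus an OS allowance fit in physical memory.

    physical_mb      -- physical RAM in MB (e.g. 4096 for a 4 GB system)
    jvm_max_heaps_mb -- maximum heap sizes, in MB, of every JVM on the system
    os_reserve_mb    -- assumed allowance for the OS and other processes
    """
    return sum(jvm_max_heaps_mb) + os_reserve_mb <= physical_mb

# A single 1792 MB portal JVM fits comfortably on a 4 GB system:
print(heap_fits(4096, [1792]))        # → True
# Adding a second large JVM on the same system risks paging:
print(heap_fits(4096, [1792, 2048]))  # → False
```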

While I set the minimum and maximum heap sizes to the same value, this may not be the best choice for production systems running IBM JDKs. In my measurement runs, the system is under load for a relatively short time (around 3 hours) and runs portlets that do not have large memory requirements. When using portlets with larger memory requirements, or for continuous operation, it may be possible to reduce heap fragmentation by setting the initial heap size to 320 megabytes.

After doing any tuning of heap sizes, monitor the system to make sure that paging is not occurring. As mentioned above, paging can cause poor performance.

How to set the parameter:

In the WebSphere Administrative Console, choose Servers -> Application Servers -> WebSphere Portal -> Server Infrastructure: Java and Process Management -> Process Definition -> Java Virtual Machine
You can change the heap size in these two places:
- Initial Heap Size
- Maximum Heap Size

JVM heap large page

This setting can be used with the IBM JVM to allocate the heap using large pages. The AIX operating system must be configured to support large pages. Using large pages can reduce the CPU overhead of tracking the heap; with this tuning, I saw a 10% throughput improvement in my measurements.

How to set the parameter:

  1. In the WebSphere Administrative Console, select Servers -> Application Servers -> WebSphere Portal -> Server Infrastructure: Java and Process Management -> Process Definition -> Java Virtual Machine -> Generic JVM Argument
    Add: -Xlp
  2. In the WebSphere Administrative Console, choose Servers -> Application Servers -> WebSphere Portal -> Server Infrastructure: Java and Process Management -> Process Definition -> Custom Properties -> New -> EXTSHM=OFF
    (Note: When EXTSHM is on, it prevents the use of large pages.)
  3. Stop Portal server
  4. Configure AIX to support large pages. I used the following steps to allocate 1856 MB of RAM as large pages (16MB). I chose this amount based on having 4GB of physical memory in these systems. These values may need to be adjusted on systems with different amounts of physical memory.
    vmo -r -o lgpg_regions=116 -o lgpg_size=16777216
    bosboot -a
    reboot -q
    vmo -p -o v_pinshm=1
    chuser capabilities=CAP_BYPASS_RAC_VMM,CAP_PROPAGATE $USER
  5. Restart the Portal Server. To verify that large pages are in use, run the AIX command vmstat -l 1 5 and check the "alp" (active large pages) column; it should be non-zero if large pages are being used.
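The lgpg_regions value used above is just the number of 16 MB regions needed to cover the desired large-page pool. A sketch of that arithmetic (the function is illustrative, not an AIX interface):

```python
LARGE_PAGE_MB = 16  # AIX large page size used here: 16 MB (16777216 bytes)

def lgpg_regions(pool_mb, page_mb=LARGE_PAGE_MB):
    """Number of large-page regions needed to provide at least pool_mb of RAM."""
    # Integer ceiling division, so a partial region rounds up.
    return -(-pool_mb // page_mb)

# 1856 MB of large pages, as configured on the 4 GB systems in this report:
print(lgpg_regions(1856))  # → 116
```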

kCluster and pCluster

Objects that are on the JVM heap are usually mobile; that is, the Garbage Collector (GC) can move them around if it decides to re-sequence the heap. Some objects, however, cannot be moved either permanently or temporarily. Such immovable objects are known as pinned objects.

In release 1.3.1 Service Refresh 7 and above, the GC allocates a kCluster as the first object at the bottom of the heap. A kCluster is an area of storage that is used exclusively for class blocks. It is large enough to hold 1280 entries. Each class block is 256 bytes long.

The GC then allocates a pCluster as the second object on the heap. A pCluster is an area of storage that is used to allocate any pinned objects. It is 16 KB long.

When the kCluster is full, the GC allocates class blocks in the pCluster. When the pCluster is full, the GC allocates a new pCluster of 2 KB.

Because this new pCluster can be allocated anywhere in the heap and must be pinned, it can cause fragmentation problems. Pinned objects deny the GC the ability to combine free space during heap compaction, which can leave a heap with plenty of total free space but only in small fragments, so that an allocation well below the total free heap space still fails. To solve this problem, release 1.3.1 at SR7 and later provides command-line options to specify the kCluster size (-Xk) and the pCluster initial and overflow sizes (-Xp<initial>,<overflow>). Use these options to set the initial sizes of the clusters large enough to avoid fragmentation issues.
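To see how much heap -Xk30000 actually reserves, multiply the entry count by the 256-byte class-block size given above. A small sketch of that arithmetic (the function name is mine):

```python
CLASS_BLOCK_BYTES = 256  # each class block is 256 bytes (see text above)

def kcluster_bytes(entries):
    """Heap reserved by -Xk<entries> for class blocks, in bytes."""
    return entries * CLASS_BLOCK_BYTES

# The default kCluster holds 1280 entries; -Xk30000 enlarges it considerably:
print(kcluster_bytes(1280))   # → 327680 (320 KB)
print(kcluster_bytes(30000))  # → 7680000 (about 7.3 MB)
```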

How to set the parameter:

In the WebSphere Administrative Console, select Servers -> Application Servers -> WebSphere Portal -> Server Infrastructure: Java and Process Management -> Process Definition -> Java Virtual Machine -> Generic JVM Argument
Enter -Xk30000 -Xp24000k,2400k

Datasource tuning

As described in the WebSphere Portal information center, multiple databases are used with WebSphere Portal V6.0. In my example, I used seven separate databases, each with its own datasource. These are:

Table 2. Datasource names
Database          Database name    Datasource name
Release           release          reldbDS
Community         community        commdbDS
Customization     custom           cusdbDS
Feedback          fdbkdb           fdbkdbDS
Likeminds         lmdb             lmdbDS
JCR               jcrdb            jcrdbDS
Member Manager    wmmdb            wmmdbDS

For the prepared statement cache size, the path is Resources -> JDBC Providers -> provider name -> Data Sources -> datasource name -> WebSphere Application Server data source properties. The provider name and datasource name are based on the names selected for that database during the database transfer step. Look for the parameter statement cache size.

I set the prepared statement cache size to 1 statement for all datasources to reduce demands on native memory and thus avoid crashes.

DB2 registry variables

The following registry variables should be set (by using the db2set command) at the instance level:

Table 3. DB2 registry variables explained
DB2_RR_TO_RS
    This parameter is deprecated since DB2 v8.2. If you can still set it without an error on a version higher than 8.2, it is fine to leave it set; if you get an error, omit it. The next two variables replace it.
    When DB2_RR_TO_RS is on, RR (Repeatable Read) behavior cannot be guaranteed for scans on user tables because next-key locking is not done during index key insertion and deletion. Catalog tables are not affected by this option. The other change in behavior is that with DB2_RR_TO_RS on, scans skip rows that have been deleted but not committed, even though the row may have qualified for the scan.

DB2_EVALUNCOMMITTED
    When enabled, this variable allows table or index access scans, where possible, to defer or avoid row locking until a data record is known to satisfy predicate evaluation. It applies only to statements using the Cursor Stability or Read Stability isolation level. For index scans, the index must be a type-2 index. Deleted rows are skipped unconditionally on table scan access, but deleted keys are not skipped for type-2 index scans unless DB2_SKIPDELETED is also set.

DB2_SKIPDELETED
    When enabled, this variable allows statements using the Cursor Stability or Read Stability isolation level to unconditionally skip deleted keys during index access and deleted rows during table access. With DB2_EVALUNCOMMITTED enabled, deleted rows are automatically skipped, but uncommitted pseudo-deleted keys in type-2 indexes are not skipped unless DB2_SKIPDELETED is also enabled.

DB2_INLIST_TO_NLJN
    Sometimes the optimizer does not have accurate information to determine the best join method for the rewritten version of a query. This can occur when the IN list contains parameter markers or host variables, which prevent the optimizer from using catalog statistics to determine selectivity. This registry variable causes the optimizer to favor nested loop joins for the list of values, using the table that contributes the IN list as the inner table of the join.

DB2_MINIMIZE_LISTPREFETCH
    Necessary to avoid an inefficient access plan for a common query on one of the tables in the JCR database.

As the instance user, enter the following commands to set the DB2 registry variables:

db2set DB2_RR_TO_RS=YES
db2set DB2_EVALUNCOMMITTED=YES
db2set DB2_SKIPDELETED=ON
db2set DB2_INLIST_TO_NLJN=YES
db2set DB2_MINIMIZE_LISTPREFETCH=ON

Optional variables

If the system where DB2 is running is CPU bound, then the following parameter can be set as well. Since this variable affects all statements that have more than 5 joins involved, it should be used with caution. This parameter can help to reduce time and resource usage during optimization. Although optimization time and resource use might be reduced, the risk of producing a less than optimal data access plan is increased.

DB2_REDUCED_OPTIMIZATION=5

Attention: This parameter should only be set when explicitly advised by IBM.

Database manager configuration parameters

Table 4 shows the database manager configuration parameters:

Table 4. Database manager configuration parameters
Parameter name    Value
QUERY_HEAP_SZ     32768
MAXAGENTS         1000
SHEAPTHRES        50000
HEALTH_MON        OFF
ASLHEAPSZ         60
RQRIOBLK          65535

As the instance user, enter the following commands to update the database manager configuration:

db2 "update dbm cfg using query_heap_sz 32768"
db2 "update dbm cfg using maxagents 1000"
db2 "update dbm cfg using sheapthres 50000"
db2 "update dbm cfg using health_mon off"
db2 "update dbm cfg using aslheapsz 60"
db2 "update dbm cfg using rqrioblk 65535"
db2 "update dbm cfg using federated no"

Note: If you need federated database support, you must not set FEDERATED to NO.

Database configuration parameters

Parameters for all databases

Table 5 shows the database configuration parameters you should set for all databases:

Table 5. Database configuration parameters for all databases
Parameter name     Value
APPLHEAPSZ         4096
APP_CTL_HEAP_SZ    1024
STMTHEAP           8192
DBHEAP             2400
LOCKLIST           1000
LOGFILSIZ          1000
LOGPRIMARY         12
LOGSECOND          20
LOGBUFSZ           128
AVG_APPLS          5
LOCKTIMEOUT        30
MAXLOCKS           70
MAXAPPLS           AUTOMATIC

As the instance user, enter the following commands to update the database configuration for all databases. Remember to change DBNAME to the actual database name:

db2 "update db cfg for DBNAME using applheapsz 4096"
db2 "update db cfg for DBNAME using app_ctl_heap_sz 1024"
db2 "update db cfg for DBNAME using stmtheap 8192"
db2 "update db cfg for DBNAME using dbheap 2400"
db2 "update db cfg for DBNAME using locklist 1000"
db2 "update db cfg for DBNAME using logfilsiz 1000"
db2 "update db cfg for DBNAME using logprimary 12"
db2 "update db cfg for DBNAME using logsecond 20"
db2 "update db cfg for DBNAME using logbufsz 128"
db2 "update db cfg for DBNAME using avg_appls 5"
db2 "update db cfg for DBNAME using locktimeout 30"
db2 "update db cfg for DBNAME using maxlocks 70"
db2 "update db cfg for DBNAME using maxappls automatic"

Parameters for the JCR database

Table 6 shows the database parameters you should set for the JCR database:

Table 6. Database parameters for the JCR database
Parameter name     Value
DBHEAP             4800
SORTHEAP           4096
APPLHEAPSZ         16384
APP_CTL_HEAP_SZ    20000
STMTHEAP           16384
NUM_IOCLEANERS     11
NUM_IOSERVERS      11

As the instance user, enter the following commands to update the database configuration specific to the JCR database. Change JCRDB to the actual database name:

db2 "update db cfg for JCRDB using dbheap 4800"
db2 "update db cfg for JCRDB using sortheap 4096"
db2 "update db cfg for JCRDB using applheapsz 16384"
db2 "update db cfg for JCRDB using app_ctl_heap_sz 20000"
db2 "update db cfg for JCRDB using stmtheap 16384"
db2 "update db cfg for JCRDB using num_iocleaners 11"
db2 "update db cfg for JCRDB using num_ioservers 11"

Database tuning

Database performance is very important for obtaining good performance from WCM. The maintenance tasks and practices mentioned here and in the IBM WebSphere Portal Version 6.0 Tuning Guide were found to be critical to the performance and correct operation of WebSphere Portal in our lab environment. Additional database maintenance and tuning may be needed in your production environments. For further information on DB2 administration, tuning, and monitoring refer to the DB2 Information Center (see Related topics).

Collating sequence

DB2 offers a choice of collating sequences when creating a database, and this choice can have a performance impact. In the scenario described in this report, using the UCA400_NO collation had virtually no effect on throughput but yielded much higher database CPU costs; in a separate investigative measurement, it had an obvious impact on some WCM authoring transactions. As a rule of thumb, weigh the need for locale-specific data ordering against the possible higher database CPU cost. I did not specify any COLLATE option when I created the databases.

Changing JCR tables to be stable

The DB2 configuration of the JCR schema marks most of the tables as having VOLATILE CARDINALITY. This is appropriate during the initial population, when many tables grow from zero or a few rows to many rows: the attribute tells the DB2 optimizer not to trust table statistics indicating that a table is very small, since for a small table the optimizer would normally choose a table scan over an index. Once the database has reached a steady state, you want the optimizer to choose the best access plan according to the catalog statistics (see the following section for recommendations on how to maintain these statistics). To accomplish this, run the following commands:

db2 -x -r "nonVolatile.db2" "select rtrim(concat('alter table ', concat(rtrim(tabSchema),
concat('.', concat(rtrim(tabname), ' not volatile'))))) from syscat.tables where type='T'
and volatile='C' and tabSchema='JCR'"

db2 -v -f "nonVolatile.db2"

Ongoing database maintenance

Runstats and reorg

Two of the database attributes which DB2 relies upon to perform optimally are the database catalog statistics and the physical organization of the data in the tables. Catalog statistics should be recomputed periodically during the life of the database, particularly after periods of heavy data modifications (inserts, updates, and deletes) such as a population phase. Due to the heavy contention of computing these statistics, it is best to perform this maintenance during off hours, periods of low demand, or when the portal is off-line. The DB2 runstats command is used to count and record the statistical details about tables, indexes and columns. I have used two techniques in our environment to compute these statistics. The form I recommend is:

db2 "runstats on table tableschema.tablename on all columns with distribution
on all columns and sampled detailed indexes all allow write access"

These options allow the optimizer to determine optimal access plans for complex SQL. A simpler, more convenient technique for computing catalog statistics is:

db2 reorgchk update statistics on table all

Not only does this command count and record some of the same catalog statistics, it also produces a report that can be reviewed to identify table organization issues. However, I have found instances where this produces insufficient information for the optimizer to select an efficient access plan for complex SQL, particularly for queries of the JCR database. If you want a technique that has the same convenience of the reorgchk command and provides the detailed statistics preferred by the optimizer, use this command:

db2 -x -r "runstats.db2" "select rtrim(concat('runstats on table ',
concat(rtrim(tabSchema), concat('.',concat(rtrim(tabname),
' on all columns with distribution on all columns and sampled detailed
indexes all allow write access'))))) from syscat.tables where type='T'"

db2 -v -f "runstats.db2"

Reorganizing all the tables would be overkill in a production environment. To determine which tables might benefit from reorganization, use the command:

db2 reorgchk current statistics on table all > "reorgchk.txt"

The tables that need reorganization are indicated by a * in at least one of the three columns next to the table name. For those tables that require reorganization, use the following command:

db2 reorg table tableschema.tablename

Monitoring

Snapshot monitoring captures the behavior of a database over a period of time and is useful for fine-tuning the system and diagnosing performance problems.

For snapshot monitoring to work, you need to activate the different monitors first. There are two ways to do this. You can either configure the database manager to activate monitoring at instance level, or you can turn on monitoring at a specific time for the current session.

To turn on default monitoring at the instance level (taking effect at instance activation), use the following command:

db2 update dbm cfg using DFT_MON on

where DFT_MON is one of the following values:

DFT_MON_BUFPOOL DFT_MON_LOCK DFT_MON_SORT DFT_MON_STMT DFT_MON_TABLE DFT_MON_TIMESTAMP DFT_MON_UOW

To turn on monitoring for the current session, use the command:

db2 update monitor switches using MON_SWITCH on

where MON_SWITCH is one of the following values:

Table 7. Monitor switches
Monitor                              Value
Buffer pool activity information     BUFFERPOOL
Lock information                     LOCK
Sorting information                  SORT
SQL statement information            STATEMENT
Table activity information           TABLE
Take timestamp information           TIMESTAMP
Unit of work information             UOW

Note: Because active monitors increase CPU utilization, activate monitors only when needed.

To get the currently activated monitor switches, use the following command:

db2 get monitor switches

To get a snapshot for a database, run the following command:

db2 get snapshot for all on DBNAME >snap.out

Buffer pool analysis

A buffer pool is memory used to cache table and index data pages as they are being read from disk or being modified. The buffer pool improves database system performance by allowing data to be accessed from memory instead of from disk. Because memory access is much faster than disk access, the less often the database manager needs to read from or write to a disk, the better the performance.
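The hit ratio reported by the stored procedures shown later in this section reduces to a single formula: the fraction of logical reads that did not require a physical read. A sketch of that calculation (the +1 in the denominator mirrors the SQL's guard against division by zero):

```python
def bp_hit_ratio(logical_reads, physical_reads):
    """Buffer pool hit ratio in percent.

    logical_reads  -- pool_data_l_reads + pool_index_l_reads from the snapshot
    physical_reads -- pool_data_p_reads + pool_index_p_reads from the snapshot
    """
    # Same formula as the stored procedures: (1 - physical/logical) * 100,
    # with +1 in the denominator to guard against division by zero.
    return (1 - physical_reads / (logical_reads + 1)) * 100.0

# 10,000 logical reads of which 300 went to disk -- a healthy ratio:
print(round(bp_hit_ratio(10000, 300), 1))  # → 97.0
```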

Since there is no SYSIBMADM.BP_HITRATIO table in DB2 v8, I've written two stored procedures for calculating the buffer pool hit ratio: bphr shows the bufferpool hit ratio of the actual database and bphr_all shows the bufferpool hit ratio of all active databases within the instance.

Invoke the stored procedure with this command:

db2 "call bphr()"
db2 "call bphr_all()"

They are installed (after connecting to a database) with the following command:

db2 -td@ -f bphr.db2
Listing 1. Code for the bufferpool hit ratio stored procedures
CREATE PROCEDURE bphr ()
SPECIFIC tessus_bphr LANGUAGE SQL DYNAMIC RESULT SETS 1

BEGIN
  DECLARE res CURSOR WITH RETURN FOR
WITH bp_snap (snapshot_timestamp, database, bufferpool, bp_hr, data_hr, 
              idx_hr, page_clean_ratio )
AS
(
  SELECT snapshot_timestamp, SUBSTR(db_name,1,16), SUBSTR(bp_name,1,32),
  CASE
    WHEN ((pool_data_p_reads > 0 OR pool_index_p_reads > 0) AND 
          (pool_data_l_reads > 0 OR pool_index_l_reads > 0))
      THEN
        DECIMAL( ((1-(double(pool_data_p_reads + pool_index_p_reads)/
        DOUBLE(pool_data_l_reads + pool_index_l_reads+1)) )*100.0),3,1 )
      ELSE
        NULL
      END,
      CAST( (CAST( pool_data_l_reads - pool_data_p_reads
        AS DOUBLE)*100.0)/(pool_data_l_reads+1) AS DECIMAL(3,1)),
      CAST( (CAST( pool_index_l_reads - pool_index_p_reads 
        AS DOUBLE)*100.0)/(pool_index_l_reads+1) AS DECIMAL(3,1)),
      CAST( (CAST( pool_async_data_writes + pool_async_index_writes
        AS DOUBLE)*100.0)/(pool_data_writes+pool_index_writes+1) 
        AS DECIMAL(3,1))
  FROM TABLE(snapshot_bp('',-1)) AS BP
  ORDER BY 2,3
)
SELECT snapshot_timestamp, database, bufferpool, bp_hr, data_hr, idx_hr 
FROM bp_snap;

  OPEN res;
END@

CREATE PROCEDURE bphr_all ()
SPECIFIC tessus_bphr_all LANGUAGE SQL DYNAMIC RESULT SETS 1

BEGIN
  DECLARE res CURSOR WITH RETURN FOR
WITH bp_snap (snapshot_timestamp, database, bufferpool, bp_hr, data_hr, 
              idx_hr, page_clean_ratio )
AS
(
  SELECT snapshot_timestamp, SUBSTR(db_name,1,16), SUBSTR(bp_name,1,32),
  CASE
    WHEN ((pool_data_p_reads > 0 OR pool_index_p_reads > 0) AND 
          (pool_data_l_reads > 0 OR pool_index_l_reads > 0))
      THEN
        DECIMAL( ((1-(double(pool_data_p_reads + pool_index_p_reads)/
        DOUBLE(pool_data_l_reads + pool_index_l_reads+1)) )*100.0),3,1 )
      ELSE
        NULL
      END,
      CAST( (CAST( pool_data_l_reads - pool_data_p_reads
        AS DOUBLE)*100.0)/(pool_data_l_reads+1) AS DECIMAL(3,1)),
      CAST( (CAST( pool_index_l_reads - pool_index_p_reads 
        AS DOUBLE)*100.0)/(pool_index_l_reads+1) AS DECIMAL(3,1)),
      CAST( (CAST( pool_async_data_writes + pool_async_index_writes
        AS DOUBLE)*100.0)/(pool_data_writes+pool_index_writes+1) 
        AS DECIMAL(3,1))
  FROM TABLE(snapshot_bp(CAST(NULL AS VARCHAR(128)),-1)) AS BP
  ORDER BY 2,3
)
SELECT snapshot_timestamp, database, bufferpool, bp_hr, data_hr, idx_hr 
FROM bp_snap;

  OPEN res;
END@

In DB2 9, you can either use the two stored procedures, or the following SQL statement to get the buffer pool hit ratio:

db2 "select snapshot_timestamp, substr(db_name,1,10) as dbname,
substr(bp_name,1,18) as bufferpool, total_hit_ratio_percent as total,
data_hit_ratio_percent as data, index_hit_ratio_percent as index
from sysibmadm.bp_hitratio"

An ideal buffer pool hit ratio is above 96%. If the ratio is lower, it is worthwhile to increase the size of the buffer pool and see whether the hit ratio increases. If it remains low, you may need to redesign the logical layout of your table spaces and buffer pools.

The size of the buffer pool can be changed online by the following command:

db2 alter bufferpool BPNAME immediate size NUMBER_OF_PAGES

Directory server tuning

My measurements used IBM Tivoli Directory Server (ITDS) version 5.2 as the directory server. The configuration details are the same as the AIX ITDS V5.2 directory server configuration specified in the IBM WebSphere Portal Version 6.0 Tuning Guide.

WebSphere Portal Service properties

WebSphere Portal has a number of configurable services; each service has several parameters available to it. This section describes which services I tuned differently from those described in the IBM WebSphere Portal Version 6.0 Tuning Guide.

The only service I tuned differently was the Cache Manager Service. For this service, I accepted the defaults shipped with WebSphere Portal except for the changes listed in the following table:

Table 8. Cache Manager Service parameters
Cache name                                                                     AIX POWER4 WCM rendering scenario
com.ibm.wps.ac.ExplicitEntitlementsCache.ICM_CONTENT.size                      2000
com.ibm.wps.datastore.services.Identification.SerializedOidString.cache.size   5000
com.ibm.wps.model.content.impl.ResourceCache.lifetime                          14400

Summary

This tutorial showed you the steps involved in tuning your WebSphere Portal WCM and DB2 environments, and the parts of the environment that require special consideration when tuning. It covered the registry variables and the database manager and database configuration parameters that should be set to specific values. Finally, you saw how to maintain the DB2 system to keep it performing well as the system grows.


Related topics

Publish date: 21 February 2008