There is already a good amount of information in the DB2 Information Center. This post is an attempt to consolidate those good links and pieces of information in one place :-)
For DB2 Information Center enthusiasts, here is how you can find information about performance tuning in the DB2 v10.5 Information Center.
Searching for 'Performance Tuning' in the search box will give you multiple hits. It is a good idea to go through some of the search results with the headings:
Establishing a performance tuning strategy
Best practices for transactional environments: performance and tuning (this topic itself has pointers to two developerWorks articles):
Tuning and Monitoring Database System Performance - https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/Wc9a068d7f6a6_4434_aece_0d297ea80ab1/page/Tuning%20and%20Monitoring%20Database%20System%20Performance
Writing and Tuning Queries for Optimal Performance - https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/Wc9a068d7f6a6_4434_aece_0d297ea80ab1/page/Writing%20and%20Tuning%20Queries%20for%20Optimal%20Performance
For DB2 pureScale-specific tuning, here are the webcast and slides for reference:
DB2 pureScale performance and monitoring - https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/Wc9a068d7f6a6_4434_aece_0d297ea80ab1/page/DB2%20pureScale%20performance%20and%20monitoring
DB2 pureScale:
For a DB2 pureScale implementation, it is a good idea to follow the 'DB2 pureScale performance and monitoring' webcast linked above. Here are some of the configuration parameters that need tuning, taken from that webcast:
Set CF_GBP_SZ to 35-40% of (the sum of LBPs across members)
Set CF_DB_MEM_SZ to 25% bigger than CF_GBP_SZ
Set LOCKLIST to 6% or more of the LBP size (per member)
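As a quick sanity check, the sizing rules above can be worked through with simple shell arithmetic. This is only a sketch; the member count and per-member LBP size below are made-up example values, and the results are 4KB pages:

```shell
# Hypothetical cluster: 4 members, each with a 700,000-page local buffer pool (LBP)
MEMBERS=4
LBP_PER_MEMBER=700000

LBP_TOTAL=$((MEMBERS * LBP_PER_MEMBER))     # sum of LBPs across members
CF_GBP_SZ=$((LBP_TOTAL * 35 / 100))         # 35% of summed LBPs (rule says 35-40%)
CF_DB_MEM_SZ=$((CF_GBP_SZ * 125 / 100))     # 25% bigger than CF_GBP_SZ
LOCKLIST=$((LBP_PER_MEMBER * 6 / 100))      # 6% of the per-member LBP size

echo "CF_GBP_SZ=$CF_GBP_SZ CF_DB_MEM_SZ=$CF_DB_MEM_SZ LOCKLIST=$LOCKLIST"
```

The computed numbers are starting points, not final values; the webcast above is the authoritative guidance.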
Use the UPDATE DATABASE CONFIGURATION command to set these values.
cf_gbp_sz - Group buffer pool configuration parameter
This parameter determines the amount of memory used by the cluster caching facility (CF) for the group buffer pool (GBP) for this database.
For example, if CF_GBP_SZ is 1000000, then set CF_DB_MEM_SZ to 1250000:
db2 "connect to SAMPLE"
db2 "update database configuration for SAMPLE using CF_DB_MEM_SZ 1250000"
You can always view all of the database configuration parameters by running the following:
db2 "GET DATABASE CFG FOR SAMPLE SHOW DETAIL"
A snapshot of the output:
Database Configuration for Database sample
Database configuration release level = 0x1000
Database release level = 0x1000
Database territory = US
Database code page = 1208
Database code set = UTF-8
Database country/region code = 1
Database collating sequence = IDENTITY
Alternate collating sequence (ALT_COLLATE) =
Number compatibility = OFF
Varchar2 compatibility = OFF
Date compatibility = OFF
Database page size = 8192
Statement concentrator (STMT_CONC) = OFF
Discovery support for this database (DISCOVER_DB) = ENABLE
Restrict access = NO
Default query optimization class (DFT_QUERYOPT) = 5
Degree of parallelism (DFT_DEGREE) = 1
Auto-Revalidation (AUTO_REVAL) = DEFERRED
CF Resource Configuration:
CF database memory size (4KB) (CF_DB_MEM_SZ) = AUTOMATIC(126464)
Group buffer pool size (4KB) (CF_GBP_SZ) = AUTOMATIC(78336)
Directory-Entry-to-Data-Area ratio (CF_DEDA_RATIO) = AUTOMATIC(4)
Global lock memory size (4KB) (CF_LOCK_SZ) = AUTOMATIC(19200)
Shared communication area size (4KB) (CF_SCA_SZ) = AUTOMATIC(26112)
Database-specific tuning:
For database-specific tuning, refer to the 'Tuning and Monitoring Database System Performance' link given above. Some of the points highlighted there:
The AUTOCONFIGURE command can provide real improvements over the default configuration settings, and is recommended as a way to obtain initial configuration values. Some additional fine-tuning of the recommendations generated by the AUTOCONFIGURE command is often required, based on the characteristics of the system.
Invoke autoconfigure on a database through the ADMIN_CMD stored procedure.
CALL SYSPROC.ADMIN_CMD( 'AUTOCONFIGURE APPLY NONE' )
The following is an example of the result set returned by the command.
LEVEL  NAME              VALUE   RECOMMENDED_VALUE  DATATYPE
-----  ----------------  ------  -----------------  --------
DBM    ASLHEAPSZ         15      15                 BIGINT
DBM    FCM_NUM_BUFFERS   512     512                BIGINT
DB     APP_CTL_HEAP_SZ   128     144                INTEGER
DB     APPGROUP_MEM_SZ   20000   14559              BIGINT
BP     IBMDEFAULTBP      1000    164182             BIGINT
Here are some suggestions for using the AUTOCONFIGURE command:
1) Even though the AUTOCONFIGURE command is run automatically at database creation time since DB2 v9.1, it is still a good idea to run the AUTOCONFIGURE command explicitly. This is because you then have the ability to specify keyword/value pairs that help customize the results for your system.
2) Run (or re-run) the AUTOCONFIGURE command after the database is populated. This provides the tool with more information about the nature of the database. Ideally, ‘populated’ means with the amount of active data that you use (which affects buffer pool size calculations, for example). Significantly too much or too little data makes these calculations less accurate.
3) Try different values for important AUTOCONFIGURE command keywords, such as mem_percent, tpm, and num_stmts to get an idea of which, and to what degree, configuration values are affected by these changes.
4) If you are experimenting with different keywords and values, use the apply none option. This gives you a chance to compare the recommendations with the current settings.
5) Specify values for all keywords, because the defaults might not suit your system. For example, mem_percent defaults to 25%, which is too low for a dedicated DB2 server; 85% is the recommended value in this case.
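Putting suggestions 3 through 5 together, an AUTOCONFIGURE invocation with explicit keyword/value pairs might look like this. This is only a sketch: the keyword values shown are illustrative examples, not recommendations for any particular system.

```shell
db2 "connect to SAMPLE"
# APPLY NONE reports recommendations without changing any configuration,
# so the output can be compared against current settings
db2 "AUTOCONFIGURE USING mem_percent 85 tpm 300 num_stmts 10 is_populated yes APPLY NONE"
db2 "connect reset"
```

Re-running with different mem_percent, tpm, or num_stmts values shows which recommendations are sensitive to which inputs.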
For most systems, automatic settings provide better performance than all but the most carefully hand-tuned systems. This is particularly due to the DB2 self-tuning memory manager (STMM), which dynamically tunes total database memory allocation as well as four of the main shared memory consumers in a DB2 system: the buffer pools, the lock list, the package cache, and the sort heap.
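To let STMM manage those memory consumers, the corresponding parameters are set to AUTOMATIC. A sketch, assuming a database named SAMPLE:

```shell
db2 "connect to SAMPLE"
# Enable the self-tuning memory manager for this database
db2 "update database configuration using SELF_TUNING_MEM ON"
# Hand the main shared memory consumers over to STMM
db2 "update database configuration using DATABASE_MEMORY AUTOMATIC LOCKLIST AUTOMATIC PCKCACHESZ AUTOMATIC SORTHEAP AUTOMATIC SHEAPTHRES_SHR AUTOMATIC"
# Buffer pools are tuned automatically when sized AUTOMATIC
db2 "alter bufferpool IBMDEFAULTBP size automatic"
db2 "connect reset"
```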
Explicit configuration settings
Some parameters do not have automatic settings, and are not set by the configuration advisor. These need to be dealt with explicitly. We only consider parameters that have performance implications.
logpath or newlogpath determines the location of the transaction logs. Even the configuration advisor cannot decide for you where the logs should go. The most important point, from a performance perspective, is that they should not share disk devices with other DB2 objects, such as table spaces, or be left in the default location under the database path. Where possible, transaction logs should be placed on dedicated storage with sufficient throughput capacity to ensure that a bottleneck is not created.
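Moving the logs to dedicated storage is done through newlogpath; a sketch (the path here is a placeholder):

```shell
# The new log path takes effect once all applications have disconnected
# from the database and it is reactivated
db2 "update database configuration for SAMPLE using NEWLOGPATH /db2logs/sample"
```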
logbufsz determines the size of the transaction logger internal buffer, in 4KB pages. The default value of 256 pages is too small for good performance in many production environments. The configuration advisor always increases it, but possibly not enough, depending on the input parameters. A value of around 1024 pages is a good general range, and represents only a very small total amount of memory in the overall scheme of a database server.
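Setting the log buffer to the suggested general range looks like this (1024 is the example value from the text, in 4KB pages, i.e. a 4MB buffer):

```shell
db2 "update database configuration for SAMPLE using LOGBUFSZ 1024"
```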
buffpage determines the number of pages allocated to each buffer pool that is defined with a size of -1. The best practice is to ignore buffpage, and either explicitly set the size of buffer pools that have an entry in SYSCAT.BUFFERPOOLS, or let the STMM tune buffer pool sizes automatically.
logfilsiz determines the size of each transaction log file on disk. The log files should be sized so that log switches do not occur more frequently than every few minutes at their fastest. The rate of change of active log numbers can be obtained by querying MON_GET_TRANSACTION_LOG. If a larger logfilsiz causes an unacceptably long time between log switches during quieter times, you may choose to schedule some manual switches to trigger archiving.
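The active log numbers can be sampled over time with the MON_GET_TRANSACTION_LOG table function; comparing CURRENT_ACTIVE_LOG across samples shows how often log switches occur. A sketch, assuming these column names are available in your DB2 level:

```shell
db2 "connect to SAMPLE"
# -1 means the currently connected member
db2 "SELECT MEMBER, FIRST_ACTIVE_LOG, CURRENT_ACTIVE_LOG, LAST_ACTIVE_LOG FROM TABLE(MON_GET_TRANSACTION_LOG(-1)) AS t"
db2 "connect reset"
```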
diagpath determines the location of various useful DB2 diagnostic files. It generally has little impact on performance, except possibly in partitioned or clustered database environments. The default location of diagpath on all partitions is typically on a shared, NFS or GPFS-mounted path. The best practice is to override diagpath to a local, non-shared directory for each partition. This prevents all partitions from trying to update the same file with diagnostic messages. Instead, these are kept local to each partition, and contention is greatly reduced.
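Since diagpath is an instance-level (database manager) configuration parameter, overriding it is done per instance; a sketch (the directory is a placeholder and must exist locally on each host):

```shell
# Point diagnostics at local, non-shared storage to avoid cross-partition contention
db2 "update database manager configuration using DIAGPATH /local/db2/diag"
```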
Hope this helps!