Tuning a database can be a complex task, particularly when dealing with data at scale.
The information provided here is based on internal lab experiments. It assumes correctly sized hardware is used, as tuning can only do so much in a system with resource utilization bottlenecks.
Db2 tuning reference information
The Db2 documentation has a section on Performance Tuning:
https://www.ibm.com/docs/en/db2/11.5?topic=fundamentals-performance-tuning
Limiting database instance memory
On smaller monolithic systems, and even in larger scale deployments, it is good practice to set a limit on how much memory your database instance will consume. Operations Analytics Predictive Insights ships with Db2 10.5 Workgroup Server Edition (WSE 10.5). Db2 WSE 10.5 allows up to 128GB of memory to be used by the application when self-tuning is enabled.
To limit the amount of memory your database instance will consume:
- As the database admin user, db2inst1 by default, run the following command to list DATABASE_MEMORY settings:
db2 connect to SCAPIDB
db2 get db cfg for SCAPIDB show detail | grep DATABASE_MEMORY
Where SCAPIDB is the name of the Operations Analytics Predictive Insights database.
- As the database admin user, db2inst1 by default, run the following command to modify the DATABASE_MEMORY settings:
db2 connect to SCAPIDB
db2 update db cfg for SCAPIDB using DATABASE_MEMORY 3145728
Where SCAPIDB is the name of the Operations Analytics Predictive Insights database.
Note: The Operations Analytics Predictive Insights database uses a 4KB page size, so a value of 3145728 pages * 4KB equates to a 12GB limit.
It might take some trial and error to optimise the DATABASE_MEMORY setting. If you are experiencing memory issues, try adjusting the DATABASE_MEMORY setting and set the INSTANCE_MEMORY value to be slightly higher than the DATABASE_MEMORY value.
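The page arithmetic in the note above can be sketched as a small calculation. The 12GB target and 4KB page size come from this document; the helper itself is only illustrative:

```shell
#!/bin/sh
# Convert a desired DATABASE_MEMORY limit in GB into 4KB pages.
# 1GB = 1024 * 1024 KB, and each page is 4KB.
limit_gb=12
page_kb=4
pages=$((limit_gb * 1024 * 1024 / page_kb))
echo "DATABASE_MEMORY pages for ${limit_gb}GB: $pages"   # 3145728
```

Plug the resulting page count into the db2 update db cfg command shown above.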
Ensure automatic memory is configured
Enable or set the following database configuration parameter settings to Automatic:
SELF_TUNING_MEM on
LOCKLIST AUTOMATIC
MAXLOCKS AUTOMATIC
PCKCACHESZ AUTOMATIC
SHEAPTHRES_SHR AUTOMATIC
SORTHEAP AUTOMATIC
To configure these parameter settings:
- As the database admin user, db2inst1 by default, run the following commands to list all database instance parameters:
db2 connect to SCAPIDB
db2 get db cfg for SCAPIDB show detail
Where SCAPIDB is the name of the Operations Analytics Predictive Insights database.
- As the database admin user, db2inst1 by default, run the following commands to set your database instance parameters:
db2 update db cfg for SCAPIDB using SELF_TUNING_MEM on
db2 update db cfg for SCAPIDB using LOCKLIST AUTOMATIC
db2 update db cfg for SCAPIDB using MAXLOCKS AUTOMATIC
db2 update db cfg for SCAPIDB using PCKCACHESZ AUTOMATIC
db2 update db cfg for SCAPIDB using SHEAPTHRES_SHR AUTOMATIC
db2 update db cfg for SCAPIDB using SORTHEAP AUTOMATIC
Where SCAPIDB is the name of the Operations Analytics Predictive Insights database.
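The updates above can also be generated with a short script. This is just a convenience sketch: the database name SCAPIDB and the parameter list are taken from the steps above, and the script prints the commands for review rather than executing them.

```shell
#!/bin/sh
# Print the self-tuning configuration commands for a given database name
# (defaults to SCAPIDB). Review the output, then run the lines as the
# database admin user after connecting to the database.
DB=${1:-SCAPIDB}
CMDS=$(for setting in "SELF_TUNING_MEM on" "LOCKLIST AUTOMATIC" \
    "MAXLOCKS AUTOMATIC" "PCKCACHESZ AUTOMATIC" \
    "SHEAPTHRES_SHR AUTOMATIC" "SORTHEAP AUTOMATIC"; do
  echo "db2 update db cfg for $DB using $setting"
done)
echo "$CMDS"
```

Printing instead of executing lets you confirm the database name and parameter list before applying any changes.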
Ensure your Db2 connection pool is sized correctly
To ensure your database connection pool is correctly sized, as the database admin user, db2inst1 by default, run the following commands to increase the maximum number of concurrent applications (MAXAPPLS):
db2 connect to SCAPIDB
db2 update db cfg for SCAPIDB using MAXAPPLS 100
Note: A setting of 100 is the minimum recommended value. You might need to increase it for your deployment.
Ensure your buffer pool memory is set to automatic
To ensure your buffer pool is correctly configured, do the following:
- As the database admin user, db2inst1 by default, run the following command to list all your buffer pool settings:
db2 connect to SCAPIDB
db2 "select bpname,pagesize,npages from syscat.bufferpools"
Note: If the NPAGES column is set to -2, the buffer pool size is managed automatically.
- As the database admin user, db2inst1 by default, run the following commands to change your buffer pool settings to automatic:
db2 connect to SCAPIDB
db2 alter bufferpool IBMDEFAULTBP size AUTOMATIC
Where SCAPIDB is the name of the Operations Analytics Predictive Insights database and IBMDEFAULTBP is the buffer pool name (BPNAME).
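As a quick way to spot buffer pools that are not self-tuning, you can filter the query output from this section on the NPAGES column. The two sample rows below are illustrative stand-ins for real syscat.bufferpools output, and MYPOOL is a hypothetical fixed-size pool:

```shell
#!/bin/sh
# Flag buffer pools whose NPAGES is not -2 (i.e. not AUTOMATIC).
# Columns mirror the earlier query: bpname, pagesize, npages.
# The printf lines below are sample data, not live db2 output.
fixed=$(printf 'IBMDEFAULTBP 4096 -2\nMYPOOL 4096 1000\n' |
  awk '$3 != -2 { print $1 }')
echo "Fixed-size buffer pools: $fixed"   # MYPOOL
```

Any pool listed this way is a candidate for the alter bufferpool ... size AUTOMATIC command above.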
Db2 Transaction logs
The following are recommendations in relation to transaction logs:
- Allocate transaction logs their own file system and disk array.
- Do not place your transaction logs on a shared file system with your database instance files.
- Allow a minimum of 10GB of free space for your transaction logs.
Sizing Transaction logs
You need to size transaction logs to avoid errors and possible performance impacts.
To display and change transaction log settings, as the database admin user, db2inst1 by default, do the following:
- To display the current settings:
db2 connect to SCAPIDB
db2 get db cfg for SCAPIDB | egrep "LOGBUFSZ|LOGFILSIZ|LOGPRIMARY|LOGSECOND"
- Change the transaction log settings as follows:
db2 update db cfg for SCAPIDB using logbufsz 256
db2 update db cfg for SCAPIDB using logprimary 13
db2 update db cfg for SCAPIDB using logsecond 50
db2 update db cfg for SCAPIDB using logfilsiz 40000
Note: The above numbers are recommended as a starting point. Periodic system audits and analysis are recommended to ensure optimal performance and identify any future growth needs.
- Run the following commands to restart the database:
db2 connect to SCAPIDB
db2 force applications all
db2stop
db2start
Note: It can take a few minutes for the transaction logs to appear in the NEWLOGPATH location.
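To sanity-check the starting values above against the recommendation of at least 10GB of free space for transaction logs, you can compute the maximum on-disk log footprint. The formula, (LOGPRIMARY + LOGSECOND) * LOGFILSIZ pages at 4KB per page, uses only the numbers from this section:

```shell
#!/bin/sh
# Maximum transaction log footprint for the recommended starting values.
logprimary=13    # primary log files
logsecond=50     # secondary log files
logfilsiz=40000  # log file size in 4KB pages
page_bytes=4096
total_bytes=$(( (logprimary + logsecond) * logfilsiz * page_bytes ))
total_mib=$(( total_bytes / 1024 / 1024 ))
echo "Max log space: ${total_mib}MiB"   # 9843MiB, just under the 10GB minimum
```

If you raise any of these parameters later, rerun the calculation and resize the log file system accordingly.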
Moving Transaction logs
To move transaction logs to a new location, do the following:
- As the database admin user, db2inst1 by default, run the following commands to identify the current location of the transaction logs:
db2 connect to SCAPIDB
db2 get db cfg for SCAPIDB | egrep "NEWLOGPATH"
- To change the "NEWLOGPATH" location, run the following command:
db2 connect to SCAPIDB
db2 update database configuration for SCAPIDB using NEWLOGPATH /%TRANS_LOGS
Where %TRANS_LOGS is the location to which you want to move the transaction logs.
- To restart the database, run the following commands:
db2 connect to SCAPIDB
db2 force applications all
db2stop
db2start
- To verify the changes, run the following commands:
db2 connect to SCAPIDB
db2 get db cfg for SCAPIDB | grep "Path to log files"
Where SCAPIDB is the name of the Operations Analytics Predictive Insights database.
Tablespaces
The Operations Analytics Predictive Insights database component deploys four tablespaces: TSAA_SMALL, TSAA_LARGE, TSAA_HUGE, and TSAA_RDF. The following is recommended for production deployments:
- Spread tablespaces across multiple disk arrays for optimal performance.
- Use Disk RAID level 10 for all database file systems.
- Use a disk storage group for database file systems.
Modifying storage group settings
To set up and configure a disk storage group when using multiple file systems:
- As the database admin user, db2inst1 by default, run the following command to display the current table spaces and the number of containers:
db2 connect to SCAPIDB
db2 list tablespaces show detail | egrep "Name|Tablespace ID|Number of containers"
- To list details of the current storage group configuration, run the following command:
db2 list tablespace containers for 5 show detail
Where 5 is the "Tablespace ID" from the previous step.
- To add file systems to your storage group, run the following command:
db2 "ALTER STOGROUP IBMSTOGROUP ADD '/home/db2data1','/home/db2data2','/home/db2data3'"
Where /home/db2data1-3 are the file systems you want to add to your storage group.
- To rebalance a tablespace that uses the storage group, run the following command:
db2 ALTER TABLESPACE %tablespace_name REBALANCE
Where %tablespace_name is the tablespace to which you added a file system.
- To restart the database, run the following commands:
db2 connect to SCAPIDB
db2 force applications all
db2stop
db2start
Note: If your database already contains data, the rebalance takes some time to complete after the restart. You can proceed as normal, but performance might be impacted while the rebalance is in progress.
Deleting a storage group entry
To delete a storage group entry:
- Run the following command:
db2 "ALTER STOGROUP IBMSTOGROUP DROP '/home/db2inst1'"
Where /home/db2inst1 is the file system you want to remove from your storage group.
- To restart the database, run the following commands:
db2 connect to SCAPIDB
db2 force applications all
db2stop
db2start