Question & Answer
Answers to frequently asked questions on DB2 LUW memory usage and configuration
Please see the following FAQs for content specific to those areas:
DB2 STMM FAQ - for questions specific to DB2's Self Tuning Memory Manager feature.
Q1: Why is instance memory usage - as displayed by db2pd -dbptnmem - so different from what the operating system reports for DB2's memory usage?
Instance memory is used to track the configured sizes of the underlying DB2 memory areas and so limit the combined overall potential memory footprint. Typically the configured sizes of those memory areas are not fully "committed" or "referenced", and so will not contribute to DB2's system memory footprint. For more details see the following technote: http://www-01.ibm.com/support/docview.wss?uid=swg21984813
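As a way to compare the two views, one might run the following (a sketch; it assumes a running instance and that the engine process is named db2sysc):

```shell
# DB2's view: instance memory reserved/committed per database partition
db2pd -dbptnmem

# OS view: resident set size (KB) of the DB2 engine process,
# which reflects actual committed system memory
ps -o rss= -p "$(pgrep -x db2sysc)"
```

The db2pd figures reflect configured allowances, while the RSS figure reflects memory actually backed by RAM, which is why the two can differ substantially.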
Q2: If INSTANCE_MEMORY is set to AUTOMATIC, will DB2 sense available system memory and avoid over-committing the system?
Only if the Self Tuning Memory Manager (STMM) is enabled. The AUTOMATIC Instance Memory setting doesn't do much by itself - no memory limit is imposed except through licensing. What the AUTOMATIC setting does do is indicate to STMM that it should sense "available system memory" and tune the database configuration parameters under its control accordingly.
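The combination described above can be enabled with ordinary configuration commands (a sketch; SAMPLE is a placeholder database name):

```shell
# Let STMM sense available system memory and tune accordingly
db2 update dbm cfg using INSTANCE_MEMORY AUTOMATIC
db2 update db cfg for SAMPLE using SELF_TUNING_MEM ON
db2 update db cfg for SAMPLE using DATABASE_MEMORY AUTOMATIC
```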
Q3: Why is instance memory usage higher than the instance memory limit when INSTANCE_MEMORY is set to AUTOMATIC?
When INSTANCE_MEMORY is set to AUTOMATIC and there is no memory limit imposed by a product license, the calculated Instance Memory setting has no effect, i.e. it is not enforced as a limit and does not guide memory usage. In order to understand what is driving instance memory usage, the highest consumers must be examined. Instance memory consumption is driven by both configuration (resulting in allowances which may not be used/committed) and real memory requirements (committed system memory). These can be understood by examining the actual configuration (database, instance, and bufferpool configurations) and available memory monitoring tools (MON_GET_MEMORY_SET/POOL, db2pd -inst -alldbs -dbptnmem -memsets -mempools).
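The highest consumers can be surveyed with the tools mentioned above (a sketch; assumes a running instance and an active database connection for the SQL query):

```shell
# Memory sets and pools across the instance and all active databases
db2pd -inst -alldbs -dbptnmem -memsets -mempools

# Largest memory pools via the monitoring table function
db2 "SELECT memory_set_type, memory_pool_type, memory_pool_used
       FROM TABLE(MON_GET_MEMORY_POOL(NULL, NULL, -2))
      ORDER BY memory_pool_used DESC
      FETCH FIRST 10 ROWS ONLY"
```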
Q4: Should I configure DB2 to use large or huge pages for database memory (on any platform)?
Large or huge pages - the specifics depend on the OS/hardware platform - are supported by DB2 for the database shared memory area. However, there are some guidelines and restrictions:
- Since DB2 manages memory in 64K units, the decommitting of memory will no longer be available. In turn, STMM will no longer tune overall database_memory, as it needs the capability to both increase and decrease memory usage to behave responsibly and robustly. STMM will still tune the inner STMM-capable areas (bufferpool, sort, etc.).
- It is recommended to use a fixed database_memory size so that the huge-page memory footprint is well established and consistent. In turn, the database memory size must be adequate to accommodate the internally configured areas plus overflow, and the size (and corresponding huge pages configuration) should not be so large as to overly constrain remaining available system memory.
See the DB2 documentation dealing with DATABASE_MEMORY and DB2_LARGE_PAGE_MEM for more information: (temporarily http://www.ibm.com/support/docview.wss?uid=swg21986164)
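Putting the two guidelines together, a configuration might look like the following (a sketch; assumes Linux with a huge page pool already reserved via vm.nr_hugepages, SAMPLE is a placeholder database name, and the DATABASE_MEMORY value is purely illustrative):

```shell
# Enable large page support for the database shared memory area
db2set DB2_LARGE_PAGE_MEM=DB

# Use a fixed database memory size (in 4K pages) so the huge-page
# footprint is predictable; 1000000 is an illustrative value only
db2 update db cfg for SAMPLE using DATABASE_MEMORY 1000000
```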
Q5: Should I disable Transparent Huge Pages (THP) on Linux?
Transparent huge pages, or THP, is a performance feature enabled by default on RedHat Linux. However, many application vendors recommend disabling this feature due to performance-related issues they have encountered. To date, only rare cases of performance degradation on systems running DB2 LUW have been diagnosed as attributable to THP, so there is currently no recommendation to disable THP on DB2 LUW servers. At the same time, most of a DB2 server's memory is allocated as shared memory, whereas THP applies solely to private/anonymous memory, so there is little to be gained from enabling THP. As such, there are no concerns with disabling THP to comply with recommendations from other vendors or company IT policies. Also, if a system is encountering erratic and severe overall degradation, especially involving high system CPU, disabling THP is worth attempting, as an accurate diagnosis is an involved process.
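On a RedHat-style system, THP can be checked and disabled as follows (a sketch; the sysfs path is standard on recent kernels, and disabling requires root):

```shell
# Check the current THP mode; the active value is shown in brackets,
# e.g. [always] madvise never
cat /sys/kernel/mm/transparent_hugepage/enabled

# Disable THP until the next reboot (as root); to persist, add
# transparent_hugepage=never to the kernel boot parameters
echo never > /sys/kernel/mm/transparent_hugepage/enabled
```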
Q6: I see that DB2 version 10.5 has changed the default for DB_MEM_THRESH to 100. Should I be worried about DB2 over-committing memory on my system?
A DB_MEM_THRESH setting of 100 means that DB2 will retain all committed memory *within* the database memory size for reuse. This setting generally improves performance, reduces fragmentation of DB2's database memory area (potentially avoiding even higher peak requirements), and improves system stability (less volatility in DB2's memory demands reduces pressures on a system's virtual memory manager). STMM will still keep committed memory in check through dynamic database configuration changes. Any configuration decrease will trigger decommitment of memory as needed in order to keep the memory footprint within the newly established database memory size.
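The setting can be inspected and changed with ordinary configuration commands (a sketch; SAMPLE is a placeholder database name, and the value 10 is shown only to illustrate the syntax, not as a recommendation):

```shell
# Inspect the current DB_MEM_THRESH setting
db2 get db cfg for SAMPLE | grep -i DB_MEM_THRESH

# Change it (illustrative value; 100 is the 10.5 default)
db2 update db cfg for SAMPLE using DB_MEM_THRESH 10
```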
Q7: I notice that the virtual size of the db2sysc process seems excessively large, even larger than the size of RAM. Should I be concerned?
The virtual size of a process is quite different from its actual system memory footprint (active/referenced/committed memory), so typically this is not a cause for concern. The virtual size of a process is also not the same as virtual system memory, which typically includes paging space or swap. The operating system reserves address space in a process for many purposes that will never use the entire range, and those unused ranges typically do not require backing system memory (RAM or paging space/swap). For example, a DB2 instance will have a 4MB (by default) stack range reserved for each db2 agent thread, but only a fraction of this space is ever used/backed by system memory; this is common in multi-threaded programs. Virtual size statistics show up in a variety of system reporting tools. This doesn't mean that a very large virtual size is never associated with a problem; it's just that resident system memory (RSS) and backing paging space/swap consumption are the real indicators of system resource usage and the areas to focus on.
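The difference can be seen directly by listing both values for the engine process (a sketch; assumes a running instance named process db2sysc):

```shell
# VSZ = reserved address space; RSS = memory actually backed by RAM.
# RSS, not VSZ, is the figure that matters for system resource usage.
ps -o pid,vsz,rss,comm -p "$(pgrep -x db2sysc)"
```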
Q8: What should I set vm.swappiness to on Linux?
The recommendation for DB2 LUW is to set vm.swappiness to 5; anything between 1 and 10 is acceptable. The former recommendation of 0 is acceptable only for Linux kernels before 2.6.32, where the behaviour of this setting changed (on later kernels, a value of 0 guides the Linux virtual memory manager to kill applications before any swapping can occur). The default setting is 60, which is inappropriate for database servers: this higher setting encourages the Linux virtual memory manager to page/swap out application memory in order to preserve a larger file cache, which generally degrades database server performance. A lower setting guides the Linux virtual memory manager to keep application memory (such as DB2 private/shared memory) in RAM, and to more aggressively release/reclaim file cache when system memory becomes constrained.
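The recommended value can be applied as follows (a sketch; requires root, and the sysctl.conf path may vary by distribution):

```shell
# Apply the recommended value immediately
sysctl -w vm.swappiness=5

# Persist the setting across reboots
echo 'vm.swappiness = 5' >> /etc/sysctl.conf
```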
Q9: Why is DB2 getting killed by the Linux OOM killer?
A common cause for the DB2 database server process (db2sysc) to be killed by the Linux OOM killer is constrained system memory combined with a vm.swappiness setting of 0. Setting vm.swappiness to the recommended value of 5 will avoid this problem by allowing swapping to occur when system memory management is under pressure (generally due to some combination of high application memory and file cache requirements). See the previous FAQ, "What should I set vm.swappiness to on Linux?"
Q10: What does Cached represent in the output of the Linux "free" command, and how does this tie in with DB2?
The "free" command's Cached value represents memory in use by the Linux page cache. The page cache is used for shared memory as well as file cache, so "Cached" is an aggregate amount representing very different types of memory usage. On its own, the Cached value is not very useful, and it is best to break it down into its components by referring to /proc/meminfo: Shmem represents shared memory, and Active(file)+Inactive(file) represent file cache. Shmem maps to DB2's shared memory allocations, so DB2 directly controls this portion. DB2 may also influence the makeup of the file cache portion through its I/O activity, but it has no control over its management. In general, all operating systems use up most free memory to back clean file cache - clean meaning the file contents in memory are in sync with what is on disk. In this case the operating system can reclaim/re-purpose clean file cache nearly as cheaply as assigning completely free memory, and it is considered part of "Available" memory.
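The breakdown described above can be read directly from /proc/meminfo (a sketch; field names assume a reasonably recent Linux kernel):

```shell
# Break the "free" Cached value into its components:
# Shmem = shared memory (DB2's shared allocations land here),
# Active(file)/Inactive(file) = file cache
grep -E '^(Cached|Shmem|Active\(file\)|Inactive\(file\)):' /proc/meminfo
```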
Q11: If I add RAM to my server/host, do I need to recycle DB2 for it to detect that there is additional memory available?
In general the answer is "No". The only common DB2 behaviour that is influenced by system memory availability is the configuration tuning performed by the STMM (self-tuning memory manager). STMM will tune based on the state of system memory only if both INSTANCE_MEMORY and DATABASE_MEMORY are set to AUTOMATIC. STMM queries the operating system to determine available system memory on an ongoing basis, so no recycle is needed when RAM is increased. However, if INSTANCE_MEMORY is set to a percentage of RAM (the resulting value of which may also guide STMM), the effective instance memory limit will not automatically be updated when RAM is increased. The effective instance memory limit will continue to be based on the system memory configuration at the time of db2start. It is possible to manually update the effective instance memory limit (based on the existing percentage setting) without a recycle by dynamically modifying INSTANCE_MEMORY to the same setting (db2 attach to <instance>; db2 update dbm cfg using INSTANCE_MEMORY <same percentage>).
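The dynamic refresh described above might look like the following (a sketch; MYINST and the 75 PERCENT value are illustrative placeholders for the instance name and the existing percentage setting, and the exact percentage syntax should be confirmed against the DB2 documentation for your release):

```shell
# Re-apply the existing percentage setting so the effective limit is
# recalculated against the new RAM size, without a recycle
db2 attach to MYINST
db2 update dbm cfg using INSTANCE_MEMORY "75 PERCENT"
db2 detach
```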
16 June 2018