P
package_id - Package identifier monitor element
A unique identifier for the package.
Event Type | Logical Data Grouping | Monitor Switch |
---|---|---|
Unit of work | Reported in the package list. | Always collected |
Usage
The value of this element matches a value from column PKGID of view SYSCAT.PACKAGES.
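For example, a captured package identifier can be resolved back to its package with a catalog query (a minimal sketch, relying on the PKGID column described above):

```sql
-- List package identifiers to match against package_id values
-- captured in the unit of work event monitor package list.
SELECT pkgid, pkgschema, pkgname
FROM SYSCAT.PACKAGES;
```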
package_elapsed_time - Package elapsed time monitor element
The elapsed time spent executing sections within the package. This value is reported in milliseconds.
Event Type | Logical Data Grouping | Monitor Switch |
---|---|---|
Unit of work | Reported in the package list. | Always collected |
package_list_count - Package list count monitor element
The number of entries present in the package list for a particular unit of work.
Event Type | Logical Data Grouping | Monitor Switch |
---|---|---|
Unit of work | - | Always collected
package_list_exceeded - Package list exceeded monitor element
Indicates whether the number of packages used within the unit of work has exceeded the capacity of the package list. Possible values are YES and NO.
Event Type | Logical Data Grouping | Monitor Switch |
---|---|---|
Unit of work | - | Always collected
package_list_size - Size of package list monitor element
The number of package identifiers included in the package list.
Event Type | Logical Data Grouping | Monitor Switch |
---|---|---|
Unit of work | uow | Always collected
package_name - Package name monitor element
The name of the package that contains the SQL statement.
Table Function | Monitor Element Collection Level |
---|---|
MON_GET_ACTIVITY table function - Return a list of activities | Always collected |
MON_GET_ACTIVITY_DETAILS table function - Get complete activity details (reported in DETAILS XML document) | Always collected |
MON_GET_PKG_CACHE_STMT table function - Get SQL statement activity metrics in the package cache | Always collected |
MON_GET_PKG_CACHE_STMT_DETAILS table function - get detailed metrics for package cache entries | Always collected |
MON_GET_ROUTINE_EXEC_LIST table function - get list of statements executed by routine | Always collected |
WLM_GET_WORKLOAD_OCCURRENCE_ACTIVITIES table function - return a list of activities | Always collected |
Snapshot Level | Logical Data Grouping | Monitor Switch |
---|---|---|
Application | stmt | Statement |
DCS Statement | dcs_stmt | Statement |
Event Type | Logical Data Grouping | Monitor Switch |
---|---|---|
Locking | - | Always collected |
Deadlocks with Details 1 | event_detailed_dlconn | -
Statements | event_stmt | - |
Activities | event_activitystmt | Always collected |
Package cache | - | Always collected |
- 1: This event monitor has been deprecated. Its use is no longer recommended and might be removed in a future release. Use the CREATE EVENT MONITOR FOR LOCKING statement to monitor lock-related events, such as lock timeouts, lock waits, and deadlocks.
Usage
Use this element to help identify the application program and the SQL statement that is executing.
package_schema - Package schema monitor element
The schema name of the package associated with an SQL statement.
Table Function | Monitor Element Collection Level |
---|---|
MON_GET_ACTIVITY table function - Return a list of activities | Always collected |
MON_GET_ACTIVITY_DETAILS table function - Get complete activity details (reported in DETAILS XML document) | Always collected |
MON_GET_PKG_CACHE_STMT table function - Get SQL statement activity metrics in the package cache | Always collected |
MON_GET_PKG_CACHE_STMT_DETAILS table function - get detailed metrics for package cache entries | Always collected |
MON_GET_ROUTINE_EXEC_LIST table function - get list of statements executed by routine | Always collected |
WLM_GET_WORKLOAD_OCCURRENCE_ACTIVITIES table function - return a list of activities | Always collected |
Event Type | Logical Data Grouping | Monitor Switch |
---|---|---|
Locking | - | Always collected |
Package cache | - | Always collected |
package_version_id - Package version monitor element
The version identifier of the package that contains the SQL statement currently executing.
Table Function | Monitor Element Collection Level |
---|---|
MON_GET_ACTIVITY table function - Return a list of activities | Always collected |
MON_GET_ACTIVITY_DETAILS table function - Get complete activity details (reported in DETAILS XML document) | Always collected |
MON_GET_PKG_CACHE_STMT table function - Get SQL statement activity metrics in the package cache | Always collected |
MON_GET_PKG_CACHE_STMT_DETAILS table function - get detailed metrics for package cache entries | Always collected |
MON_GET_ROUTINE_EXEC_LIST table function - get list of statements executed by routine | Always collected |
WLM_GET_WORKLOAD_OCCURRENCE_ACTIVITIES table function - return a list of activities | Always collected |
Snapshot Level | Logical Data Grouping | Monitor Switch |
---|---|---|
Application | stmt | Statement |
Event Type | Logical Data Grouping | Monitor Switch |
---|---|---|
Locking | - | Always collected |
Statements | event_stmt | Always collected |
Activities | event_activitystmt | Always collected |
Package cache | - | Always collected |
Usage
Use this element to help identify the package and the SQL statement that is currently executing.
packet_receive_errors - Packet receive errors monitor element
Number of errors receiving packets since the network adapter started.
Table Function | Monitor Element Collection Level |
---|---|
ENV_GET_NETWORK_RESOURCES table function - Return network adapter information | Always collected |
packets_received - Packets received monitor element
Number of packets received since the network adapter started.
Table Function | Monitor Element Collection Level |
---|---|
ENV_GET_NETWORK_RESOURCES table function - Return network adapter information | Always collected |
packet_send_errors - Packet send errors monitor element
Number of errors sending packets since the network adapter started.
Table Function | Monitor Element Collection Level |
---|---|
ENV_GET_NETWORK_RESOURCES table function - Return network adapter information | Always collected |
packets_sent - Packets sent monitor element
Number of packets sent since the network adapter started.
Table Function | Monitor Element Collection Level |
---|---|
ENV_GET_NETWORK_RESOURCES table function - Return network adapter information | Always collected |
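The four packet counters above can be queried together; a minimal sketch using only the columns documented in this section:

```sql
-- Per-adapter packet and error counters since the adapter started.
SELECT packets_received,
       packet_receive_errors,
       packets_sent,
       packet_send_errors
FROM TABLE(SYSPROC.ENV_GET_NETWORK_RESOURCES()) AS t;
```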
page_allocations - Page allocations monitor element
Number of pages that have been allocated to the index.
Table Function | Monitor Element Collection Level |
---|---|
MON_GET_INDEX table function - Get index metrics | Always collected |
page_reorgs - Page reorganizations monitor element
The number of page reorganizations executed for a table.
Table Function | Monitor Element Collection Level |
---|---|
MON_GET_TABLE table function - Get table metrics | Always collected |
Snapshot Level | Logical Data Grouping | Monitor Switch |
---|---|---|
Table | table | Basic |
Event Type | Logical Data Grouping | Monitor Switch |
---|---|---|
Tables | event_table | Always collected |
Usage
A page reorganization occurs in the following situations:
- When a new row is inserted
- When an existing row is updated, and the update results in an increased record size
Too many page reorganizations can result in less than optimal insert performance. You can use the REORG TABLE utility to reorganize a table and eliminate fragmentation. You can also use the APPEND parameter for the ALTER TABLE statement to indicate that all inserts are appended at the end of a table to avoid page reorganizations.
In situations where an update to a row causes the row length to increase, the page might have enough space to accommodate the new row, but a page reorganization can be required to defragment that space. If the page does not have enough space for the new, larger row, an overflow record is created, causing overflow accesses during reads (see the overflow_accesses monitor element). You can avoid both situations by using fixed-length columns instead of varying-length columns.
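A sketch of the two remedies described above (the table name is hypothetical; the REORG command runs through the ADMIN_CMD procedure):

```sql
-- Reorganize the table to eliminate fragmentation.
CALL SYSPROC.ADMIN_CMD('REORG TABLE myschema.mytable');

-- Append new rows at the end of the table so that inserts
-- do not trigger page reorganizations.
ALTER TABLE myschema.mytable APPEND ON;
```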
page_reclaims_x - Page reclaims exclusive access monitor element
The number of times a page related to the object was reclaimed by another member in the DB2 pureScale® instance before its planned release, where the member that reclaimed the page required exclusive access.
Table Function | Monitor Element Collection Level |
---|---|
MON_GET_PAGE_ACCESS_INFO table function - Get buffer pool page waiting information | Always collected |
Usage
Page reclaims related to internally maintained object space maps are counted separately.
page_reclaims_s - Page reclaims shared access monitor element
The number of times a page related to the object was reclaimed by another member in the DB2 pureScale instance before its planned release, where the member that reclaimed the page required shared access.
Table Function | Monitor Element Collection Level |
---|---|
MON_GET_PAGE_ACCESS_INFO table function - Get buffer pool page waiting information | Always collected |
Usage
Page reclaims related to internally maintained object space maps are counted separately.
page_reclaims_initiated_x - Page reclaims initiated exclusive access monitor element
The number of times a page accessed in exclusive mode caused the page to be reclaimed from another member. Page reclaims related to internally maintained object space maps are counted separately.
Table Function | Monitor Element Collection Level |
---|---|
MON_GET_PAGE_ACCESS_INFO table function - Get buffer pool page waiting information | Always collected |
page_reclaims_initiated_s - Page reclaims initiated shared access monitor element
The number of times a page accessed in shared mode caused the page to be reclaimed from another member. Page reclaims related to internally maintained object space maps are counted separately.
Table Function | Monitor Element Collection Level |
---|---|
MON_GET_PAGE_ACCESS_INFO table function - Get buffer pool page waiting information | Always collected |
pages_from_block_ios - Total number of pages read by block I/O monitor element
The total number of pages read by block I/O into the block area of the buffer pool.
Table Function | Monitor Element Collection Command and Level |
---|---|
MON_GET_BUFFERPOOL table function - Get buffer pool metrics | DATA OBJECT METRICS BASE |
MON_GET_CONTAINER table function - Get table space container metrics | DATA OBJECT METRICS BASE |
MON_GET_DATABASE table function - Get database level information | DATA OBJECT METRICS BASE |
MON_GET_DATABASE_DETAILS table function - Get database information metrics | DATA OBJECT METRICS BASE |
MON_GET_TABLESPACE table function - Get table space metrics | DATA OBJECT METRICS BASE |
Snapshot Level | Logical Data Grouping | Monitor Switch |
---|---|---|
Buffer Pool | bufferpool | Buffer Pool |
Usage
If a block-based buffer pool is enabled, this element reports the total number of pages read by block I/O. Otherwise, this element returns 0.
To calculate the average number of pages sequentially prefetched per block-based I/O, divide the value of the pages_from_block_ios monitor element by the value of the block_ios monitor element. If this value is much less than the BLOCKSIZE option you have defined for the block-based buffer pool in the CREATE BUFFERPOOL or ALTER BUFFERPOOL statement, then block-based I/O is not being used to its full advantage. One possible cause for this is a mismatch between the extent size for the table space being sequentially prefetched and the block size of the block-based buffer pool.
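A sketch of this calculation with MON_GET_BUFFERPOOL (NULL and -2 request all buffer pools on all members):

```sql
-- Average number of pages read per block-based I/O, per buffer pool.
-- Compare the result with the BLOCKSIZE of the block-based buffer pool.
SELECT bp_name,
       pages_from_block_ios,
       block_ios,
       CASE WHEN block_ios > 0
            THEN DEC(pages_from_block_ios, 31, 2) / block_ios
            ELSE 0
       END AS avg_pages_per_block_io
FROM TABLE(MON_GET_BUFFERPOOL(NULL, -2)) AS t;
```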
pages_from_vectored_ios - Total number of pages read by vectored I/O monitor element
The total number of pages read by vectored I/O into the page area of the buffer pool.
Table Function | Monitor Element Collection Command and Level |
---|---|
MON_GET_BUFFERPOOL table function - Get buffer pool metrics | DATA OBJECT METRICS BASE |
MON_GET_CONTAINER table function - Get table space container metrics | DATA OBJECT METRICS BASE |
MON_GET_DATABASE table function - Get database level information | DATA OBJECT METRICS BASE |
MON_GET_DATABASE_DETAILS table function - Get database information metrics | DATA OBJECT METRICS BASE |
MON_GET_TABLESPACE table function - Get table space metrics | DATA OBJECT METRICS BASE |
Snapshot Level | Logical Data Grouping | Monitor Switch |
---|---|---|
Buffer Pool | bufferpool | Buffer Pool |
pages_merged - Pages merged monitor element
Number of index pages that have been merged.
Table Function | Monitor Element Collection Level |
---|---|
MON_GET_INDEX table function - Get index metrics | Always collected |
pages_read - Number of pages read monitor element
The number of pages (data, index, and XML) read in from the physical table space containers for regular and large table spaces.
Table Function | Monitor Element Collection Level |
---|---|
MON_GET_CONTAINER table function - Get table space container metrics | DATA OBJECT METRICS BASE |
pages_written - Number of pages written monitor element
The number of pages (data, index, and XML) physically written to the table space container.
Table Function | Monitor Element Collection Level |
---|---|
MON_GET_CONTAINER table function - Get table space container metrics | DATA OBJECT METRICS BASE |
parent_activity_id - Parent activity ID monitor element
The unique ID of the activity's parent activity within the parent activity's unit of work. If there is no parent activity, the value of this monitor element is NULL.
Table Function | Monitor Element Collection Command and Level |
---|---|
MON_GET_ACTIVITY table function - Return a list of activities | Always collected |
MON_GET_ACTIVITY_DETAILS table function - Get complete activity details (reported in DETAILS XML document) | Always collected |
MON_GET_AGENT table function - List agents, fenced mode processes, and system entities for the database | Always collected |
WLM_GET_SERVICE_CLASS_AGENTS table function - list agents running in a service class | Always collected |
WLM_GET_WORKLOAD_OCCURRENCE_ACTIVITIES table function - Return a list of activities | Always collected |
Event Type | Logical Data Grouping | Monitor Switch |
---|---|---|
Activities | event_activity | Always collected |
Usage
Use this element along with the parent_uow_id element and appl_id element to uniquely identify the parent activity of the activity described in this activity record.
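For example, the parent identifiers can be listed next to each activity (a sketch; the application handle 123 is hypothetical):

```sql
-- List activities for one application along with the identifiers
-- of their parent activities (NULL values mean no parent activity).
SELECT uow_id,
       activity_id,
       parent_uow_id,
       parent_activity_id
FROM TABLE(WLM_GET_WORKLOAD_OCCURRENCE_ACTIVITIES(123, -2)) AS t;
```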
parent_uow_id - Parent unit of work ID monitor element
The ID of the unit of work in which the activity's parent activity originates; this identifier is unique within an application handle. If there is no parent activity, the value is NULL.
Table Function | Monitor Element Collection Command and Level |
---|---|
MON_GET_ACTIVITY table function - Return a list of activities | Always collected |
MON_GET_ACTIVITY_DETAILS table function - Get complete activity details (reported in DETAILS XML document) | Always collected |
MON_GET_AGENT table function - List agents, fenced mode processes, and system entities for the database | Always collected |
WLM_GET_SERVICE_CLASS_AGENTS table function - list agents running in a service class | Always collected |
WLM_GET_WORKLOAD_OCCURRENCE_ACTIVITIES table function - Return a list of activities | Always collected |
Event Type | Logical Data Grouping | Monitor Switch |
---|---|---|
Activities | event_activity | Always collected |
Usage
Use this element along with the parent_activity_id element and appl_id element to uniquely identify the parent activity of the activity described in this activity record.
partial_record - Partial Record monitor element
Indicates that an event monitor record is only a partial record.
Event Type | Logical Data Grouping | Monitor Switch |
---|---|---|
Database | event_db | - |
Tables | event_table | - |
Tablespaces | event_tablespace | - |
Bufferpools | event_bufferpool | - |
Connection | event_conn | - |
Statements | event_stmt | - |
Statements | event_subsection | - |
Transactions | event_xact | - |
Activities | event_activity | - |
Usage
Most event monitors do not output their results until database deactivation. You can use the FLUSH EVENT MONITOR <monitorName> statement to force monitor values to the event monitor output writer, without needing to stop and restart the event monitor. This element indicates whether an event monitor record was the result of a flush operation and is therefore a partial record.
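A minimal sketch of such a flush (the event monitor name is hypothetical):

```sql
-- Force current event monitor values to the output writer without
-- stopping and restarting the event monitor; the resulting records
-- are marked as partial records.
FLUSH EVENT MONITOR myactevmon;
```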
Flushing an event monitor does not cause its values to be reset. This means that a complete event monitor record is still generated when the event monitor is triggered. For activity event monitor records, this element has the following values:
- 0
- The activity record was generated normally at the end of activity.
- 1
- The activity record was generated as a result of calling the WLM_CAPTURE_ACTIVITY_IN_PROGRESS stored procedure.
- 2
- Information is missing for this activity because not enough storage was available to create the records. Information may be missing from the event_activity, event_activitystmt, or event_activityvals records.
participant_no - Participant within Deadlock monitor element
A sequence number uniquely identifying this participant within this deadlock.
- Element identifier
- participant_no
- Element type
- information
Event Type | Logical Data Grouping | Monitor Switch |
---|---|---|
Deadlocks | event_dlconn | Always collected |
Deadlocks with Details | event_detailed_dlconn | Always collected |
- Usage
- Use this in your monitoring application to correlate deadlock connection event records with deadlock event records.
participant_no_holding_lk - Participant Holding a Lock on the Object Required by Application monitor element
The participant number of the application that is holding a lock on the object that this application is waiting to obtain.
- Element identifier
- participant_no_holding_lk
- Element type
- information
Event Type | Logical Data Grouping | Monitor Switch |
---|---|---|
Deadlocks | event_dlconn | Always collected |
Deadlocks with Details | event_detailed_dlconn | Always collected |
- Usage
- This element can help you determine which applications are in contention for resources.
participant_type - Participant type monitor element
The type of lock participant, which can be either Requestor or Owner.
Event Type | Logical Data Grouping | Monitor Switch |
---|---|---|
Locking | lock_participants | Always collected
partition_key - Partitioning key monitor element
The partitioning key for the event monitor tables. This value is chosen so that each event monitor process inserts data into the database partition on which the process is running; that is, insert operations are performed locally on the database partition where the event monitor process is running.
Event Type | Logical Data Grouping | Monitor Element Collection Level |
---|---|---|
Threshold Violation | event_thresholdviolations | |
Threshold Violation | control | |
Statistics | event_qstats | |
Statistics | event_scstats | |
Statistics | event_histogrambin | |
Statistics | event_wcstats | |
Statistics | event_wlstats | |
Statistics | control | |
Locking | lock | |
Locking | lock_participants | |
Locking | lock_participant_activities | |
Locking | lock_activity_values | |
Locking | control | |
Package Cache | pkgcache_metrics | |
Package Cache | pkgcache_stmt_args | |
Package Cache | control | |
Unit of Work | uow | |
Unit of Work | uow_metrics | |
Unit of Work | uow_package_list | |
Unit of Work | uow_executable_list | |
Unit of Work | control | |
Activities | event_activity | |
Activities | event_activitystmt | |
Activities | event_activityvals | |
Activities | event_activitymetrics | |
Activities | control | |
Change History | changesummary, dbdbmcfg, ddlstmtexec, evmonstart, regvar, txncompletion, utillocation, utilphase, utilstart, utilstop | Always collected
partition_number - Partition Number monitor element
This element is only used in the target SQL tables by write-to-table event monitors in a partitioned database environment or DB2 pureScale environment. This value indicates the number of the member where event monitor data is inserted.
Table Function | Monitor Element Collection Level |
---|---|
DB_MEMBERS table function | Always collected |
Event Type | Logical Data Grouping | Monitor Switch |
---|---|---|
Activities | event_activitymetrics | Always collected |
passthru_time - Pass-Through Time monitor element
The time spent at this data source processing statements in pass-through mode.
Snapshot Level | Logical Data Grouping | Monitor Switch |
---|---|---|
Database | dbase_remote | Timestamp |
Application | appl_remote | Timestamp |
Usage
Use this element to determine how much actual time is spent at this data source processing statements in pass-through mode.
passthrus - Pass-Through monitor element
The total number of SQL statements that the federated server has passed through directly to this data source on behalf of any application since the start of the federated server instance or the last reset of the database monitor counters, whichever is more recent.
Snapshot Level | Logical Data Grouping | Monitor Switch |
---|---|---|
Database | dbase_remote | Basic |
Application | appl_remote | Basic |
- Usage
- Use this element to determine what percentage of your SQL statements can be handled natively by the federated server, and what percentage requires pass-through mode. If this value is high, you should determine the cause and investigate ways to better use the native support.
past_activities_wrapped - Past activities list wrapped monitor element
Indicates whether the list of past activities has exceeded the capacity of the past activities list and has wrapped.
Event Type | Logical Data Grouping | Monitor Switch |
---|---|---|
Locking | lock_participants | Always collected
peer_wait_limit - Peer wait limit monitor element
Reflects peer wait limit configuration (set via the registry variable DB2_HADR_PEER_WAIT_LIMIT). Units are milliseconds.
Table function | Monitor element collection level |
---|---|
MON_GET_HADR table function - Returns high availability disaster recovery (HADR) monitoring information | Always collected |
peer_window - Peer window monitor element
The value of the HADR_PEER_WINDOW database configuration parameter. Units are milliseconds.
Table function | Monitor element collection level |
---|---|
MON_GET_HADR table function - Returns high availability disaster recovery (HADR) monitoring information | Always collected |
peer_window_end - Peer window end monitor element
The end time of the current peer window. The value is NULL if the peer window is not enabled.
Table function | Monitor element collection level |
---|---|
MON_GET_HADR table function - Returns high availability disaster recovery (HADR) monitoring information | Always collected |
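The three peer-window elements above can be read together from MON_GET_HADR (a sketch; -2 requests all members):

```sql
-- Current HADR peer-window configuration and the end of the
-- current peer window (NULL when the peer window is not enabled).
SELECT peer_wait_limit,
       peer_window,
       peer_window_end
FROM TABLE(MON_GET_HADR(-2)) AS t;
```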
phase_start_event_id - Phase start event ID monitor element
The EVENT_ID of the corresponding UTILPHASESTART event.
Event Type | Logical Data Grouping | Monitor Switch |
---|---|---|
Change History | UTILPHASE | Always collected |
Usage
If the event is the stopping of a utility phase or processing stage (UTILPHASESTOP), this is the EVENT_ID of the corresponding start of the utility phase (UTILPHASESTART); otherwise, the value is -1.
Use this element with the PHASE_START_EVENT_TIMESTAMP and member elements to associate the phase stop record with the corresponding start record.
phase_start_event_timestamp - Phase start event timestamp monitor element
The time of the corresponding UTILPHASESTART event.
Event Type | Logical Data Grouping | Monitor Switch |
---|---|---|
Change History | UTILPHASE | Always collected |
Usage
For the change history event monitor: if the event is the stopping of a utility phase or processing stage (UTILPHASESTOP), this is the time of the corresponding start of the utility phase (UTILPHASESTART); otherwise, the value is empty.
piped_sorts_accepted - Piped Sorts Accepted monitor element
The number of piped sorts that have been accepted.
Snapshot Level | Logical Data Grouping | Monitor Switch |
---|---|---|
Database Manager | db2 | Basic |
Usage
Each active sort on the system allocates memory, which may result in sorting taking up too much of the available system memory. When the number of accepted piped sorts is low compared to the number requested, you can improve sort performance by adjusting one or both of the following configuration parameters:
- sortheap
- sheapthres
piped_sorts_requested - Piped Sorts Requested monitor element
The number of piped sorts that have been requested.
Snapshot Level | Logical Data Grouping | Monitor Switch |
---|---|---|
Database Manager | db2 | Basic |
Usage
Each active sort on the system allocates memory, which may result in sorting taking up too much of the available system memory.
The sort list heap (sortheap) and sort heap threshold (sheapthres) configuration parameters help to control the amount of memory used for sort operations. These parameters are also used to determine whether a sort will be piped.
Since piped sorts may reduce disk I/O, allowing more piped sorts can improve the performance of sort operations and possibly the performance of the overall system. A piped sort is not accepted if the sort heap threshold would be exceeded when the sort heap is allocated for the sort. See piped_sorts_accepted for more information if you are experiencing piped sort rejections.
The SQL EXPLAIN output shows whether the optimizer requests a piped sort.
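As a quick check, the acceptance rate can be computed from the SNAPDBM administrative view (a sketch, assuming SYSIBMADM.SNAPDBM is available):

```sql
-- Fraction of requested piped sorts that were accepted; a low value
-- suggests tuning the sortheap or sheapthres configuration parameters.
SELECT piped_sorts_requested,
       piped_sorts_accepted,
       CASE WHEN piped_sorts_requested > 0
            THEN DEC(piped_sorts_accepted, 31, 4) / piped_sorts_requested
            ELSE NULL
       END AS piped_sort_acceptance
FROM SYSIBMADM.SNAPDBM;
```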
pkg_cache_inserts - Package cache inserts monitor element
The total number of times that a requested section was not available for use and had to be loaded into the package cache. This count includes any implicit prepares performed by the system.
Snapshot Level | Logical Data Grouping | Monitor Switch |
---|---|---|
Database | dbase | Basic |
Application | appl | Basic |
Event Type | Logical Data Grouping | Monitor Switch |
---|---|---|
Database | event_db | Always collected |
Connection | event_conn | Always collected |
Statistics | event_scstats (reported in the metrics document) | REQUEST METRICS BASE |
Statistics | event_wlstats (reported in the metrics document) | REQUEST METRICS BASE |
Unit of work | Reported in the system_metrics document. | REQUEST METRICS BASE
Usage
In conjunction with the pkg_cache_lookups monitor element, you can calculate the package cache hit ratio using the following formula:
1 - (Package Cache Inserts / Package Cache Lookups)
pkg_cache_lookups - Package cache lookups monitor element
The number of times that an application looked for a section or package in the package cache.
Snapshot Level | Logical Data Grouping | Monitor Switch |
---|---|---|
Database | dbase | Basic |
Application | appl | Basic |
Event Type | Logical Data Grouping | Monitor Switch |
---|---|---|
Database | event_db | Always collected |
Connection | event_conn | Always collected |
Statistics | event_scstats (reported in the metrics document) | REQUEST METRICS BASE |
Statistics | event_wlstats (reported in the metrics document) | REQUEST METRICS BASE |
Unit of work | Reported in the system_metrics document. | REQUEST METRICS BASE
Usage
To calculate the package cache hit ratio, use the following formula:
1 - (Package Cache Inserts / Package Cache Lookups)
The package cache hit ratio tells you whether or not the package cache is being used effectively. If the hit ratio is high (more than 0.8), the cache is performing well. A smaller hit ratio may indicate that the package cache should be increased.
You will need to experiment with the size of the package cache to find the optimal number for the pckcachesz configuration parameter. For example, you might be able to use a smaller package cache size if there is no increase in the pkg_cache_inserts element when you decrease the size of the cache. Decreasing the package cache size frees up system resources for other work. It is also possible that you could improve overall system performance by increasing the size of the package cache if by doing so, you decrease the number of pkg_cache_inserts. This experimentation is best done under full workload conditions.
You can use this element with ddl_sql_stmts to determine whether or not the execution of DDL statements is impacting the performance of the package cache. Sections for dynamic SQL statements can become invalid when DDL statements are executed. Invalid sections are implicitly prepared by the system when next used. The execution of a DDL statement could invalidate a number of sections and the resulting additional processing time required when preparing those sections could significantly impact performance. In this case, the package cache hit ratio reflects the implicit recompilation of invalid sections. It does not reflect the insertion of new sections into the cache, so increasing the size of the package cache will not improve overall performance. You might find it less confusing to tune the cache for an application on its own before working in the full environment.
It is necessary to determine the role that DDL statements are playing in the value of the package cache hit ratio before deciding on what action to take. If DDL statements rarely occur, then cache performance may be improved by increasing its size. If DDL statements are frequent, then improvements may require that you limit the use of DDL statements (possibly to specific time periods).
The static_sql_stmts and dynamic_sql_stmts counts can be used to help provide information about the quantity and type of sections being cached.
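A sketch of the hit ratio calculation against the SNAPDB administrative view (assuming SYSIBMADM.SNAPDB is available):

```sql
-- Package cache hit ratio; values above 0.8 indicate the cache
-- is performing well.
SELECT db_name,
       pkg_cache_lookups,
       pkg_cache_inserts,
       1 - DEC(pkg_cache_inserts, 31, 4) / NULLIF(pkg_cache_lookups, 0)
           AS pkg_cache_hit_ratio
FROM SYSIBMADM.SNAPDB;
```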
pkg_cache_num_overflows - Package Cache Overflows monitor element
The number of times that the package cache overflowed the bounds of its allocated memory.
Table Function | Monitor Element Collection Level |
---|---|
MON_GET_DATABASE table function - Get database level information | Always collected |
MON_GET_DATABASE_DETAILS table function - Get database information metrics | Always collected |
Snapshot Level | Logical Data Grouping | Monitor Switch |
---|---|---|
Database | dbase | Basic |
Event Type | Logical Data Grouping | Monitor Switch |
---|---|---|
Database | event_db | Always collected |
Usage
Use this element with the pkg_cache_size_top monitor element to determine whether the size of the package cache needs to be increased to avoid overflowing.
pkg_cache_size_top - Package cache high watermark monitor element
The largest size reached by the package cache.
Snapshot Level | Logical Data Grouping | Monitor Switch |
---|---|---|
Database | dbase | Basic |
Event Type | Logical Data Grouping | Monitor Switch |
---|---|---|
Database | event_db | Always collected |
Usage
If the package cache overflowed, then this element contains the largest size reached by the package cache during the overflow.
Check the pkg_cache_num_overflows monitor element to determine if such a condition occurred.
To determine the number of 4-KB pages needed by the package cache, use the following formula:
maximum package cache size / 4096
Rounding the result up to a whole number indicates the minimum number of 4K pages required by the package cache to avoid overflow.
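A sketch of that arithmetic (assuming the same SYSIBMADM.SNAPDB view as in the pkg_cache_lookups example):

```sql
-- Minimum number of 4K pages the package cache needed at its peak.
SELECT db_name,
       pkg_cache_size_top,
       CEILING(DEC(pkg_cache_size_top, 31, 2) / 4096)
           AS min_pkg_cache_4k_pages
FROM SYSIBMADM.SNAPDB;
```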
planid - Query plan ID monitor element
The hash key value that identifies the query plan for a section.
The planid monitor element tracks important performance sensitive aspects of the access plan. Such aspects include the list and layout of access plan operators, identifiers of the objects that are being accessed, the number of each type of predicate for each operator, and performance sensitive operator arguments.
Table function | Monitor element collection level |
---|---|
MON_GET_ACTIVITY table function - Return a list of activities | Always collected |
MON_GET_ACTIVITY_DETAILS table function - Return information about an activity as an XML document | Always collected |
MON_GET_PKG_CACHE_STMT table function - Get package cache statement metrics | Always collected |
MON_GET_PKG_CACHE_STMT_DETAILS table function - Get package cache statement metrics as an XML document | Always collected |
WLM_GET_WORKLOAD_OCCURRENCE_ACTIVITIES table function - Return a list of activities | Always collected |
Event type | Logical data grouping | Monitor switch |
---|---|---|
Activities | event_activitystmt | Always collected |
Package cache | pkgcache | Always collected |
Usage
Use this monitor element with the stmtid and semantic_env_id monitor elements to detect changes in access plan that might affect performance.
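For example, plan changes for cached statements can be watched from the package cache (a sketch; MON_GET_PKG_CACHE_STMT reports planid per the table above, and the stmtid and semantic_env_id columns are assumed to be reported alongside it):

```sql
-- A changed planid for the same stmtid and semantic_env_id signals
-- an access plan change for that cached statement.
SELECT stmtid,
       planid,
       semantic_env_id,
       SUBSTR(stmt_text, 1, 80) AS stmt_text
FROM TABLE(MON_GET_PKG_CACHE_STMT(NULL, NULL, NULL, -2)) AS t;
```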
pool_async_col_gbp_indep_pages_found_in_lbp - Asynchronous group buffer pool column-organized independent data pages found in local buffer pool monitor element
The number of group buffer pool (GBP) independent column-organized pages found in a local buffer pool by a prefetcher.
Table function | Monitor element collection level |
---|---|
MON_GET_BUFFERPOOL table function - Get buffer pool metrics | DATA OBJECT METRICS BASE |
MON_GET_DATABASE table function - Get database level information | DATA OBJECT METRICS BASE |
MON_GET_DATABASE_DETAILS table function - Get database information metrics | DATA OBJECT METRICS BASE |
MON_GET_TABLESPACE table function - Get table space metrics | DATA OBJECT METRICS BASE |
pool_async_col_gbp_invalid_pages - Asynchronous group buffer pool invalid column-organized data pages monitor element
The number of times a column-organized page was attempted to be read from the group buffer pool by a prefetcher because the page was invalid in the local buffer pool.
Table function | Monitor element collection level |
---|---|
MON_GET_BUFFERPOOL table function - Get buffer pool metrics | DATA OBJECT METRICS BASE |
MON_GET_DATABASE table function - Get database level information | DATA OBJECT METRICS BASE |
MON_GET_DATABASE_DETAILS table function - Get database information metrics | DATA OBJECT METRICS BASE |
MON_GET_TABLESPACE table function - Get table space metrics | DATA OBJECT METRICS BASE |
pool_async_col_gbp_l_reads - Asynchronous group buffer pool column-organized logical reads monitor element
The number of times a Group Buffer Pool (GBP) dependent column-organized page was attempted to be read from the group buffer pool by a prefetcher because the page was either invalid or not present in the local buffer pool.
Table function | Monitor element collection level |
---|---|
MON_GET_BUFFERPOOL table function - Get buffer pool metrics | DATA OBJECT METRICS BASE |
MON_GET_DATABASE table function - Get database level information | DATA OBJECT METRICS BASE |
MON_GET_DATABASE_DETAILS table function - Get database information metrics | DATA OBJECT METRICS BASE |
MON_GET_TABLESPACE table function - Get table space metrics | DATA OBJECT METRICS BASE |
pool_async_col_gbp_p_reads - Asynchronous group buffer pool column-organized physical reads monitor element
The number of times a Group Buffer Pool (GBP) dependent column-organized page was read into the local buffer pool by a prefetcher from disk because it was not found in the GBP.
Table function | Monitor element collection level |
---|---|
MON_GET_BUFFERPOOL table function - Get buffer pool metrics | DATA OBJECT METRICS BASE |
MON_GET_DATABASE table function - Get database level information | DATA OBJECT METRICS BASE |
MON_GET_DATABASE_DETAILS table function - Get database information metrics | DATA OBJECT METRICS BASE |
MON_GET_TABLESPACE table function - Get table space metrics | DATA OBJECT METRICS BASE |
pool_async_col_lbp_pages_found - Asynchronous local buffer pool column-organized pages found monitor element
The number of times a column-organized data page was present in the local buffer pool when a prefetcher attempted to access it.
Table function | Monitor element collection level |
---|---|
MON_GET_BUFFERPOOL table function - Get buffer pool metrics | DATA OBJECT METRICS BASE |
MON_GET_DATABASE table function - Get database level information | DATA OBJECT METRICS BASE |
MON_GET_DATABASE_DETAILS table function - Get database information metrics | DATA OBJECT METRICS BASE |
MON_GET_TABLESPACE table function - Get table space metrics | DATA OBJECT METRICS BASE |
pool_async_col_read_reqs - Buffer pool asynchronous column-organized read requests monitor element
The number of asynchronous column-organized read requests by the prefetcher to the operating system.
Table function | Monitor element collection level |
---|---|
MON_GET_BUFFERPOOL table function - Get buffer pool metrics | DATA OBJECT METRICS BASE |
MON_GET_DATABASE table function - Get database level information | DATA OBJECT METRICS BASE |
MON_GET_DATABASE_DETAILS table function - Get database information metrics | DATA OBJECT METRICS BASE |
MON_GET_TABLESPACE table function - Get table space metrics | DATA OBJECT METRICS BASE |
Usage
To calculate the average number of column-organized pages read per asynchronous request, use the following formula:
pool_async_col_reads / pool_async_col_read_reqs
This average can help you determine the average read I/O size used by the prefetcher. A prefetcher might read fewer pages than a full extent in the following cases:
- when some pages of the extent are already in the buffer pool
- when exceeding operating system capabilities
- when the EXTENTSIZE option value is very large, such that doing a large I/O would be detrimental to overall performance
pool_async_col_reads - Buffer pool asynchronous column-organized reads monitor element
Indicates the number of column-organized pages read in from the table space containers (physical) by a prefetcher for all types of table spaces.
Table function | Monitor element collection level |
---|---|
MON_GET_BUFFERPOOL table function - Get buffer pool metrics | DATA OBJECT METRICS BASE |
MON_GET_DATABASE table function - Get database level information | DATA OBJECT METRICS BASE |
MON_GET_DATABASE_DETAILS table function - Get database information metrics | DATA OBJECT METRICS BASE |
MON_GET_TABLESPACE table function - Get table space metrics | DATA OBJECT METRICS BASE |
pool_async_col_writes - Buffer pool asynchronous column-organized writes monitor element
The number of times a buffer pool column-organized data page was physically written to disk by either an asynchronous page cleaner or a prefetcher. A prefetcher might have written dirty pages to disk to make space for the pages being prefetched.
Table function | Monitor element collection level |
---|---|
MON_GET_BUFFERPOOL table function - Get buffer pool metrics | DATA OBJECT METRICS BASE |
MON_GET_DATABASE table function - Get database level information | DATA OBJECT METRICS BASE |
MON_GET_DATABASE_DETAILS table function - Get database information metrics | DATA OBJECT METRICS BASE |
MON_GET_TABLESPACE table function - Get table space metrics | DATA OBJECT METRICS BASE |
Usage
You can use this element with the pool_col_writes monitor element to calculate the number of column-organized page writes performed synchronously, using the following formula:
pool_col_writes - pool_async_col_writes
By comparing the ratio of asynchronous to synchronous writes, you can gain insight into how well the buffer pool page cleaners are performing. This ratio can be helpful when you are tuning the num_iocleaners configuration parameter.
pool_async_data_gbp_indep_pages_found_in_lbp - Group buffer pool independent data pages found by asynchronous EDUs in a local buffer pool monitor element
The number of group buffer pool (GBP) independent data pages found in a local buffer pool by asynchronous EDUs.
Table Function | Monitor Element Collection Level |
---|---|
MON_GET_BUFFERPOOL table function - Get buffer pool metrics | DATA OBJECT METRICS BASE |
MON_GET_DATABASE table function - Get database level information | DATA OBJECT METRICS BASE |
MON_GET_DATABASE_DETAILS table function - Get database information metrics | DATA OBJECT METRICS BASE |
MON_GET_TABLESPACE table function - Get table space metrics | DATA OBJECT METRICS BASE |
pool_async_data_gbp_invalid_pages - Asynchronous group buffer pool invalid data pages monitor element
The number of times a data page was attempted to be read from the group buffer pool by a prefetcher because the page was invalid in the local buffer pool. Outside of a DB2 pureScale environment, this value is null.
Table Function | Monitor Element Collection Level |
---|---|
MON_GET_BUFFERPOOL table function - Get buffer pool metrics | DATA OBJECT METRICS BASE |
MON_GET_DATABASE table function - Get database level information | DATA OBJECT METRICS BASE |
MON_GET_DATABASE_DETAILS table function - Get database information metrics | DATA OBJECT METRICS BASE |
MON_GET_TABLESPACE table function - Get table space metrics | DATA OBJECT METRICS BASE |
Usage
To calculate the group buffer pool (GBP) hit rate, use the following formula:
GBP = ( pool_async_data_gbp_l_reads - pool_async_data_gbp_p_reads ) / pool_async_data_gbp_l_reads
Buffer pool hit rates are important factors in the overall performance of the IBM® DB2 pureScale Feature. Using this formula can help you determine whether the group buffer pool may be a limiting factor on your database's throughput.
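A sketch of this hit-rate calculation (column names follow the monitor element names in MON_GET_BUFFERPOOL; the GBP columns are NULL outside a DB2 pureScale environment):

```sql
-- Asynchronous GBP data hit rate per buffer pool.
SELECT bp_name,
       pool_async_data_gbp_l_reads,
       pool_async_data_gbp_p_reads,
       DEC(pool_async_data_gbp_l_reads - pool_async_data_gbp_p_reads, 31, 4)
           / NULLIF(pool_async_data_gbp_l_reads, 0) AS gbp_hit_rate
FROM TABLE(MON_GET_BUFFERPOOL(NULL, -2)) AS t;
```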
pool_async_data_gbp_l_reads - Asynchronous group buffer pool data logical reads monitor element
The number of times a Group Buffer Pool (GBP) dependent data page was attempted to be read from the group buffer pool by a prefetcher because the page was either invalid or not present in the local buffer pool. Outside of a DB2 pureScale environment, this value is null.
Table Function | Monitor Element Collection Level |
---|---|
MON_GET_BUFFERPOOL table function - Get buffer pool metrics | DATA OBJECT METRICS BASE |
MON_GET_DATABASE table function - Get database level information | Always collected |
MON_GET_DATABASE_DETAILS table function - Get database information metrics | Always collected |
MON_GET_TABLESPACE table function - Get table space metrics | DATA OBJECT METRICS BASE |
Usage
GBP = ( pool_async_data_gbp_l_reads - pool_async_data_gbp_p_reads ) / pool_async_data_gbp_l_reads
Buffer pool hit rates are important factors in the overall performance of the IBM DB2 pureScale Feature. Using this formula can help you determine whether the group buffer pool may be a limiting factor on your database's throughput.
pool_async_data_gbp_p_reads - Asynchronous group buffer pool data physical reads monitor element
The number of times a Group Buffer Pool (GBP) dependent data page was read into the local buffer pool by a prefetcher from disk because it was not found in the GBP. Outside of a DB2 pureScale environment, this value is null.
Table Function | Monitor Element Collection Level |
---|---|
MON_GET_BUFFERPOOL table function - Get buffer pool metrics | DATA OBJECT METRICS BASE |
MON_GET_DATABASE table function - Get database level information | DATA OBJECT METRICS BASE |
MON_GET_DATABASE_DETAILS table function - Get database information metrics | DATA OBJECT METRICS BASE |
MON_GET_TABLESPACE table function - Get table space metrics | DATA OBJECT METRICS BASE |
Usage
GBP = ( pool_async_data_gbp_l_reads - pool_async_data_gbp_p_reads ) / pool_async_data_gbp_l_reads
Buffer pool hit rates are important factors in the overall performance of the IBM DB2 pureScale Feature. Using this formula can help you determine whether the group buffer pool may be a limiting factor on your database's throughput.
pool_async_data_lbp_pages_found - Asynchronous local buffer pool data pages found monitor element
The number of times a data page was present in the local buffer pool when a prefetcher attempted to access it.
Table Function | Monitor Element Collection Level |
---|---|
MON_GET_BUFFERPOOL table function - Get buffer pool metrics | DATA OBJECT METRICS BASE |
MON_GET_DATABASE table function - Get database level information | DATA OBJECT METRICS BASE |
MON_GET_DATABASE_DETAILS table function - Get database information metrics | DATA OBJECT METRICS BASE |
MON_GET_TABLESPACE table function - Get table space metrics | DATA OBJECT METRICS BASE |
Usage
GBP = ( pool_async_data_gbp_l_reads - pool_async_data_gbp_p_reads ) / pool_async_data_gbp_l_reads
Buffer pool hit rates are important factors in the overall performance of the IBM DB2 pureScale Feature. Using this formula can help you determine whether the group buffer pool may be a limiting factor on your database's throughput.
pool_async_data_read_reqs - Buffer pool asynchronous read requests monitor element
The number of asynchronous read requests by the prefetcher to the operating system. These requests are typically large block I/Os of multiple pages.
Table Function | Monitor Element Collection Command and Level |
---|---|
MON_GET_BUFFERPOOL table function - Get buffer pool metrics | DATA OBJECT METRICS BASE |
MON_GET_DATABASE table function - Get database level information | DATA OBJECT METRICS BASE |
MON_GET_DATABASE_DETAILS table function - Get database information metrics | DATA OBJECT METRICS BASE |
MON_GET_TABLESPACE table function - Get table space metrics | DATA OBJECT METRICS BASE |
Snapshot Level | Logical Data Grouping | Monitor Switch |
---|---|---|
Database | dbase | Buffer Pool |
Table Space | tablespace | Buffer Pool |
Buffer Pool | bufferpool | Buffer Pool |
Event Type | Logical Data Grouping | Monitor Switch |
---|---|---|
Database | event_db | Always collected |
Tablespaces | event_tablespace | Always collected |
Usage
To calculate the average number of data pages read per asynchronous request, use the following formula:
pool_async_data_reads / pool_async_data_read_reqs
This average can help you determine the average read I/O size used by the prefetcher. This data can also be helpful in understanding the large block I/O requirements of the measured workload. A prefetcher might read fewer pages than a full extent in the following cases:
- when some pages of the extent are already in the buffer pool
- when exceeding operating system capabilities
- when the EXTENTSIZE option value is very large, such that doing a large I/O would be detrimental to overall performance
pool_async_data_reads - Buffer pool asynchronous data reads monitor element
Indicates the number of data pages read in from the table space containers (physical) by asynchronous engine dispatchable units (EDUs) for all types of table spaces.
Table Function | Monitor Element Collection Level |
---|---|
MON_GET_BUFFERPOOL table function - Get buffer pool metrics | DATA OBJECT METRICS BASE |
MON_GET_DATABASE table function - Get database level information | DATA OBJECT METRICS BASE |
MON_GET_DATABASE_DETAILS table function - Get database information metrics | DATA OBJECT METRICS BASE |
MON_GET_TABLESPACE table function - Get table space metrics | DATA OBJECT METRICS BASE |
Snapshot Level | Logical Data Grouping | Monitor Switch |
---|---|---|
Database | dbase | Buffer Pool |
Table Space | tablespace | Buffer Pool |
Buffer Pool | bufferpool | Buffer Pool |
Event Type | Logical Data Grouping | Monitor Switch |
---|---|---|
Database | event_db | Always collected |
Tablespaces | event_tablespace | Always collected |
Usage
You can calculate the number of synchronous data reads using the following formula:
# of sync data reads = pool_data_p_reads + pool_temp_data_p_reads - pool_async_data_reads
To calculate the fraction of physical reads performed asynchronously by prefetchers, use the following formula:
(pool_async_data_reads + pool_async_index_reads) / (pool_data_p_reads + pool_temp_data_p_reads + pool_index_p_reads + pool_temp_index_p_reads)
Asynchronous reads are performed by database manager prefetchers.
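A sketch of the prefetch fraction per buffer pool, using the same columns as the formula above:

```sql
-- Fraction of physical data and index reads performed by prefetchers.
SELECT bp_name,
       DEC(pool_async_data_reads + pool_async_index_reads, 31, 4)
           / NULLIF(pool_data_p_reads + pool_temp_data_p_reads
                    + pool_index_p_reads + pool_temp_index_p_reads, 0)
           AS prefetch_read_fraction
FROM TABLE(MON_GET_BUFFERPOOL(NULL, -2)) AS t;
```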
pool_async_data_writes - Buffer pool asynchronous data writes monitor element
The number of times a buffer pool data page was physically written to disk by either an asynchronous page cleaner, or a prefetcher. A prefetcher may have written dirty pages to disk to make space for the pages being prefetched.
Table Function | Monitor Element Collection Level |
---|---|
MON_GET_BUFFERPOOL table function - Get buffer pool metrics | DATA OBJECT METRICS BASE |
MON_GET_DATABASE table function - Get database level information | DATA OBJECT METRICS BASE |
MON_GET_DATABASE_DETAILS table function - Get database information metrics | DATA OBJECT METRICS BASE |
MON_GET_TABLESPACE table function - Get table space metrics | DATA OBJECT METRICS BASE |
Snapshot Level | Logical Data Grouping | Monitor Switch |
---|---|---|
Database | dbase | Buffer Pool |
Table Space | tablespace | Buffer Pool |
Buffer Pool | bufferpool | Buffer Pool |
Event Type | Logical Data Grouping | Monitor Switch |
---|---|---|
Database | event_db | Always collected |
Tablespaces | event_tablespace | Always collected |
Usage
You can use this element with the pool_data_writes monitor element to calculate the number of physical write requests that were performed synchronously (that is, physical data page writes that were performed by database manager agents). Use the following formula:
pool_data_writes - pool_async_data_writes
By comparing the ratio of asynchronous to synchronous writes, you can gain insight into how well the buffer pool page cleaners are performing. This ratio can be helpful when you are tuning the num_iocleaners configuration parameter.
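A sketch of the synchronous/asynchronous write split described by the formula above:

```sql
-- Synchronous data writes are total writes minus asynchronous writes;
-- a high synchronous share can point at under-configured page cleaners.
SELECT bp_name,
       pool_data_writes,
       pool_async_data_writes,
       pool_data_writes - pool_async_data_writes AS sync_data_writes
FROM TABLE(MON_GET_BUFFERPOOL(NULL, -2)) AS t;
```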
pool_async_index_gbp_indep_pages_found_in_lbp - Group buffer pool independent index pages found by asynchronous EDUs in a local buffer pool monitor element
The number of group buffer pool (GBP) independent index pages found in a local buffer pool by asynchronous EDUs.
Table Function | Monitor Element Collection Level |
---|---|
MON_GET_BUFFERPOOL table function - Get buffer pool metrics | DATA OBJECT METRICS BASE |
MON_GET_DATABASE table function - Get database level information | DATA OBJECT METRICS BASE |
MON_GET_DATABASE_DETAILS table function - Get database information metrics | DATA OBJECT METRICS BASE |
MON_GET_TABLESPACE table function - Get table space metrics | DATA OBJECT METRICS BASE |
pool_async_index_gbp_invalid_pages - Asynchronous group buffer pool invalid index pages monitor element
The number of times an index page was attempted to be read from the group buffer pool by a prefetcher because the page was invalid in the local buffer pool. Outside of a DB2 pureScale environment, this value is null.
Table Function | Monitor Element Collection Level |
---|---|
MON_GET_BUFFERPOOL table function - Get buffer pool metrics | DATA OBJECT METRICS BASE |
MON_GET_DATABASE table function - Get database level information | DATA OBJECT METRICS BASE |
MON_GET_DATABASE_DETAILS table function - Get database information metrics | DATA OBJECT METRICS BASE |
MON_GET_TABLESPACE table function - Get table space metrics | DATA OBJECT METRICS BASE |
Usage
GBP = ( pool_async_index_gbp_l_reads - pool_async_index_gbp_p_reads ) / pool_async_index_gbp_l_reads
Buffer pool hit rates are important factors in the overall performance of the DB2 pureScale instance. Using the previously mentioned formula can help you determine whether the group buffer pool may be a limiting factor on your database's throughput.
pool_async_index_gbp_l_reads - Asynchronous group buffer pool index logical reads monitor element
The number of times a Group Buffer Pool (GBP) dependent index page was attempted to be read from the group buffer pool by a prefetcher because the page was either invalid or not present in the local buffer pool. Outside of a DB2 pureScale environment, this value is null.
Table Function | Monitor Element Collection Level |
---|---|
MON_GET_BUFFERPOOL table function - Get buffer pool metrics | DATA OBJECT METRICS BASE |
MON_GET_DATABASE table function - Get database level information | DATA OBJECT METRICS BASE |
MON_GET_DATABASE_DETAILS table function - Get database information metrics | DATA OBJECT METRICS BASE |
MON_GET_TABLESPACE table function - Get table space metrics | DATA OBJECT METRICS BASE |
Usage
GBP = ( pool_async_index_gbp_l_reads - pool_async_index_gbp_p_reads ) / pool_async_index_gbp_l_reads
Buffer pool hit rates are important factors in the overall performance of the DB2 pureScale instance. Using the previously mentioned formula can help you determine whether the group buffer pool may be a limiting factor on your database's throughput.
pool_async_index_gbp_p_reads - Asynchronous group buffer pool index physical reads monitor element
The number of times a Group Buffer Pool (GBP) dependent index page was read into the local buffer pool by a prefetcher from disk because it was not found in the GBP. Outside of a DB2 pureScale environment, this value is null.
Table Function | Monitor Element Collection Level |
---|---|
MON_GET_BUFFERPOOL table function - Get buffer pool metrics | DATA OBJECT METRICS BASE |
MON_GET_DATABASE table function - Get database level information | DATA OBJECT METRICS BASE |
MON_GET_DATABASE_DETAILS table function - Get database information metrics | DATA OBJECT METRICS BASE |
MON_GET_TABLESPACE table function - Get table space metrics | DATA OBJECT METRICS BASE |
Usage
GBP = ( pool_async_index_gbp_l_reads - pool_async_index_gbp_p_reads ) / pool_async_index_gbp_l_reads
Buffer pool hit rates are important factors in the overall performance of the DB2 pureScale instance. Using the previously mentioned formula can help you determine whether the group buffer pool may be a limiting factor on your database's throughput.
pool_async_index_lbp_pages_found - Asynchronous local buffer pool index pages found monitor element
The number of times an index page was present in the local buffer pool when a prefetcher attempted to access it.
Table Function | Monitor Element Collection Level |
---|---|
MON_GET_BUFFERPOOL table function - Get buffer pool metrics | DATA OBJECT METRICS BASE |
MON_GET_DATABASE table function - Get database level information | DATA OBJECT METRICS BASE |
MON_GET_DATABASE_DETAILS table function - Get database information metrics | DATA OBJECT METRICS BASE |
MON_GET_TABLESPACE table function - Get table space metrics | DATA OBJECT METRICS BASE |
Usage
GBP = ( pool_async_index_gbp_l_reads - pool_async_index_gbp_p_reads ) / pool_async_index_gbp_l_reads
Buffer pool hit rates are important factors in the overall performance of the DB2 pureScale instance. Using the previously mentioned formula can help you determine whether the group buffer pool may be a limiting factor on your database's throughput.
pool_async_index_read_reqs - Buffer pool asynchronous index read requests monitor element
The number of asynchronous read requests for index pages.
Table Function | Monitor Element Collection Command and Level |
---|---|
MON_GET_BUFFERPOOL table function - Get buffer pool metrics | DATA OBJECT METRICS BASE |
MON_GET_DATABASE table function - Get database level information | DATA OBJECT METRICS BASE |
MON_GET_DATABASE_DETAILS table function - Get database information metrics | DATA OBJECT METRICS BASE |
MON_GET_TABLESPACE table function - Get table space metrics | DATA OBJECT METRICS BASE |
Snapshot Level | Logical Data Grouping | Monitor Switch |
---|---|---|
Database | dbase | Buffer Pool |
Table Space | tablespace | Buffer Pool |
Buffer Pool | bufferpool | Buffer Pool |
Event Type | Logical Data Grouping | Monitor Switch |
---|---|---|
Database | event_db | Always collected |
Tablespaces | event_tablespace | Always collected |
Usage
To calculate the number of index pages read per asynchronous request, use the following formula:
pool_async_index_reads / pool_async_index_read_reqs
This average can help you determine the amount of asynchronous I/O done for index pages in each interaction with the prefetcher.
pool_async_index_reads - Buffer pool asynchronous index reads monitor element
Indicates the number of index pages read in from the table space containers (physical) by asynchronous engine dispatchable units (EDUs) for all types of table spaces.
Table Function | Monitor Element Collection Level |
---|---|
MON_GET_BUFFERPOOL table function - Get buffer pool metrics | DATA OBJECT METRICS BASE |
MON_GET_DATABASE table function - Get database level information | DATA OBJECT METRICS BASE |
MON_GET_DATABASE_DETAILS table function - Get database information metrics | DATA OBJECT METRICS BASE |
MON_GET_TABLESPACE table function - Get table space metrics | DATA OBJECT METRICS BASE |
Snapshot Level | Logical Data Grouping | Monitor Switch |
---|---|---|
Database | dbase | Buffer Pool |
Table Space | tablespace | Buffer Pool |
Buffer Pool | bufferpool | Buffer Pool |
Event Type | Logical Data Grouping | Monitor Switch |
---|---|---|
Database | event_db | Always collected |
Tablespaces | event_tablespace | Always collected |
Usage
You can calculate the number of synchronous index reads using the following formula:
# of sync index reads = pool_index_p_reads + pool_temp_index_p_reads - pool_async_index_reads
To calculate the fraction of index physical reads performed asynchronously by prefetchers, use the following formula:
(pool_async_index_reads) / (pool_index_p_reads + pool_temp_index_p_reads)
Asynchronous reads are performed by database manager prefetchers.
pool_async_index_writes - Buffer pool asynchronous index writes monitor element
The number of times a buffer pool index page was physically written to disk by either an asynchronous page cleaner, or a prefetcher. A prefetcher may have written dirty pages to disk to make space for the pages being prefetched.
Table Function | Monitor Element Collection Command and Level |
---|---|
MON_GET_BUFFERPOOL table function - Get buffer pool metrics | DATA OBJECT METRICS BASE |
MON_GET_DATABASE table function - Get database level information | DATA OBJECT METRICS BASE |
MON_GET_DATABASE_DETAILS table function - Get database information metrics | DATA OBJECT METRICS BASE |
MON_GET_TABLESPACE table function - Get table space metrics | DATA OBJECT METRICS BASE |
Snapshot Level | Logical Data Grouping | Monitor Switch |
---|---|---|
Database | dbase | Buffer Pool |
Table Space | tablespace | Buffer Pool |
Buffer Pool | bufferpool | Buffer Pool |
Event Type | Logical Data Grouping | Monitor Switch |
---|---|---|
Database | event_db | Always collected |
Tablespaces | event_tablespace | Always collected |
Usage
You can use this element with the pool_index_writes monitor element to calculate the number of index page writes performed synchronously, using the following formula:
pool_index_writes - pool_async_index_writes
By comparing the ratio of asynchronous to synchronous writes, you can gain insight into how well the buffer pool page cleaners are performing. This ratio can be helpful when you are tuning the num_iocleaners configuration parameter.
pool_async_read_time - Buffer Pool Asynchronous Read Time monitor element
Indicates the total amount of time spent reading in data and index pages from the table space containers (physical) by asynchronous engine dispatchable units (EDUs) for all types of table spaces. This value is given in milliseconds.
Table Function | Monitor Element Collection Level |
---|---|
MON_GET_BUFFERPOOL table function - Get buffer pool metrics | DATA OBJECT METRICS BASE |
MON_GET_DATABASE table function - Get database level information | DATA OBJECT METRICS BASE |
MON_GET_DATABASE_DETAILS table function - Get database information metrics | DATA OBJECT METRICS BASE |
MON_GET_TABLESPACE table function - Get table space metrics | DATA OBJECT METRICS BASE |
Snapshot Level | Logical Data Grouping | Monitor Switch |
---|---|---|
Database | dbase | Buffer Pool |
Table Space | tablespace | Buffer Pool |
Buffer Pool | bufferpool | Buffer Pool |
Event Type | Logical Data Grouping | Monitor Switch |
---|---|---|
Database | event_db | Always collected |
Tablespaces | event_tablespace | Always collected |
Usage
You can use this element to calculate the elapsed time for synchronous reading, using the following formula:
pool_read_time - pool_async_read_time
You can also use this element to calculate the average asynchronous read time, using the following formula:
pool_async_read_time / pool_async_data_reads
These calculations can be used to understand the I/O work being performed.
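For example, a minimal SQL sketch of both calculations, assuming the elements are exposed through the MON_GET_BUFFERPOOL table function:

```sql
-- Synchronous read time and average asynchronous read time (ms per page).
SELECT bp_name,
       pool_read_time - pool_async_read_time AS sync_read_time_ms,
       CASE WHEN pool_async_data_reads > 0
            THEN DECIMAL(pool_async_read_time, 31, 2)
                 / pool_async_data_reads
       END AS avg_async_read_ms
FROM TABLE(MON_GET_BUFFERPOOL(NULL, -2)) AS t
```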
pool_async_write_time - Buffer pool asynchronous write time monitor element
Cumulative elapsed time for each asynchronous write to complete. This value is reported in milliseconds.
Table Function | Monitor Element Collection Level |
---|---|
MON_GET_BUFFERPOOL table function - Get buffer pool metrics | DATA OBJECT METRICS BASE |
MON_GET_DATABASE table function - Get database level information | DATA OBJECT METRICS BASE |
MON_GET_DATABASE_DETAILS table function - Get database information metrics | DATA OBJECT METRICS BASE |
MON_GET_TABLESPACE table function - Get table space metrics | DATA OBJECT METRICS BASE |
Snapshot Level | Logical Data Grouping | Monitor Switch |
---|---|---|
Database | dbase | Buffer Pool |
Table Space | tablespace | Buffer Pool |
Buffer Pool | bufferpool | Buffer Pool |
Event Type | Logical Data Grouping | Monitor Switch |
---|---|---|
Database | event_db | Always collected |
Tablespaces | event_tablespace | Always collected |
Usage
You can use this element to calculate the elapsed time for synchronous writes, using the following formula:
pool_write_time - pool_async_write_time
You can also use this element to calculate the average asynchronous write time per page, using the following formula:
pool_async_write_time / (pool_async_data_writes + pool_async_index_writes)
These calculations can be used to understand the I/O work being performed.
pool_async_xda_gbp_indep_pages_found_in_lbp - Group buffer pool independent XML storage object (XDA) pages found by asynchronous EDUs in a local buffer pool monitor element
The number of group buffer pool (GBP) independent XML storage object (XDA) pages found in a local buffer pool by asynchronous EDUs.
Table Function | Monitor Element Collection Level |
---|---|
MON_GET_BUFFERPOOL table function - Get buffer pool metrics | DATA OBJECT METRICS BASE |
MON_GET_DATABASE table function - Get database level information | DATA OBJECT METRICS BASE |
MON_GET_DATABASE_DETAILS table function - Get database information metrics | DATA OBJECT METRICS BASE |
MON_GET_TABLESPACE table function - Get table space metrics | DATA OBJECT METRICS BASE |
pool_async_xda_gbp_invalid_pages - Asynchronous group buffer pool invalid XDA data pages monitor element
The number of times a request for a data page for an XML storage object (XDA) was made from the group buffer pool by a prefetcher due to the page being marked invalid in the local buffer pool. Outside of a DB2 pureScale environment, this value is null.
Table Function | Monitor Element Collection Level |
---|---|
MON_GET_BUFFERPOOL table function - Get buffer pool metrics | DATA OBJECT METRICS BASE |
MON_GET_DATABASE table function - Get database level information | DATA OBJECT METRICS BASE |
MON_GET_DATABASE_DETAILS table function - Get database information metrics | DATA OBJECT METRICS BASE |
MON_GET_TABLESPACE table function - Get table space metrics | DATA OBJECT METRICS BASE |
Usage
To calculate the group buffer pool (GBP) hit ratio for XDA pages read by prefetchers, use the following formula:
( pool_async_xda_gbp_l_reads - pool_async_xda_gbp_p_reads ) / pool_async_xda_gbp_l_reads
Buffer pool hit rates are important factors in the overall performance of the DB2 pureScale instance. Using the previously mentioned formula can help you determine whether the group buffer pool may be a limiting factor on your database's throughput.
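For example, a sketch of this hit ratio in SQL, using the MON_GET_BUFFERPOOL table function listed above (the CASE guard also skips rows where the elements are null, as they are outside a DB2 pureScale environment):

```sql
-- Group buffer pool hit ratio for XDA pages read by prefetchers.
SELECT bp_name,
       CASE WHEN pool_async_xda_gbp_l_reads > 0
            THEN DECIMAL(pool_async_xda_gbp_l_reads
                         - pool_async_xda_gbp_p_reads, 31, 4)
                 / pool_async_xda_gbp_l_reads
       END AS async_xda_gbp_hit_ratio
FROM TABLE(MON_GET_BUFFERPOOL(NULL, -2)) AS t
```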
pool_async_xda_gbp_l_reads - Group buffer pool XDA data asynchronous logical read requests monitor element
The number of times a prefetcher attempted to read a GBP dependent data page for an XML storage object (XDA) from the group buffer pool because the page was either invalid or not present in the local buffer pool. Outside of a DB2 pureScale environment, this value is null.
Table Function | Monitor Element Collection Level |
---|---|
MON_GET_BUFFERPOOL table function - Get buffer pool metrics | DATA OBJECT METRICS BASE |
MON_GET_DATABASE table function - Get database level information | DATA OBJECT METRICS BASE |
MON_GET_DATABASE_DETAILS table function - Get database information metrics | DATA OBJECT METRICS BASE |
MON_GET_TABLESPACE table function - Get table space metrics | DATA OBJECT METRICS BASE |
Usage
To calculate the group buffer pool (GBP) hit ratio for XDA pages read by prefetchers, use the following formula:
( pool_async_xda_gbp_l_reads - pool_async_xda_gbp_p_reads ) / pool_async_xda_gbp_l_reads
Buffer pool hit rates are important factors in the overall performance of the DB2 pureScale instance. Using the previously mentioned formula can help you determine whether the group buffer pool may be a limiting factor on your database's throughput.
pool_async_xda_gbp_p_reads - Group buffer pool XDA data asynchronous physical read requests monitor element
The number of times a GBP dependent data page for an XML storage object (XDA) was read into the local buffer pool by a prefetcher from disk because it was not found in the GBP. Outside of a DB2 pureScale environment, this value is null.
Table Function | Monitor Element Collection Level |
---|---|
MON_GET_BUFFERPOOL table function - Get buffer pool metrics | DATA OBJECT METRICS BASE |
MON_GET_DATABASE table function - Get database level information | DATA OBJECT METRICS BASE |
MON_GET_DATABASE_DETAILS table function - Get database information metrics | DATA OBJECT METRICS BASE |
MON_GET_TABLESPACE table function - Get table space metrics | DATA OBJECT METRICS BASE |
Usage
To calculate the group buffer pool (GBP) hit ratio for XDA pages read by prefetchers, use the following formula:
( pool_async_xda_gbp_l_reads - pool_async_xda_gbp_p_reads ) / pool_async_xda_gbp_l_reads
Buffer pool hit rates are important factors in the overall performance of the DB2 pureScale instance. Using the previously mentioned formula can help you determine whether the group buffer pool may be a limiting factor on your database's throughput.
pool_async_xda_lbp_pages_found - Asynchronous local buffer pool XDA data pages found monitor element
The number of times a data page for an XML storage object (XDA) was requested by a prefetcher and found in the local buffer pool.
Table Function | Monitor Element Collection Level |
---|---|
MON_GET_BUFFERPOOL table function - Get buffer pool metrics | DATA OBJECT METRICS BASE |
MON_GET_DATABASE table function - Get database level information | DATA OBJECT METRICS BASE |
MON_GET_DATABASE_DETAILS table function - Get database information metrics | DATA OBJECT METRICS BASE |
MON_GET_TABLESPACE table function - Get table space metrics | DATA OBJECT METRICS BASE |
Usage
To calculate the group buffer pool (GBP) hit ratio for XDA pages read by prefetchers, use the following formula:
( pool_async_xda_gbp_l_reads - pool_async_xda_gbp_p_reads ) / pool_async_xda_gbp_l_reads
Buffer pool hit rates are important factors in the overall performance of the DB2 pureScale instance. Using the previously mentioned formula can help you determine whether the group buffer pool may be a limiting factor on your database's throughput.
pool_async_xda_read_reqs - Buffer pool asynchronous XDA read requests monitor element
The number of asynchronous read requests for XML storage object (XDA) data.
Table Function | Monitor Element Collection Level |
---|---|
MON_GET_BUFFERPOOL table function - Get buffer pool metrics | DATA OBJECT METRICS BASE |
MON_GET_DATABASE table function - Get database level information | DATA OBJECT METRICS BASE |
MON_GET_DATABASE_DETAILS table function - Get database information metrics | DATA OBJECT METRICS BASE |
MON_GET_TABLESPACE table function - Get table space metrics | DATA OBJECT METRICS BASE |
Snapshot Level | Logical Data Grouping | Monitor Switch |
---|---|---|
Database | dbase | Buffer Pool |
Table Space | tablespace | Buffer Pool |
Buffer Pool | bufferpool | Buffer Pool |
Event Type | Logical Data Grouping | Monitor Switch |
---|---|---|
Database | event_db | Always collected |
Tablespaces | event_tablespace | Always collected |
Usage
To calculate the average number of XML storage object data pages read per asynchronous request, use the following formula:
pool_async_xda_reads / pool_async_xda_read_reqs
This average can help you determine the amount of asynchronous I/O done in each interaction with the prefetcher.
pool_async_xda_reads - Buffer pool asynchronous XDA data reads monitor element
Indicates the number of XML storage object (XDA) data pages read in from the table space containers (physical) by asynchronous engine dispatchable units (EDUs) for all types of table spaces.
Table Function | Monitor Element Collection Level |
---|---|
MON_GET_BUFFERPOOL table function - Get buffer pool metrics | DATA OBJECT METRICS BASE |
MON_GET_DATABASE table function - Get database level information | DATA OBJECT METRICS BASE |
MON_GET_DATABASE_DETAILS table function - Get database information metrics | DATA OBJECT METRICS BASE |
MON_GET_TABLESPACE table function - Get table space metrics | DATA OBJECT METRICS BASE |
Snapshot Level | Logical Data Grouping | Monitor Switch |
---|---|---|
Database | dbase | Buffer Pool |
Table Space | tablespace | Buffer Pool |
Buffer Pool | bufferpool | Buffer Pool |
Event Type | Logical Data Grouping | Monitor Switch |
---|---|---|
Database | event_db | Always collected |
Tablespaces | event_tablespace | Always collected |
Usage
You can use this element with the pool_xda_p_reads and pool_temp_xda_p_reads monitor elements to calculate the number of XDA data pages read synchronously:
pool_xda_p_reads + pool_temp_xda_p_reads - pool_async_xda_reads
By comparing the ratio of asynchronous to synchronous reads, you can gain insight into how well the prefetchers are working. This element can be helpful when you are tuning the num_ioservers configuration parameter.
Asynchronous reads are performed by database manager prefetchers.
pool_async_xda_writes - Buffer pool asynchronous XDA data writes monitor element
The number of times a buffer pool data page for an XML storage object (XDA) was physically written to disk by either an asynchronous page cleaner or a prefetcher. A prefetcher may have written dirty pages to disk to make space for the pages being prefetched.
Table Function | Monitor Element Collection Level |
---|---|
MON_GET_BUFFERPOOL table function - Get buffer pool metrics | DATA OBJECT METRICS BASE |
MON_GET_DATABASE table function - Get database level information | DATA OBJECT METRICS BASE |
MON_GET_DATABASE_DETAILS table function - Get database information metrics | DATA OBJECT METRICS BASE |
MON_GET_TABLESPACE table function - Get table space metrics | DATA OBJECT METRICS BASE |
Snapshot Level | Logical Data Grouping | Monitor Switch |
---|---|---|
Database | dbase | Buffer Pool |
Table Space | tablespace | Buffer Pool |
Buffer Pool | bufferpool | Buffer Pool |
Event Type | Logical Data Grouping | Monitor Switch |
---|---|---|
Database | event_db | Always collected |
Tablespaces | event_tablespace | Always collected |
Usage
You can use this element with the pool_xda_writes monitor element to calculate the number of physical write requests that were performed synchronously on XML storage object data pages (that is, physical data page writes that were performed by database manager agents on XML data). Use the following formula:
pool_xda_writes - pool_async_xda_writes
By comparing the ratio of asynchronous to synchronous writes, you can gain insight into how well the buffer pool page cleaners are performing. This ratio can be helpful when you are tuning the num_iocleaners configuration parameter.
pool_col_gbp_indep_pages_found_in_lbp - Buffer pool column-organized GBP independent pages found in local buffer pool monitor element
The number of group buffer pool (GBP) independent column-organized pages found in a local buffer pool (LBP) by an agent.
Event Type | Logical Data Grouping | Monitor Switch |
---|---|---|
Activities | event_activity (reported in the details_xml document) | ACTIVITY METRICS BASE |
Activities | event_activitymetrics | ACTIVITY METRICS BASE |
Statistics | event_scstats (reported in the metrics document) | REQUEST METRICS BASE |
Statistics | event_wlstats (reported in the metrics document) | REQUEST METRICS BASE |
Unit of work | Reported in the system_metrics document. | REQUEST METRICS BASE |
Package cache | - | ACTIVITY METRICS BASE |
Locking | - | Always collected |
pool_col_gbp_invalid_pages - Buffer pool column-organized GBP invalid data pages monitor element
The number of times a column-organized page was invalid in the local buffer pool and was read from the group buffer pool instead. Outside of a DB2 pureScale environment, this value is null.
Event Type | Logical Data Grouping | Monitor Switch |
---|---|---|
Activities | event_activity (reported in the details_xml document) | ACTIVITY METRICS BASE |
Activities | event_activitymetrics | ACTIVITY METRICS BASE |
Statistics | event_scstats (reported in the metrics document) | REQUEST METRICS BASE |
Statistics | event_wlstats (reported in the metrics document) | REQUEST METRICS BASE |
Unit of work | Reported in the system_metrics document. | REQUEST METRICS BASE |
Package cache | - | ACTIVITY METRICS BASE |
Locking | - | Always collected |
pool_col_gbp_l_reads - Buffer pool column-organized GBP logical reads monitor element
The number of column-organized pages which have been requested from the buffer pool (logical) for regular and large table spaces.
Event Type | Logical Data Grouping | Monitor Switch |
---|---|---|
Activities | event_activity (reported in the details_xml document) | ACTIVITY METRICS BASE |
Activities | event_activitymetrics | ACTIVITY METRICS BASE |
Statistics | event_scstats (reported in the metrics document) | REQUEST METRICS BASE |
Statistics | event_wlstats (reported in the metrics document) | REQUEST METRICS BASE |
Unit of work | Reported in the system_metrics document. | REQUEST METRICS BASE |
Package cache | - | ACTIVITY METRICS BASE |
Locking | - | Always collected |
Usage
To determine how many times a requested column-organized page was found in the local buffer pool, use the following formula:
(pool_col_lbp_pages_found - pool_async_col_lbp_pages_found) / pool_col_l_reads
To determine how many times a requested page was found in the group buffer pool, use the following formula:
(pool_col_gbp_l_reads - pool_col_gbp_p_reads) / pool_col_gbp_l_reads
Local buffer pool and group buffer pool hit rates are both important factors in the overall performance of the cluster caching facility. Using these formulas can help you determine whether the local or group buffer pool may be a limiting factor on your database's throughput.
pool_col_gbp_p_reads - Buffer pool column-organized GBP physical reads monitor element
Indicates the number of column-organized pages read in from the table space containers (physical) for regular and large table spaces.
Event Type | Logical Data Grouping | Monitor Switch |
---|---|---|
Activities | event_activity (reported in the details_xml document) | ACTIVITY METRICS BASE |
Activities | event_activitymetrics | ACTIVITY METRICS BASE |
Statistics | event_scstats (reported in the metrics document) | REQUEST METRICS BASE |
Statistics | event_wlstats (reported in the metrics document) | REQUEST METRICS BASE |
Unit of work | Reported in the system_metrics document. | REQUEST METRICS BASE |
Package cache | - | ACTIVITY METRICS BASE |
Locking | - | Always collected |
Usage
To determine how many times a requested column-organized page was found in the local buffer pool, use the following formula:
(pool_col_lbp_pages_found - pool_async_col_lbp_pages_found) / pool_col_l_reads
To determine how many times a requested page was found in the group buffer pool, use the following formula:
(pool_col_gbp_l_reads - pool_col_gbp_p_reads) / pool_col_gbp_l_reads
Local buffer pool and group buffer pool hit rates are both important factors in the overall performance of the cluster caching facility. Using these formulas can help you determine whether the local or group buffer pool may be a limiting factor on your database's throughput.
pool_col_l_reads - Buffer pool column-organized logical reads monitor element
Indicates the number of column-organized pages requested from the buffer pool (logical) for regular and large table spaces.
Event Type | Logical Data Grouping | Monitor Switch |
---|---|---|
Activities | event_activity (reported in the details_xml document) | ACTIVITY METRICS BASE |
Activities | event_activitymetrics | ACTIVITY METRICS BASE |
Statistics | event_scstats (reported in the details_xml document) | REQUEST METRICS BASE |
Statistics | event_wlstats (reported in the details_xml document) | REQUEST METRICS BASE |
Unit of work | Reported in the system_metrics document. | REQUEST METRICS BASE |
Activities | event_activity | Buffer Pool, Statement |
Package cache | Reported in the activity_metrics document. | ACTIVITY METRICS BASE |
Usage
This count includes accesses to column-organized pages that are:
- Already in the buffer pool when the database manager needs to process the page.
- Read into the buffer pool before the database manager can process the page.
Column-organized monitor elements can help you understand what portion of the I/O is being driven by access to column-organized tables when a workload includes a mixture of both row-organized and column-organized tables.
To calculate the local buffer pool hit ratio for column-organized pages, use the following formula:
(pool_col_lbp_pages_found - pool_async_col_lbp_pages_found) / pool_col_l_reads
Increasing the buffer pool size generally improves the hit ratio, but you will reach a point of diminishing returns. Ideally, if you could allocate a buffer pool large enough to store your entire database, then once the system is up and running you would get a hit ratio of 100%. However, this is unrealistic in most cases. The significance of the hit ratio depends on the size of your data and the way it is accessed. A very large database where data is accessed evenly will tend to have a poor hit ratio, and there is little you can do to improve the hit ratio for very large tables.
To improve hit ratios for smaller, frequently accessed tables and indexes, assign them to individual buffer pools.
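For example, a sketch of the local buffer pool hit ratio for column-organized pages, assuming these elements are also exposed as columns of the MON_GET_BUFFERPOOL table function:

```sql
-- Column-organized local buffer pool hit ratio, per buffer pool.
SELECT bp_name,
       CASE WHEN pool_col_l_reads > 0
            THEN DECIMAL(pool_col_lbp_pages_found
                         - pool_async_col_lbp_pages_found, 31, 4)
                 / pool_col_l_reads
       END AS col_lbp_hit_ratio
FROM TABLE(MON_GET_BUFFERPOOL(NULL, -2)) AS t
```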
pool_col_lbp_pages_found - Buffer pool column-organized LBP pages found monitor element
Indicates the number of times that a column-organized page was present in the local buffer pool.
Event Type | Logical Data Grouping | Monitor Switch |
---|---|---|
Activities | event_activity (reported in the details_xml document) | ACTIVITY METRICS BASE |
Activities | event_activitymetrics | ACTIVITY METRICS BASE |
Statistics | event_scstats (reported in the metrics document) | REQUEST METRICS BASE |
Statistics | event_wlstats (reported in the metrics document) | REQUEST METRICS BASE |
Unit of work | Reported in the system_metrics document. | REQUEST METRICS BASE |
Package cache | Reported in the activity_metrics document. | ACTIVITY METRICS BASE |
Locking | - | Always collected |
Usage
To determine how many times a requested column-organized page was found in the local buffer pool, use the following formula:
(pool_col_lbp_pages_found - pool_async_col_lbp_pages_found) / pool_col_l_reads
To determine how many times a requested page was found in the group buffer pool, use the following formula:
(pool_col_gbp_l_reads - pool_col_gbp_p_reads) / pool_col_gbp_l_reads
Local buffer pool and group buffer pool hit rates are both important factors in the overall performance of the cluster caching facility. Using these formulas can help you determine whether the local or group buffer pool may be a limiting factor on your database's throughput.
Column-organized monitor elements can help you understand what portion of the I/O is being driven by access to column-organized tables when a workload includes a mixture of both row-organized and column-organized tables.
pool_col_p_reads - Buffer pool column-organized physical reads monitor element
Indicates the number of column-organized pages read in from the table space containers (physical) for regular and large table spaces.
Event Type | Logical Data Grouping | Monitor Switch |
---|---|---|
Activities | event_activity (reported in the details_xml document) | ACTIVITY METRICS BASE |
Activities | event_activitymetrics | ACTIVITY METRICS BASE |
Statistics | event_scstats (reported in the metrics document) | REQUEST METRICS BASE |
Statistics | event_wlstats (reported in the metrics document) | REQUEST METRICS BASE |
Unit of work | Reported in the system_metrics document. | REQUEST METRICS BASE |
Activities | event_activity | Buffer Pool, Statement |
Package cache | Reported in the activity_metrics document. | ACTIVITY METRICS BASE |
pool_col_writes - Buffer pool column-organized writes monitor element
The number of times a buffer pool column-organized page was physically written to disk.
Event Type | Logical Data Grouping | Monitor Switch |
---|---|---|
Activities | event_activity (reported in the details_xml document) | ACTIVITY METRICS BASE |
Activities | event_activitymetrics | ACTIVITY METRICS BASE |
Statistics | event_scstats (reported in the metrics document) | REQUEST METRICS BASE |
Statistics | event_wlstats (reported in the metrics document) | REQUEST METRICS BASE |
Unit of work | Reported in the system_metrics document. | REQUEST METRICS BASE |
Package cache | Reported in the activity_metrics document. | ACTIVITY METRICS BASE |
Usage
If a buffer pool data page is written to disk for a high percentage of the value of the pool_col_p_reads monitor element, you may be able to improve performance by increasing the number of buffer pool pages available for the database.
A buffer pool data page is written to disk for one of the following reasons:
- To free a page in the buffer pool so another page can be read
- To flush the buffer pool
The system does not always write a page to make room for a new one. If the page has not been updated, it can simply be replaced. This replacement is not counted for this element.
The data page can be written by an asynchronous page-cleaner agent before the buffer pool space is required, as reported by the pool_async_col_writes monitor element. These asynchronous page writes are included in the value of this element in addition to synchronous page writes.
To measure the number of writes that occur while your application runs, follow these steps:
- Run your application (to load the buffer).
- Note the value of this element.
- Run your application again.
- Subtract the value recorded in the second step from the new value of this element.
To prevent the value of this element from being reset between the two runs, do one of the following:
- Activate the database with the ACTIVATE DATABASE command.
- Have an idle application connected to the database.
If all applications are updating the database, increasing the size of the buffer pool may not have much impact on performance since most of the buffer pool pages contain updated data, which must be written to disk. However, if the updated pages can be used by other units of work before being written out, the buffer pool can save a write and a read, which will improve your performance.
pool_config_size - Configured Size of Memory Pool monitor element
The internally configured size of a memory pool in the DB2 database system. The value is given in bytes.
Snapshot Level | Logical Data Grouping | Monitor Switch |
---|---|---|
Database Manager | memory_pool | Basic |
Database | memory_pool | Basic |
Application | memory_pool | Basic |
Event Type | Logical Data Grouping | Monitor Switch |
---|---|---|
Database | event_dbmemuse | Always collected |
Connection | event_connmemuse | Always collected |
Usage
To track system memory usage, use this value in conjunction with pool_cur_size, pool_id, and pool_watermark.
To see if a memory pool is nearly full, compare pool_config_size to pool_cur_size. For example, assume that the utility heap is too small. You can diagnose this specific problem by taking snapshots at regular intervals, and looking in the utility heap section of the snapshot output. If required, pool_cur_size might be allowed to exceed pool_config_size to prevent an out-of-memory failure. If this occurs very infrequently, no further action is likely required. However, if pool_cur_size is consistently close to or larger than pool_config_size, you might consider increasing the size of the utility heap.
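For example, instead of scanning snapshot output manually, you can flag nearly full pools with a query; this is a sketch, assuming the SYSIBMADM.SNAPDB_MEMORY_POOL administrative view is available in your release and exposes these elements as columns:

```sql
-- Memory pools whose current size is at or above 90% of the
-- configured size.
SELECT pool_id,
       pool_cur_size,
       pool_config_size,
       pool_watermark
FROM SYSIBMADM.SNAPDB_MEMORY_POOL
WHERE pool_config_size > 0
  AND pool_cur_size >= 0.9 * pool_config_size
```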
pool_cur_size - Current Size of Memory Pool monitor element
The current size of a memory pool. The value is given in bytes.
Snapshot Level | Logical Data Grouping | Monitor Switch |
---|---|---|
Database Manager | memory_pool | Basic |
Database | memory_pool | Basic |
Application | memory_pool | Basic |
Event Type | Logical Data Grouping | Monitor Switch |
---|---|---|
Database | event_dbmemuse | Always collected |
Connection | event_connmemuse | Always collected |
Usage
To track system memory usage, use this value in conjunction with pool_config_size, pool_id, and pool_watermark.
To see if a memory pool is nearly full, compare pool_config_size to pool_cur_size. For example, assume that the utility heap is too small. You can diagnose this specific problem by taking snapshots at regular intervals, and looking in the utility heap section of the snapshot output. If the value of pool_cur_size is consistently close to pool_config_size, you may want to consider increasing the size of the utility heap.
pool_data_gbp_indep_pages_found_in_lbp - Group buffer pool independent data pages found in local buffer pool monitor element
The number of group buffer pool (GBP) independent data pages found in a local buffer pool (LBP) by an agent. Outside of a DB2 pureScale environment, this value is null.
Event Type | Logical Data Grouping | Monitor Switch |
---|---|---|
Activities | event_activitymetrics | ACTIVITY METRICS BASE |
Package Cache | pkgcache_metrics | ACTIVITY METRICS BASE |
Statistics | event_scmetrics, event_wlmetrics | REQUEST METRICS BASE |
Unit of Work | uow_metrics | REQUEST METRICS BASE |
pool_data_gbp_invalid_pages - Group buffer pool invalid data pages monitor element
The number of times a data page was invalid in the local buffer pool and was read from the group buffer pool instead. Outside of a DB2 pureScale environment, this value is null.
Event Type | Logical Data Grouping | Monitor Switch |
---|---|---|
Activities | event_activity (reported in the details_xml document) | ACTIVITY METRICS BASE |
Activities | event_activitymetrics | ACTIVITY METRICS BASE |
Statistics | event_scstats (reported in the metrics document) | REQUEST METRICS BASE |
Statistics | event_wlstats (reported in the metrics document) | REQUEST METRICS BASE |
Unit of work | uow_metrics | REQUEST METRICS BASE |
Package cache | pkgcache_metrics | ACTIVITY METRICS BASE |
Usage
To determine how many times a requested data page was found in the local buffer pool, use the following formula:
(pool_data_lbp_pages_found - pool_async_data_lbp_pages_found) / pool_data_l_reads
To determine how many times a requested page was found in the group buffer pool, use the following formula:
(pool_data_gbp_l_reads - pool_data_gbp_p_reads) / pool_data_gbp_l_reads
Local buffer pool and group buffer pool hit rates are both important factors in the overall performance of the cluster caching facility. Using these formulas can help you determine whether the local or group buffer pool may be a limiting factor on your database's throughput.
pool_data_gbp_l_reads - Group buffer pool data logical reads monitor element
The number of times an attempt was made to read a Group Buffer Pool (GBP) dependent data page from the group buffer pool because the page was either invalid or not present in the Local Buffer Pool (LBP). Outside of a DB2 pureScale environment, this value is null.
Event Type | Logical Data Grouping | Monitor Switch |
---|---|---|
Activities | event_activity (reported in the details_xml document) | ACTIVITY METRICS BASE |
Activities | event_activitymetrics | ACTIVITY METRICS BASE |
Statistics | event_scstats (reported in the metrics document) | REQUEST METRICS BASE |
Statistics | event_wlstats (reported in the metrics document) | REQUEST METRICS BASE |
Unit of work | uow_metrics | REQUEST METRICS BASE |
Package cache | pkgcache_metrics | ACTIVITY METRICS BASE |
Usage
To determine how many times a requested data page was found in the local buffer pool, use the following formula:
(pool_data_lbp_pages_found - pool_async_data_lbp_pages_found) / pool_data_l_reads
To determine how many times a requested page was found in the group buffer pool, use the following formula:
(pool_data_gbp_l_reads - pool_data_gbp_p_reads) / pool_data_gbp_l_reads
Local buffer pool and group buffer pool hit rates are both important factors in the overall performance of the cluster caching facility. Using these formulas can help you determine whether the local or group buffer pool may be a limiting factor on your database's throughput.
pool_data_gbp_p_reads - Group buffer pool data physical reads monitor element
The number of times a Group Buffer Pool (GBP) dependent data page was read into the local buffer pool from disk because it was not found in the GBP. Outside of a DB2 pureScale environment, this value is null.
Event Type | Logical Data Grouping | Monitor Switch |
---|---|---|
Activities | event_activity (reported in the details_xml document) | ACTIVITY METRICS BASE |
Activities | event_activitymetrics | ACTIVITY METRICS BASE |
Statistics | event_scstats (reported in the metrics document) | REQUEST METRICS BASE |
Statistics | event_wlstats (reported in the metrics document) | REQUEST METRICS BASE |
Unit of work | uow_metrics | REQUEST METRICS BASE |
Package cache | pkgcache_metrics | ACTIVITY METRICS BASE |
Usage
To determine how many times a requested data page was found in the local buffer pool, use the following formula:
(pool_data_lbp_pages_found - pool_async_data_lbp_pages_found) / pool_data_l_reads
To determine how many times a requested page was found in the group buffer pool, use the following formula:
(pool_data_gbp_l_reads - pool_data_gbp_p_reads) / pool_data_gbp_l_reads
Local buffer pool and group buffer pool hit rates are both important factors in the overall performance of the cluster caching facility. Using these formulas can help you determine whether the local or group buffer pool may be a limiting factor on your database's throughput.
pool_data_lbp_pages_found - Local buffer pool found data pages monitor element
The number of times a data page was present in the local buffer pool.
Event Type | Logical Data Grouping | Monitor Switch |
---|---|---|
Activities | event_activity (reported in the details_xml document) | ACTIVITY METRICS BASE |
Activities | event_activitymetrics | ACTIVITY METRICS BASE |
Statistics | event_scstats (reported in the metrics document) | REQUEST METRICS BASE |
Statistics | event_wlstats (reported in the metrics document) | REQUEST METRICS BASE |
Unit of work | uow_metrics | REQUEST METRICS BASE |
Package cache | pkgcache_metrics | ACTIVITY METRICS BASE |
Usage
To determine how many times a requested data page was found in the local buffer pool, use the following formula:
(pool_data_lbp_pages_found - pool_async_data_lbp_pages_found) / pool_data_l_reads
To determine how many times a requested page was found in the group buffer pool, use the following formula:
(pool_data_gbp_l_reads - pool_data_gbp_p_reads) / pool_data_gbp_l_reads
Local buffer pool and group buffer pool hit rates are both important factors in the overall performance of the cluster caching facility. Using these formulas can help you determine whether the local or group buffer pool may be a limiting factor on your database's throughput.
pool_data_l_reads - Buffer pool data logical reads monitor element
The number of data pages which have been requested from the buffer pool (logical) for regular and large table spaces.
Snapshot Level | Logical Data Grouping | Monitor Switch |
---|---|---|
Database | dbase | Buffer Pool |
Table Space | tablespace | Buffer Pool |
Buffer Pool | bufferpool | Buffer Pool |
Application | appl | Buffer Pool |
Application | stmt | Buffer Pool |
Dynamic SQL | dynsql | Buffer Pool, Statement |
Event Type | Logical Data Grouping | Monitor Switch |
---|---|---|
Activities | event_activity (reported in the details_xml document) | ACTIVITY METRICS BASE |
Activities | event_activitymetrics | ACTIVITY METRICS BASE |
Statistics | event_scstats (reported in the details_xml document) | REQUEST METRICS BASE |
Statistics | event_wlstats (reported in the details_xml document) | REQUEST METRICS BASE |
Unit of work | Reported in the system_metrics document. | REQUEST METRICS BASE |
Database | event_db | Always collected |
Tablespaces | event_tablespace | Always collected |
Connection | event_conn | Always collected |
Statement | event_stmt | Always collected |
Activities | event_activity | Buffer Pool, Statement |
Package cache | Reported in the activity_metrics document. | ACTIVITY METRICS BASE |
Usage
This count includes accesses to data pages that are:
- Already in the buffer pool when the database manager needs to process the page.
- Read into the buffer pool before the database manager can process the page.
You can calculate the overall data page hit ratio, as a percentage, using the following formula:
((pool_data_lbp_pages_found - pool_async_data_lbp_pages_found) / (pool_data_l_reads + pool_temp_data_l_reads)) × 100
For more information, see Formulas for calculating buffer pool hit ratios. Increasing the buffer pool size generally improves the hit ratio, but you will reach a point of diminishing returns. Ideally, if you could allocate a buffer pool large enough to store your entire database, then once the system is up and running you would get a hit ratio of 100%. However, this is unrealistic in most cases. The significance of the hit ratio depends on the size of your data and the way it is accessed. A very large database where data is accessed evenly will tend to have a poor hit ratio, and there is little you can do to improve the hit ratio for very large tables.
To improve hit ratios for smaller, frequently accessed tables and indexes, assign them to individual buffer pools.
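For example, a sketch of the hit ratio calculation per buffer pool, assuming the elements are exposed through the MON_GET_BUFFERPOOL table function:

```sql
-- Overall data page hit ratio, as a percentage, per buffer pool.
SELECT bp_name,
       CASE WHEN pool_data_l_reads + pool_temp_data_l_reads > 0
            THEN DECIMAL(pool_data_lbp_pages_found
                         - pool_async_data_lbp_pages_found, 31, 2) * 100
                 / (pool_data_l_reads + pool_temp_data_l_reads)
       END AS data_hit_ratio_pct
FROM TABLE(MON_GET_BUFFERPOOL(NULL, -2)) AS t
```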
pool_data_p_reads - Buffer pool data physical reads monitor element
Indicates the number of data pages read in from the table space containers (physical) for regular and large table spaces.
Snapshot Level | Logical Data Grouping | Monitor Switch |
---|---|---|
Database | dbase | Buffer Pool |
Table Space | tablespace | Buffer Pool |
Buffer Pool | bufferpool | Buffer Pool |
Application | appl | Buffer Pool |
Application | stmt | Buffer Pool |
Dynamic SQL | dynsql | Buffer Pool, Statement |
Event Type | Logical Data Grouping | Monitor Switch |
---|---|---|
Activities | event_activity (reported in the details_xml document) | ACTIVITY METRICS BASE |
Activities | event_activitymetrics | ACTIVITY METRICS BASE |
Statistics | event_scstats (reported in the metrics document) | REQUEST METRICS BASE |
Statistics | event_wlstats (reported in the metrics document) | REQUEST METRICS BASE |
Unit of work | Reported in the system_metrics document. | REQUEST METRICS BASE |
Database | event_db | Always collected |
Tablespaces | event_tablespace | Always collected |
Connection | event_conn | Always collected |
Statement | event_stmt | Always collected |
Activities | event_activity | Buffer Pool, Statement |
Package cache | Reported in the activity_metrics document. | ACTIVITY METRICS BASE |
Usage
To calculate the proportion of physical reads that were performed asynchronously by prefetchers, use the following formula:
(pool_async_data_reads + pool_async_index_reads) / (pool_data_p_reads + pool_temp_data_p_reads + pool_index_p_reads + pool_temp_index_p_reads)
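For example, a sketch of this ratio aggregated across all buffer pools, assuming the elements are exposed through the MON_GET_BUFFERPOOL table function:

```sql
-- Fraction of physical data and index reads performed asynchronously
-- by prefetchers.
SELECT CASE WHEN SUM(pool_data_p_reads + pool_temp_data_p_reads
                     + pool_index_p_reads + pool_temp_index_p_reads) > 0
            THEN DECIMAL(SUM(pool_async_data_reads
                             + pool_async_index_reads), 31, 4)
                 / SUM(pool_data_p_reads + pool_temp_data_p_reads
                       + pool_index_p_reads + pool_temp_index_p_reads)
       END AS async_read_fraction
FROM TABLE(MON_GET_BUFFERPOOL(NULL, -2)) AS t
```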
pool_data_writes - Buffer pool data writes monitor element
The number of times a buffer pool data page was physically written to disk.
Snapshot Level | Logical Data Grouping | Monitor Switch |
---|---|---|
Database | dbase | Buffer Pool |
Table Space | tablespace | Buffer Pool |
Buffer Pool | bufferpool | Buffer Pool |
Application | appl | Buffer Pool |
Event Type | Logical Data Grouping | Monitor Switch |
---|---|---|
Activities | event_activity (reported in the details_xml document) | ACTIVITY METRICS BASE |
Activities | event_activitymetrics | ACTIVITY METRICS BASE |
Statistics | event_scstats (reported in the metrics document) | REQUEST METRICS BASE |
Statistics | event_wlstats (reported in the metrics document) | REQUEST METRICS BASE |
Unit of work | Reported in the system_metrics document. | REQUEST METRICS BASE |
Database | event_db | Always collected |
Tablespaces | event_tablespace | Always collected |
Connection | event_conn | Always collected |
Package cache | Reported in the activity_metrics document. | ACTIVITY METRICS BASE |
Usage
If a buffer pool data page is written to disk for a high percentage of the value of the pool_data_p_reads monitor element, you may be able to improve performance by increasing the number of buffer pool pages available for the database.
A buffer pool data page is written to disk for one of the following reasons:
- To free a page in the buffer pool so another page can be read
- To flush the buffer pool
The system does not always write a page to make room for a new one. If the page has not been updated, it can simply be replaced. This replacement is not counted for this element.
The data page can be written by an asynchronous page-cleaner agent before the buffer pool space is required, as reported by the pool_async_data_writes monitor element. These asynchronous page writes are included in the value of this element in addition to synchronous page writes.
To measure the number of writes that occur while your application runs, follow these steps:
- Run your application (to load the buffer).
- Note the value of this element.
- Run your application again.
- Subtract the value recorded in the second step from the new value of this element.
To prevent the value of this element from being reset between the two runs, do one of the following:
- Activate the database with the ACTIVATE DATABASE command.
- Have an idle application connected to the database.
If all applications are updating the database, increasing the size of the buffer pool may not have much impact on performance since most of the buffer pool pages contain updated data, which must be written to disk. However, if the updated pages can be used by other units of work before being written out, the buffer pool can save a write and a read, which will improve your performance.
pool_drty_pg_steal_clns - Buffer pool victim page cleaners triggered monitor element
The number of times a page cleaner was invoked because a synchronous write was needed during the victim buffer replacement for the database.
Table Function | Monitor Element Collection Level |
---|---|
MON_GET_BUFFERPOOL table function - Get buffer pool metrics | DATA OBJECT METRICS BASE |
MON_GET_DATABASE table function - Get database level information | DATA OBJECT METRICS BASE |
MON_GET_DATABASE_DETAILS table function - Get database information metrics | DATA OBJECT METRICS BASE |
Snapshot Level | Logical Data Grouping | Monitor Switch |
---|---|---|
Database | dbase | Buffer Pool |
Event Type | Logical Data Grouping | Monitor Switch |
---|---|---|
Database | event_db | Always collected |
Usage
You can calculate the proportion of page cleaner invocations that were triggered by the need for a synchronous write during victim buffer replacement using the following formula:
pool_drty_pg_steal_clns / (pool_drty_pg_steal_clns + pool_drty_pg_thrsh_clns + pool_lsn_gap_clns)
If this ratio is low, it may indicate that you have defined too many page cleaners. If your chngpgs_thresh configuration parameter is set too low, you may be writing out pages that you will dirty later. Aggressive cleaning defeats one purpose of the buffer pool, that is to defer writing to the last possible moment.
If this ratio is high, it may indicate that you have not defined enough page cleaners. Not having enough page cleaners increases recovery time after failures.
When the DB2_USE_ALTERNATE_PAGE_CLEANING registry variable is OFF:
- The pool_drty_pg_steal_clns monitor element is inserted into the monitor stream.
- The pool_drty_pg_steal_clns monitor element counts the number of times a page cleaner was invoked because a synchronous write was needed during the victim buffer replacement for the database.
When the DB2_USE_ALTERNATE_PAGE_CLEANING registry variable is ON:
- The pool_drty_pg_steal_clns monitor element inserts 0 into the monitor stream.
- There is no explicit triggering of the page cleaners when a synchronous write is needed during victim buffer replacement. To determine whether or not the right number of page cleaners is configured for the database or for specific buffer pools, refer to the pool_no_victim_buffer monitor element.
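For example, a sketch of this ratio using the MON_GET_DATABASE table function listed above; it assumes the pool_drty_pg_thrsh_clns and pool_lsn_gap_clns elements are exposed there as well:

```sql
-- Proportion of page cleaner invocations triggered by synchronous
-- writes during victim buffer replacement, per member.
SELECT member,
       CASE WHEN pool_drty_pg_steal_clns + pool_drty_pg_thrsh_clns
                 + pool_lsn_gap_clns > 0
            THEN DECIMAL(pool_drty_pg_steal_clns, 31, 4)
                 / (pool_drty_pg_steal_clns + pool_drty_pg_thrsh_clns
                    + pool_lsn_gap_clns)
       END AS victim_trigger_ratio
FROM TABLE(MON_GET_DATABASE(-2)) AS t
```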
pool_drty_pg_thrsh_clns - Buffer pool threshold cleaners triggered monitor element
The number of times a page cleaner was invoked because a buffer pool had reached the dirty page threshold criterion for the database.
Table Function | Monitor Element Collection Level |
---|---|
MON_GET_BUFFERPOOL table function - Get buffer pool metrics | DATA OBJECT METRICS BASE |
MON_GET_DATABASE table function - Get database level information | DATA OBJECT METRICS BASE |
MON_GET_DATABASE_DETAILS table function - Get database information metrics | DATA OBJECT METRICS BASE |
Snapshot Level | Logical Data Grouping | Monitor Switch |
---|---|---|
Database | dbase | Buffer Pool |
Event Type | Logical Data Grouping | Monitor Switch |
---|---|---|
Database | event_db | Always collected |
Usage
The threshold is set by the chngpgs_thresh configuration parameter. It is a percentage applied to the buffer pool size. When the number of dirty pages in the pool exceeds this value, the cleaners are triggered.
If the chngpgs_thresh configuration parameter value is set too low, pages might be written out too early, requiring them to be read back in. If it is set too high, then too many pages may accumulate, requiring users to write out pages synchronously.
When the DB2_USE_ALTERNATE_PAGE_CLEANING registry variable is OFF:
- The pool_drty_pg_thrsh_clns monitor element is inserted into the monitor stream.
- The pool_drty_pg_thrsh_clns monitor element counts the number of times a page cleaner was invoked because a buffer pool had reached the dirty page threshold criterion for the database.
When the DB2_USE_ALTERNATE_PAGE_CLEANING registry variable is ON:
- The pool_drty_pg_thrsh_clns monitor element inserts 0 into the monitor stream.
- Page cleaners are always active, attempting to ensure there are sufficient free buffers for victims available instead of waiting to be triggered by the criterion value.
pool_failed_async_col_reqs - Failed column-organized prefetch requests monitor element
The number of times an attempt to queue a column-organized prefetch request was made but failed. One possible reason is that the prefetch queue is full.
Event type | Logical data grouping | Monitor switch |
---|---|---|
Activities | event_activity (reported in the details_xml document), event_activitymetrics | ACTIVITY METRICS BASE |
Package Cache | pkgcache (reported in the metrics.xml document), pkgcache_metrics | ACTIVITY METRICS BASE |
Unit of Work | uow (reported in the metrics.xml document), uow_metrics | REQUEST METRICS BASE |
Statistics | event_scstats (reported in the metrics document) | REQUEST METRICS BASE |
Statistics | event_wlstats (reported in the metrics document) | REQUEST METRICS BASE |
pool_failed_async_data_reqs - Failed data prefetch requests monitor element
The number of times an attempt to queue a data prefetch request was made but failed. One possible reason is that the prefetch queue is full.
Event Type | Logical Data Grouping | Monitor Switch |
---|---|---|
Activities | event_activity (reported in the details_xml document), event_activitymetrics | ACTIVITY METRICS BASE |
Package Cache | pkgcache (reported in the metrics.xml document), pkgcache_metrics | ACTIVITY METRICS BASE |
Unit of Work | uow (reported in the metrics.xml document), uow_metrics | REQUEST METRICS BASE |
Statistics | event_scstats (reported in the metrics document) | REQUEST METRICS BASE |
Statistics | event_wlstats (reported in the metrics document) | REQUEST METRICS BASE |
Usage
This element, along with the other pool_failed_async_..._reqs elements, tells you how many prefetch requests could not be added to a prefetch queue. Requests can fail to be added to the prefetch queue if the prefetch queue is too small, or if the prefetcher is running too slowly. When requests cannot be added to a prefetch queue, a database agent typically performs disk I/O synchronously, which is less efficient than prefetching. You can use these elements in conjunction with other prefetcher monitor elements to determine how effectively prefetching is done on your system. For example, you can calculate the percentage of requests that were successfully added to the prefetch queue using the following formula:
( 1 -
  ( POOL_FAILED_ASYNC_DATA_REQS +
    POOL_FAILED_ASYNC_INDEX_REQS +
    POOL_FAILED_ASYNC_XDA_REQS +
    POOL_FAILED_ASYNC_COL_REQS +
    POOL_FAILED_ASYNC_TEMP_DATA_REQS +
    POOL_FAILED_ASYNC_TEMP_INDEX_REQS +
    POOL_FAILED_ASYNC_TEMP_XDA_REQS +
    POOL_FAILED_ASYNC_TEMP_COL_REQS )
  ÷
  ( ( POOL_FAILED_ASYNC_DATA_REQS +
      POOL_FAILED_ASYNC_INDEX_REQS +
      POOL_FAILED_ASYNC_XDA_REQS +
      POOL_FAILED_ASYNC_COL_REQS +
      POOL_FAILED_ASYNC_TEMP_DATA_REQS +
      POOL_FAILED_ASYNC_TEMP_INDEX_REQS +
      POOL_FAILED_ASYNC_TEMP_XDA_REQS +
      POOL_FAILED_ASYNC_TEMP_COL_REQS )
    +
    ( POOL_QUEUED_ASYNC_DATA_REQS +
      POOL_QUEUED_ASYNC_INDEX_REQS +
      POOL_QUEUED_ASYNC_XDA_REQS +
      POOL_QUEUED_ASYNC_COL_REQS +
      POOL_QUEUED_ASYNC_TEMP_DATA_REQS +
      POOL_QUEUED_ASYNC_TEMP_INDEX_REQS +
      POOL_QUEUED_ASYNC_TEMP_XDA_REQS +
      POOL_QUEUED_ASYNC_TEMP_COL_REQS )
  )
) × 100
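As an illustration, the same percentage can be computed with a query; this is a sketch that assumes the pool_failed_async_* and pool_queued_async_* elements are also exposed as columns of the MON_GET_BUFFERPOOL table function (the tables above list them only for event monitors):

```sql
-- Percentage of prefetch requests successfully added to a prefetch
-- queue, aggregated across all buffer pools and members.
WITH totals AS (
  SELECT SUM(pool_failed_async_data_reqs + pool_failed_async_index_reqs
             + pool_failed_async_xda_reqs + pool_failed_async_col_reqs
             + pool_failed_async_temp_data_reqs
             + pool_failed_async_temp_index_reqs
             + pool_failed_async_temp_xda_reqs
             + pool_failed_async_temp_col_reqs) AS failed,
         SUM(pool_queued_async_data_reqs + pool_queued_async_index_reqs
             + pool_queued_async_xda_reqs + pool_queued_async_col_reqs
             + pool_queued_async_temp_data_reqs
             + pool_queued_async_temp_index_reqs
             + pool_queued_async_temp_xda_reqs
             + pool_queued_async_temp_col_reqs) AS queued
  FROM TABLE(MON_GET_BUFFERPOOL(NULL, -2)) AS t
)
SELECT CASE WHEN failed + queued > 0
            THEN (1 - DECIMAL(failed, 31, 4) / (failed + queued)) * 100
       END AS pct_reqs_queued
FROM totals
```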
The formula calculates the ratio of successful prefetch requests to the total number of requests made. A failed prefetch request is one that could not be added to the prefetch queue. Requests can fail to be added to the prefetch queue if there are a large number of requests being created, or if the prefetcher is running too slowly due to poor configuration or poor tuning. If the percentage of successful requests is low, this can indicate a bottleneck in the prefetching mechanism. You might need to configure more prefetchers by modifying the value for the configuration parameter num_ioservers. The condition of prefetch queues being full can also be caused by agents submitting too many small requests; you can use the related monitor elements pool_queued_async_..._pages and pool_queued_async_..._reqs to determine the average prefetch request size.
pool_failed_async_index_reqs - Failed index prefetch requests monitor element
The number of times an attempt to queue an index prefetch request was made but failed. One possible reason is that the prefetch queue is full and a request could not be obtained from the free list.
Event Type | Logical Data Grouping | Monitor Switch |
---|---|---|
Activities | event_activity (reported in the details_xml document), event_activitymetrics | ACTIVITY METRICS BASE |
Package Cache | pkgcache (reported in the metrics.xml document), pkgcache_metrics | ACTIVITY METRICS BASE |
Unit of Work | uow (reported in the metrics.xml document), uow_metrics | REQUEST METRICS BASE |
Statistics | event_scstats (reported in the metrics document) | REQUEST METRICS BASE |
Statistics | event_wlstats (reported in the metrics document) | REQUEST METRICS BASE |
Usage
This element, along with the other pool_failed_async_..._reqs elements, tells you how many prefetch requests could not be added to a prefetch queue. Requests can fail to be added to the prefetch queue if the prefetch queue is too small, or if the prefetcher is running too slowly. When requests cannot be added to a prefetch queue, a database agent typically performs disk I/O synchronously, which is less efficient than prefetching. You can use these elements in conjunction with other prefetcher monitor elements to determine how effectively prefetching is done on your system. For example, you can calculate the percentage of requests that were successfully added to the prefetch queue using the following formula:
( 1 -
  ( POOL_FAILED_ASYNC_DATA_REQS +
    POOL_FAILED_ASYNC_INDEX_REQS +
    POOL_FAILED_ASYNC_XDA_REQS +
    POOL_FAILED_ASYNC_COL_REQS +
    POOL_FAILED_ASYNC_TEMP_DATA_REQS +
    POOL_FAILED_ASYNC_TEMP_INDEX_REQS +
    POOL_FAILED_ASYNC_TEMP_XDA_REQS +
    POOL_FAILED_ASYNC_TEMP_COL_REQS )
  ÷
  ( ( POOL_FAILED_ASYNC_DATA_REQS +
      POOL_FAILED_ASYNC_INDEX_REQS +
      POOL_FAILED_ASYNC_XDA_REQS +
      POOL_FAILED_ASYNC_COL_REQS +
      POOL_FAILED_ASYNC_TEMP_DATA_REQS +
      POOL_FAILED_ASYNC_TEMP_INDEX_REQS +
      POOL_FAILED_ASYNC_TEMP_XDA_REQS +
      POOL_FAILED_ASYNC_TEMP_COL_REQS )
    +
    ( POOL_QUEUED_ASYNC_DATA_REQS +
      POOL_QUEUED_ASYNC_INDEX_REQS +
      POOL_QUEUED_ASYNC_XDA_REQS +
      POOL_QUEUED_ASYNC_COL_REQS +
      POOL_QUEUED_ASYNC_TEMP_DATA_REQS +
      POOL_QUEUED_ASYNC_TEMP_INDEX_REQS +
      POOL_QUEUED_ASYNC_TEMP_XDA_REQS +
      POOL_QUEUED_ASYNC_TEMP_COL_REQS )
  )
) × 100
This formula calculates the ratio of successful prefetch requests to the total number of requests made. A failed prefetch request is one that could not be added to the prefetch queue. Requests can fail to be added to the prefetch queue if there are a large number of requests being created, or if the prefetcher is running too slowly due to poor configuration or poor tuning. If the percentage of successful requests is low, this can indicate a bottleneck in the prefetching mechanism. You might need to configure more prefetchers by modifying the value for the configuration parameter num_ioservers. The condition of prefetch queues being full can also be caused by agents submitting too many small requests; you can use the related monitor elements pool_queued_async_..._pages and pool_queued_async_..._reqs to determine the average prefetch request size.
pool_failed_async_temp_col_reqs - Failed column-organized temporary prefetch requests monitor element
The number of times an attempt to queue a column-organized prefetch request for temporary table spaces was made but failed. One possible reason is that the prefetch queue is full and a request could not be obtained from the free list.
Event type | Logical data grouping | Monitor switch |
---|---|---|
Activities | event_activity (reported in the details_xml document), event_activitymetrics | ACTIVITY METRICS BASE |
Package Cache | pkgcache (reported in the metrics.xml document), pkgcache_metrics | ACTIVITY METRICS BASE |
Unit of Work | uow (reported in the metrics.xml document), uow_metrics | REQUEST METRICS BASE |
Statistics | event_scstats (reported in the metrics document) | REQUEST METRICS BASE |
Statistics | event_wlstats (reported in the metrics document) | REQUEST METRICS BASE |
Usage
To calculate the percentage of prefetch requests that were successfully added to the prefetch queue, use the following formula:
( 1 -
  ( pool_failed_async_col_reqs +
    pool_failed_async_data_reqs +
    pool_failed_async_index_reqs +
    pool_failed_async_xda_reqs +
    pool_failed_async_temp_col_reqs +
    pool_failed_async_temp_data_reqs +
    pool_failed_async_temp_index_reqs +
    pool_failed_async_temp_xda_reqs )
  ÷
  ( ( pool_failed_async_col_reqs +
      pool_failed_async_data_reqs +
      pool_failed_async_index_reqs +
      pool_failed_async_xda_reqs +
      pool_failed_async_temp_col_reqs +
      pool_failed_async_temp_data_reqs +
      pool_failed_async_temp_index_reqs +
      pool_failed_async_temp_xda_reqs )
    +
    ( pool_queued_async_col_reqs +
      pool_queued_async_data_reqs +
      pool_queued_async_index_reqs +
      pool_queued_async_xda_reqs +
      pool_queued_async_temp_col_reqs +
      pool_queued_async_temp_data_reqs +
      pool_queued_async_temp_index_reqs +
      pool_queued_async_temp_xda_reqs )
  )
) × 100
This formula calculates the ratio of successful prefetch requests to the total number of requests made. A failed prefetch request is one that could not be added to the prefetch queue. Requests can fail to be added to the prefetch queue if there are many requests being created, or if the prefetcher is running too slowly due to poor configuration or poor tuning. If the percentage of successful requests is low, this can indicate a bottleneck in the prefetching mechanism. You might need to configure more prefetchers by modifying the value for the num_ioservers configuration parameter. The condition of prefetch queues being full can also be caused by agents submitting too many small requests; you can use the related monitor elements pool_queued_async_*_pages and pool_queued_async_*_reqs to determine the average prefetch request size.
pool_failed_async_other_reqs - Failed non-prefetch requests monitor element
The number of times an attempt to queue a non-prefetch request was made but failed. This element is for non-prefetch work done by prefetchers. One possible reason for the failed request is that the prefetch queue is full.
Event Type | Logical Data Grouping | Monitor Switch |
---|---|---|
Activities | event_activity (reported in the details_xml document), event_activitymetrics | ACTIVITY METRICS BASE |
Package Cache | pkgcache (reported in the metrics.xml document), pkgcache_metrics | ACTIVITY METRICS BASE |
Unit of Work | uow (reported in the metrics.xml document), uow_metrics | REQUEST METRICS BASE |
Statistics | event_scstats (reported in the metrics document) | REQUEST METRICS BASE |
Statistics | event_wlstats (reported in the metrics document) | REQUEST METRICS BASE |
Usage
This monitor element reports the number of requests for work not related to the prefetching dictated by an access plan that could not be added to the prefetch queue. Utilities like the backup utility use the prefetcher mechanism to perform their tasks, but in a way that is different from the way an access plan for an SQL statement does. A request might fail to be added to a prefetch queue because the queue is full.
pool_failed_async_temp_data_reqs - Failed data prefetch requests for temporary table spaces monitor element
The number of times an attempt to queue a data prefetch request for temporary table spaces was made but failed. One possible reason is that the prefetch queue is full and a request could not be obtained from the free list.
Event Type | Logical Data Grouping | Monitor Switch |
---|---|---|
Activities | event_activity (reported in the details_xml document), event_activitymetrics | ACTIVITY METRICS BASE |
Package Cache | pkgcache (reported in the metrics.xml document), pkgcache_metrics | ACTIVITY METRICS BASE |
Unit of Work | uow (reported in the metrics.xml document), uow_metrics | REQUEST METRICS BASE |
Statistics | event_scstats (reported in the metrics document) | REQUEST METRICS BASE |
Statistics | event_wlstats (reported in the metrics document) | REQUEST METRICS BASE |
Usage
This element, along with the other pool_failed_async_..._reqs elements, tells you how many prefetch requests could not be added to a prefetch queue. Requests can fail to be added to the prefetch queue if the prefetch queue is too small, or if the prefetcher is running too slowly. When requests cannot be added to a prefetch queue, a database agent typically performs disk I/O synchronously, which is less efficient than prefetching. You can use these elements in conjunction with other prefetcher monitor elements to determine how effectively prefetching is done on your system. For example, you can calculate the percentage of requests that were successfully added to the prefetch queue using the following formula:
( 1 -
  ( POOL_FAILED_ASYNC_DATA_REQS +
    POOL_FAILED_ASYNC_INDEX_REQS +
    POOL_FAILED_ASYNC_XDA_REQS +
    POOL_FAILED_ASYNC_COL_REQS +
    POOL_FAILED_ASYNC_TEMP_DATA_REQS +
    POOL_FAILED_ASYNC_TEMP_INDEX_REQS +
    POOL_FAILED_ASYNC_TEMP_XDA_REQS +
    POOL_FAILED_ASYNC_TEMP_COL_REQS )
  ÷
  ( ( POOL_FAILED_ASYNC_DATA_REQS +
      POOL_FAILED_ASYNC_INDEX_REQS +
      POOL_FAILED_ASYNC_XDA_REQS +
      POOL_FAILED_ASYNC_COL_REQS +
      POOL_FAILED_ASYNC_TEMP_DATA_REQS +
      POOL_FAILED_ASYNC_TEMP_INDEX_REQS +
      POOL_FAILED_ASYNC_TEMP_XDA_REQS +
      POOL_FAILED_ASYNC_TEMP_COL_REQS )
    +
    ( POOL_QUEUED_ASYNC_DATA_REQS +
      POOL_QUEUED_ASYNC_INDEX_REQS +
      POOL_QUEUED_ASYNC_XDA_REQS +
      POOL_QUEUED_ASYNC_COL_REQS +
      POOL_QUEUED_ASYNC_TEMP_DATA_REQS +
      POOL_QUEUED_ASYNC_TEMP_INDEX_REQS +
      POOL_QUEUED_ASYNC_TEMP_XDA_REQS +
      POOL_QUEUED_ASYNC_TEMP_COL_REQS )
  )
) × 100
This formula calculates the ratio of successful prefetch requests to the total number of requests made. A failed prefetch request is one that could not be added to the prefetch queue. Requests can fail to be added to the prefetch queue if there are a large number of requests being created, or if the prefetcher is running too slowly due to poor configuration or poor tuning. If the percentage of successful requests is low, this can indicate a bottleneck in the prefetching mechanism. You might need to configure more prefetchers by modifying the value for the configuration parameter num_ioservers. The condition of prefetch queues being full can also be caused by agents submitting too many small requests; you can use the related monitor elements pool_queued_async_..._pages and pool_queued_async_..._reqs to determine the average prefetch request size.
pool_failed_async_temp_index_reqs - Failed index prefetch requests for temporary table spaces monitor element
The number of times an attempt to queue an index prefetch request for temporary table spaces was made but failed. One possible reason is that the prefetch queue is full and a request could not be obtained from the free list.
Event Type | Logical Data Grouping | Monitor Switch |
---|---|---|
Activities | event_activity (reported in the details_xml document), event_activitymetrics | ACTIVITY METRICS BASE |
Package Cache | pkgcache (reported in the metrics.xml document), pkgcache_metrics | ACTIVITY METRICS BASE |
Unit of Work | uow (reported in the metrics.xml document), uow_metrics | REQUEST METRICS BASE |
Statistics | event_scstats (reported in the metrics document) | REQUEST METRICS BASE |
Statistics | event_wlstats (reported in the metrics document) | REQUEST METRICS BASE |
Usage
This element, along with the other pool_failed_async_..._reqs elements tells you how many prefetch requests could not be added to a prefetch queue. Requests can fail to be added to the prefetch queue if the prefetch queue is too small, or if the prefetcher is running too slowly. When requests cannot be added to a prefetch queue, a database agent typically performs disk IO synchronously, which is less efficient than prefetching. You can use these elements in conjunction with other prefetcher monitor elements to determine how effectively prefetching is done on your system. For example, you can see what the percentage of requests were successfully added to the prefetch queue using a formula like the one that follows: 1 -
(
POOL_FAILED_ASYNC_DATA_REQS +
POOL_FAILED_ASYNC_INDEX_REQS +
POOL_FAILED_ASYNC_XDA_REQS +
POOL_FAILED_ASYNC_COL_REQS +
POOL_FAILED_ASYNC_TEMP_DATA_REQS +
POOL_FAILED_ASYNC_TEMP_INDEX_REQS +
POOL_FAILED_ASYNC_TEMP_XDA_REQS +
POOL_FAILED_ASYNC_TEMP_COL_REQS
)
÷
(
(
POOL_FAILED_ASYNC_DATA_REQS +
POOL_FAILED_ASYNC_INDEX_REQS +
POOL_FAILED_ASYNC_XDA_REQS +
POOL_FAILED_ASYNC_COL_REQS +
POOL_FAILED_ASYNC_TEMP_DATA_REQS +
POOL_FAILED_ASYNC_TEMP_INDEX_REQS +
POOL_FAILED_ASYNC_TEMP_XDA_REQS +
POOL_FAILED_ASYNC_TEMP_COL_REQS
)
+
(
POOL_QUEUED_ASYNC_DATA_REQS +
POOL_QUEUED_ASYNC_INDEX_REQS +
POOL_QUEUED_ASYNC_XDA_REQS +
POOL_QUEUED_ASYNC_COL_REQS +
POOL_QUEUED_ASYNC_TEMP_DATA_REQS +
POOL_QUEUED_ASYNC_TEMP_INDEX_REQS +
POOL_QUEUED_ASYNC_TEMP_XDA_REQS +
POOL_QUEUED_ASYNC_TEMP_COL_REQS
)
)
) × 100
This formula calculates the ratio of
successful prefetch requests to the total number of requests made.
A failed prefetch request is one that could not be added to the prefetch
queue. Requests can fail to be added to the prefetch queue if there
are a large number of requests being created, or if the prefetcher
is running too slowly due to poor configuration or poor tuning. If
the percentage of successful requests is low, this can indicate a
bottleneck in the prefetching mechanism. You might need to configure
more prefetchers by modifying the value for the configuration parameter num_ioservers.
The condition of prefetch queues being full can also be caused by
agents submitting too many small requests; you can use the related
monitor elements pool_queued_async_..._pages and pool_queued_async_..._reqs to
determine the average prefetch request size.
pool_failed_async_temp_xda_reqs - Failed XDA prefetch requests for temporary table spaces monitor element
The number of times an attempt to queue an XML storage object (XDA) data prefetch request for temporary table spaces was made but failed. One possible reason is that the prefetch queue is full and a request could not be obtained from the free list.
Event Type | Logical Data Grouping | Monitor Switch |
---|---|---|
Activities | event_activity (reported in details_xml document) event_activitymetrics | ACTIVITY METRICS BASE |
Package Cache | pkgcache (reported in the metrics.xml document) pkgcache_metrics | ACTIVITY METRICS BASE |
Unit of Work | uow (reported in the metrics.xml document) uow_metrics | REQUEST METRICS BASE |
Statistics | event_scstats (reported in the metrics document) | REQUEST METRICS BASE |
Statistics | event_wlstats (reported in the metrics document) | REQUEST METRICS BASE |
Usage
This element, along with the other pool_failed_async_..._reqs elements, tells you how many prefetch requests could not be added to a prefetch queue. Requests can fail to be added to the prefetch queue if the prefetch queue is too small, or if the prefetcher is running too slowly. When requests cannot be added to a prefetch queue, a database agent typically performs disk I/O synchronously, which is less efficient than prefetching. You can use these elements in conjunction with other prefetcher monitor elements to determine how effectively prefetching is done on your system. For example, you can calculate the percentage of requests that were successfully added to the prefetch queue using a formula like the following:
( 1 -
(
POOL_FAILED_ASYNC_DATA_REQS +
POOL_FAILED_ASYNC_INDEX_REQS +
POOL_FAILED_ASYNC_XDA_REQS +
POOL_FAILED_ASYNC_COL_REQS +
POOL_FAILED_ASYNC_TEMP_DATA_REQS +
POOL_FAILED_ASYNC_TEMP_INDEX_REQS +
POOL_FAILED_ASYNC_TEMP_XDA_REQS +
POOL_FAILED_ASYNC_TEMP_COL_REQS
)
÷
(
(
POOL_FAILED_ASYNC_DATA_REQS +
POOL_FAILED_ASYNC_INDEX_REQS +
POOL_FAILED_ASYNC_XDA_REQS +
POOL_FAILED_ASYNC_COL_REQS +
POOL_FAILED_ASYNC_TEMP_DATA_REQS +
POOL_FAILED_ASYNC_TEMP_INDEX_REQS +
POOL_FAILED_ASYNC_TEMP_XDA_REQS +
POOL_FAILED_ASYNC_TEMP_COL_REQS
)
+
(
POOL_QUEUED_ASYNC_DATA_REQS +
POOL_QUEUED_ASYNC_INDEX_REQS +
POOL_QUEUED_ASYNC_XDA_REQS +
POOL_QUEUED_ASYNC_COL_REQS +
POOL_QUEUED_ASYNC_TEMP_DATA_REQS +
POOL_QUEUED_ASYNC_TEMP_INDEX_REQS +
POOL_QUEUED_ASYNC_TEMP_XDA_REQS +
POOL_QUEUED_ASYNC_TEMP_COL_REQS
)
)
) × 100
This formula calculates the ratio of
successful prefetch requests to the total number of requests made.
A failed prefetch request is one that could not be added to the prefetch
queue. Requests can fail to be added to the prefetch queue if there
are a large number of requests being created, or if the prefetcher
is running too slowly due to poor configuration or poor tuning. If
the percentage of successful requests is low, this can indicate a
bottleneck in the prefetching mechanism. You might need to configure
more prefetchers by modifying the value for the configuration parameter num_ioservers.
The condition of prefetch queues being full can also be caused by
agents submitting too many small requests; you can use the related
monitor elements pool_queued_async_..._pages and pool_queued_async_..._reqs to
determine the average prefetch request size.
pool_failed_async_xda_reqs - Failed XDA prefetch requests monitor element
The number of times an attempt to queue an XML storage object (XDA) data prefetch request was made but failed. One possible reason is that the prefetch queue is full and a request could not be obtained from the free list.
Event Type | Logical Data Grouping | Monitor Switch |
---|---|---|
Activities | event_activity (reported in details_xml document) event_activitymetrics | ACTIVITY METRICS BASE |
Package Cache | pkgcache (reported in the metrics.xml document) pkgcache_metrics | ACTIVITY METRICS BASE |
Unit of Work | uow (reported in the metrics.xml document) uow_metrics | REQUEST METRICS BASE |
Statistics | event_scstats (reported in the metrics document) | REQUEST METRICS BASE |
Statistics | event_wlstats (reported in the metrics document) | REQUEST METRICS BASE |
Usage
This element, along with the other pool_failed_async_..._reqs elements, tells you how many prefetch requests could not be added to a prefetch queue. Requests can fail to be added to the prefetch queue if the prefetch queue is too small, or if the prefetcher is running too slowly. When requests cannot be added to a prefetch queue, a database agent typically performs disk I/O synchronously, which is less efficient than prefetching. You can use these elements in conjunction with other prefetcher monitor elements to determine how effectively prefetching is done on your system. For example, you can calculate the percentage of requests that were successfully added to the prefetch queue using a formula like the following:
( 1 -
(
POOL_FAILED_ASYNC_DATA_REQS +
POOL_FAILED_ASYNC_INDEX_REQS +
POOL_FAILED_ASYNC_XDA_REQS +
POOL_FAILED_ASYNC_COL_REQS +
POOL_FAILED_ASYNC_TEMP_DATA_REQS +
POOL_FAILED_ASYNC_TEMP_INDEX_REQS +
POOL_FAILED_ASYNC_TEMP_XDA_REQS +
POOL_FAILED_ASYNC_TEMP_COL_REQS
)
÷
(
(
POOL_FAILED_ASYNC_DATA_REQS +
POOL_FAILED_ASYNC_INDEX_REQS +
POOL_FAILED_ASYNC_XDA_REQS +
POOL_FAILED_ASYNC_COL_REQS +
POOL_FAILED_ASYNC_TEMP_DATA_REQS +
POOL_FAILED_ASYNC_TEMP_INDEX_REQS +
POOL_FAILED_ASYNC_TEMP_XDA_REQS +
POOL_FAILED_ASYNC_TEMP_COL_REQS
)
+
(
POOL_QUEUED_ASYNC_DATA_REQS +
POOL_QUEUED_ASYNC_INDEX_REQS +
POOL_QUEUED_ASYNC_XDA_REQS +
POOL_QUEUED_ASYNC_COL_REQS +
POOL_QUEUED_ASYNC_TEMP_DATA_REQS +
POOL_QUEUED_ASYNC_TEMP_INDEX_REQS +
POOL_QUEUED_ASYNC_TEMP_XDA_REQS +
POOL_QUEUED_ASYNC_TEMP_COL_REQS
)
)
) × 100
This formula calculates the ratio of
successful prefetch requests to the total number of requests made.
A failed prefetch request is one that could not be added to the prefetch
queue. Requests can fail to be added to the prefetch queue if there
are a large number of requests being created, or if the prefetcher
is running too slowly due to poor configuration or poor tuning. If
the percentage of successful requests is low, this can indicate a
bottleneck in the prefetching mechanism. You might need to configure
more prefetchers by modifying the value for the configuration parameter num_ioservers.
The condition of prefetch queues being full can also be caused by
agents submitting too many small requests; you can use the related
monitor elements pool_queued_async_..._pages and pool_queued_async_..._reqs to
determine the average prefetch request size.
pool_id - Memory Pool Identifier monitor element
The type of memory pool.
Snapshot Level | Logical Data Grouping | Monitor Switch |
---|---|---|
Database Manager | memory_pool | Basic |
Database | memory_pool | Basic |
Application | memory_pool | Basic |
Event Type | Logical Data Grouping | Monitor Switch |
---|---|---|
Database | event_dbmemuse | - |
Connection | event_connmemuse | - |
Usage
To track system memory usage, use this value in conjunction with pool_config_size, pool_cur_size, and pool_watermark.
API Constant | Description |
---|---|
SQLM_HEAP_APPLICATION | Application Heap |
SQLM_HEAP_DATABASE | Database Heap |
SQLM_HEAP_LOCK_MGR | Lock Manager Heap |
SQLM_HEAP_UTILITY | Backup/Restore/Utility Heap |
SQLM_HEAP_STATISTICS | Statistics Heap |
SQLM_HEAP_PACKAGE_CACHE | Package Cache Heap |
SQLM_HEAP_CAT_CACHE | Catalog Cache Heap |
SQLM_HEAP_MONITOR | Database Monitor Heap |
SQLM_HEAP_STATEMENT | Statement Heap |
SQLM_HEAP_FCMBP | FCMBP Heap |
SQLM_HEAP_IMPORT_POOL | Import Pool |
SQLM_HEAP_OTHER | Other Memory |
SQLM_HEAP_BP | Buffer Pool Heap |
SQLM_HEAP_APPL_SHARED | Applications Shared Heap |
SQLM_HEAP_SHARED_SORT | Sort Shared Heap |
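If you prefer SQL access over snapshots, a rough equivalent view of memory pool usage can be obtained with the MON_GET_MEMORY_POOL table function; the sketch below assumes that function and its MEMORY_POOL_TYPE, MEMORY_POOL_USED, and MEMORY_POOL_USED_HWM columns are available in your release:
SELECT memory_pool_type,
       memory_pool_used,       -- current pool usage, reported in KB
       memory_pool_used_hwm    -- high-water mark, reported in KB
FROM TABLE(MON_GET_MEMORY_POOL('DATABASE', CURRENT SERVER, -2)) AS t
ORDER BY memory_pool_used DESC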
pool_index_gbp_indep_pages_found_in_lbp - Group buffer pool independent index pages found in local buffer pool monitor element
The number of group buffer pool (GBP) independent index pages found in a local buffer pool (LBP) by an agent. Outside of a DB2 pureScale environment, this value is null.
Event Type | Logical Data Grouping | Monitor Switch |
---|---|---|
Activities | event_activitymetrics | ACTIVITY METRICS BASE |
Package Cache | pkgcache_metrics | ACTIVITY METRICS BASE |
Statistics | event_scmetrics event_wlmetrics | REQUEST METRICS BASE |
Unit of Work | uow_metrics | REQUEST METRICS BASE |
pool_index_gbp_invalid_pages - Group buffer pool invalid index pages monitor element
The number of attempts to read an index page from the group buffer pool because the page was invalid in the local buffer pool. Outside of a DB2 pureScale environment, this value is null.
Event Type | Logical Data Grouping | Monitor Switch |
---|---|---|
Activities | event_activity (reported in the details_xml document) | ACTIVITY METRICS BASE |
Activities | event_activitymetrics | ACTIVITY METRICS BASE |
Statistics | event_scstats (reported in the metrics document) | REQUEST METRICS BASE |
Statistics | event_wlstats (reported in the metrics document) | REQUEST METRICS BASE |
Unit of work | uow_metrics | REQUEST METRICS BASE |
Package cache | pkgcache_metrics | ACTIVITY METRICS BASE |
Usage
To determine how many times a requested index page was found in the local buffer pool, use the following formula:
(POOL_INDEX_LBP_PAGES_FOUND - POOL_ASYNC_INDEX_LBP_PAGES_FOUND) / POOL_INDEX_L_READS
To determine how many times a requested index page was found in the group buffer pool, use the following formula:
(POOL_INDEX_GBP_L_READS - POOL_INDEX_GBP_P_READS) / POOL_INDEX_GBP_L_READS
Local buffer pool and group buffer pool hit rates are both important factors in the overall performance of the cluster caching facility. Using these formulas can help you determine whether the local or group buffer pool may be a limiting factor on your database's throughput.
pool_index_gbp_l_reads - Group buffer pool index logical reads monitor element
The number of attempts to read a group buffer pool (GBP) dependent index page from the group buffer pool because the page was either invalid or not present in the local buffer pool. Outside of a DB2 pureScale environment, this value is null.
Event Type | Logical Data Grouping | Monitor Switch |
---|---|---|
Activities | event_activity (reported in the details_xml document) | ACTIVITY METRICS BASE |
Activities | event_activitymetrics | ACTIVITY METRICS BASE |
Statistics | event_scstats (reported in the metrics document) | REQUEST METRICS BASE |
Statistics | event_wlstats (reported in the metrics document) | REQUEST METRICS BASE |
Unit of work | uow_metrics | REQUEST METRICS BASE |
Package cache | pkgcache_metrics | ACTIVITY METRICS BASE |
Usage
To determine how many times a requested index page was found in the local buffer pool, use the following formula:
(POOL_INDEX_LBP_PAGES_FOUND - POOL_ASYNC_INDEX_LBP_PAGES_FOUND) / POOL_INDEX_L_READS
To determine how many times a requested index page was found in the group buffer pool, use the following formula:
(POOL_INDEX_GBP_L_READS - POOL_INDEX_GBP_P_READS) / POOL_INDEX_GBP_L_READS
Local buffer pool and group buffer pool hit rates are both important factors in the overall performance of the cluster caching facility. Using these formulas can help you determine whether the local or group buffer pool may be a limiting factor on your database's throughput.
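As an illustration, both hit rates can be computed side by side; this is a sketch based on the MON_GET_BUFFERPOOL table function, whose GBP counters are NULL outside of a DB2 pureScale environment:
SELECT bp_name,
       -- local buffer pool index hit rate
       DECIMAL(DOUBLE(pool_index_lbp_pages_found - pool_async_index_lbp_pages_found)
               / NULLIF(pool_index_l_reads, 0) * 100, 5, 2) AS lbp_index_hit_pct,
       -- group buffer pool index hit rate (NULL outside a DB2 pureScale environment)
       DECIMAL(DOUBLE(pool_index_gbp_l_reads - pool_index_gbp_p_reads)
               / NULLIF(pool_index_gbp_l_reads, 0) * 100, 5, 2) AS gbp_index_hit_pct
FROM TABLE(MON_GET_BUFFERPOOL(NULL, -2)) AS t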
pool_index_gbp_p_reads - Group buffer pool index physical reads monitor element
The number of times a Group Buffer Pool (GBP) dependent index page was read into the local buffer pool from disk because it was not found in the GBP. Outside of a DB2 pureScale environment, this value is null.
Event Type | Logical Data Grouping | Monitor Switch |
---|---|---|
Activities | event_activity (reported in the details_xml document) | ACTIVITY METRICS BASE |
Activities | event_activitymetrics | ACTIVITY METRICS BASE |
Statistics | event_scstats (reported in the metrics document) | REQUEST METRICS BASE |
Statistics | event_wlstats (reported in the metrics document) | REQUEST METRICS BASE |
Unit of work | uow_metrics | REQUEST METRICS BASE |
Package cache | pkgcache_metrics | ACTIVITY METRICS BASE |
Usage
To determine how many times a requested index page was found in the local buffer pool, use the following formula:
(POOL_INDEX_LBP_PAGES_FOUND - POOL_ASYNC_INDEX_LBP_PAGES_FOUND) / POOL_INDEX_L_READS
To determine how many times a requested index page was found in the group buffer pool, use the following formula:
(POOL_INDEX_GBP_L_READS - POOL_INDEX_GBP_P_READS) / POOL_INDEX_GBP_L_READS
Local buffer pool and group buffer pool hit rates are both important factors in the overall performance of the cluster caching facility. Using these formulas can help you determine whether the local or group buffer pool may be a limiting factor on your database's throughput.
pool_index_lbp_pages_found - Local buffer pool index pages found monitor element
The number of times an index page was present in the local buffer pool.
Event Type | Logical Data Grouping | Monitor Switch |
---|---|---|
Activities | event_activity (reported in the details_xml document) | ACTIVITY METRICS BASE |
Activities | event_activitymetrics | ACTIVITY METRICS BASE |
Statistics | event_scstats (reported in the metrics document) | REQUEST METRICS BASE |
Statistics | event_wlstats (reported in the metrics document) | REQUEST METRICS BASE |
Unit of work | uow_metrics | REQUEST METRICS BASE |
Package cache | pkgcache_metrics | ACTIVITY METRICS BASE |
Usage
To determine how many times a requested index page was found in the local buffer pool, use the following formula:
(POOL_INDEX_LBP_PAGES_FOUND - POOL_ASYNC_INDEX_LBP_PAGES_FOUND) / POOL_INDEX_L_READS
To determine how many times a requested index page was found in the group buffer pool, use the following formula:
(POOL_INDEX_GBP_L_READS - POOL_INDEX_GBP_P_READS) / POOL_INDEX_GBP_L_READS
Local buffer pool and group buffer pool hit rates are both important factors in the overall performance of the cluster caching facility. Using these formulas can help you determine whether the local or group buffer pool may be a limiting factor on your database's throughput.
pool_index_l_reads - Buffer pool index logical reads monitor element
Indicates the number of index pages which have been requested from the buffer pool (logical) for regular and large table spaces.
Snapshot Level | Logical Data Grouping | Monitor Switch |
---|---|---|
Database | dbase | Buffer Pool |
Table Space | tablespace | Buffer Pool |
Buffer Pool | bufferpool | Buffer Pool |
Application | appl | Buffer Pool |
Application | stmt | Buffer Pool |
Dynamic SQL | dynsql | Buffer Pool, Statement |
Event Type | Logical Data Grouping | Monitor Switch |
---|---|---|
Activities | event_activity (reported in the details_xml document) | ACTIVITY METRICS BASE |
Activities | event_activitymetrics | ACTIVITY METRICS BASE |
Statistics | event_scstats (reported in the metrics document) | REQUEST METRICS BASE |
Statistics | event_wlstats (reported in the metrics document) | REQUEST METRICS BASE |
Unit of work | Reported in the system_metrics document. | REQUEST METRICS BASE |
Database | event_db | Always collected |
Tablespaces | event_tablespace | Always collected |
Connection | event_conn | Always collected |
Statement | event_stmt | Always collected |
Activities | event_activity | Buffer Pool, Statement |
Package cache | Reported in the activity_metrics document. | ACTIVITY METRICS BASE |
Usage
An index logical read is counted each time an index page is requested; the page may be either:
- Already in the buffer pool when the database manager needs to process the page
- Read into the buffer pool before the database manager can process the page
To calculate the index page hit ratio for the buffer pool, use the following formula:
((pool_index_lbp_pages_found - pool_async_index_lbp_pages_found) / (pool_index_l_reads + pool_temp_index_l_reads)) × 100
For more information, see Formulas for calculating buffer pool hit ratios. If the hit ratio is low, increasing the number of buffer pool pages may improve performance.
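For example, the following query sketch evaluates this hit ratio for each buffer pool, assuming the counters are read from the MON_GET_BUFFERPOOL table function:
SELECT bp_name,
       -- ((found - async found) / (logical reads + temp logical reads)) x 100
       DECIMAL(DOUBLE(pool_index_lbp_pages_found - pool_async_index_lbp_pages_found)
               / NULLIF(pool_index_l_reads + pool_temp_index_l_reads, 0) * 100, 5, 2)
         AS index_hit_ratio_pct
FROM TABLE(MON_GET_BUFFERPOOL(NULL, -2)) AS t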
pool_index_p_reads - Buffer pool index physical reads monitor element
Indicates the number of index pages read in from the table space containers (physical) for regular and large table spaces.
Snapshot Level | Logical Data Grouping | Monitor Switch |
---|---|---|
Database | dbase | Buffer Pool |
Table Space | tablespace | Buffer Pool |
Buffer Pool | bufferpool | Buffer Pool |
Application | appl | Buffer Pool |
Application | stmt | Buffer Pool |
Dynamic SQL | dynsql | Buffer Pool, Statement |
Event Type | Logical Data Grouping | Monitor Switch |
---|---|---|
Activities | event_activity (reported in the details_xml document) | ACTIVITY METRICS BASE |
Activities | event_activitymetrics | ACTIVITY METRICS BASE |
Statistics | event_scstats (reported in the metrics document) | REQUEST METRICS BASE |
Statistics | event_wlstats (reported in the metrics document) | REQUEST METRICS BASE |
Unit of work | Reported in the system_metrics document. | REQUEST METRICS BASE |
Database | event_db | Always collected |
Tablespaces | event_tablespace | Always collected |
Connection | event_conn | Always collected |
Statement | event_stmt | Always collected |
Activities | event_activity | Buffer Pool, Statement |
Package cache | Reported in the activity_metrics document. | ACTIVITY METRICS BASE |
Usage
Use this element, together with the pool_index_l_reads element, to calculate the index page hit ratio for the buffer pool, using the following formula:
((pool_index_lbp_pages_found - pool_async_index_lbp_pages_found) / (pool_index_l_reads + pool_temp_index_l_reads)) × 100
For more information, see Formulas for calculating buffer pool hit ratios.
pool_index_writes - Buffer pool index writes monitor element
Indicates the number of times a buffer pool index page was physically written to disk.
Snapshot Level | Logical Data Grouping | Monitor Switch |
---|---|---|
Database | dbase | Buffer Pool |
Table Space | tablespace | Buffer Pool |
Buffer Pool | bufferpool | Buffer Pool |
Application | appl | Buffer Pool |
Event Type | Logical Data Grouping | Monitor Switch |
---|---|---|
Activities | event_activity (reported in the details_xml document) | ACTIVITY METRICS BASE |
Activities | event_activitymetrics | ACTIVITY METRICS BASE |
Statistics | event_scstats (reported in the metrics document) | REQUEST METRICS BASE |
Statistics | event_wlstats (reported in the metrics document) | REQUEST METRICS BASE |
Unit of work | Reported in the system_metrics document. | REQUEST METRICS BASE |
Database | event_db | Always collected |
Tablespaces | event_tablespace | Always collected |
Connection | event_conn | Always collected |
Package cache | Reported in the activity_metrics document. | ACTIVITY METRICS BASE |
Usage
An index page is written to disk for the following reasons:
- To free a page in the buffer pool so another page can be read
- To flush the buffer pool
The system does not always write a page to make room for a new one. If the page has not been updated, it can simply be replaced. This replacement is not counted for this element.
The index page can be written by an asynchronous page-cleaner agent before the buffer pool space is required. These asynchronous index page writes are included in the value of this element in addition to synchronous index page writes (see the pool_async_index_writes monitor element).
If a buffer pool index page is written to disk for a high percentage of the value of the pool_index_p_reads monitor element, you may be able to improve performance by increasing the number of buffer pool pages available for the database.
To determine the number of index page writes that your application causes:
- Run your application (to load the buffer).
- Note the value of this element.
- Run your application again.
- Subtract the value recorded in step 2 from the new value of this element.
To ensure the buffer pool is not emptied between the two runs, either:
- Activate the database with the ACTIVATE DATABASE command.
- Have an idle application connected to the database.
If all applications are updating the database, increasing the size of the buffer pool may not have much impact on performance, since most of the pages contain updated data which must be written to disk.
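As a quick check of the relationship between index writes and index physical reads described above, you could run a sketch like the following against the MON_GET_BUFFERPOOL table function:
SELECT bp_name,
       pool_index_writes,
       pool_index_p_reads,
       -- index writes as a percentage of index physical reads
       DECIMAL(DOUBLE(pool_index_writes) / NULLIF(pool_index_p_reads, 0) * 100, 5, 2)
         AS index_writes_pct_of_p_reads
FROM TABLE(MON_GET_BUFFERPOOL(NULL, -2)) AS t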
pool_lsn_gap_clns - Buffer pool log space cleaners triggered monitor element
The number of times a page cleaner was invoked because the logging space used had reached a predefined criterion for the database.
Table Function | Monitor Element Collection Command and Level |
---|---|
MON_GET_BUFFERPOOL table function - Get buffer pool metrics | DATA OBJECT METRICS BASE |
MON_GET_DATABASE table function - Get database level information | DATA OBJECT METRICS BASE |
MON_GET_DATABASE_DETAILS table function - Get database information metrics | DATA OBJECT METRICS BASE |
Snapshot Level | Logical Data Grouping | Monitor Switch |
---|---|---|
Database | dbase | Buffer Pool |
Event Type | Logical Data Grouping | Monitor Switch |
---|---|---|
Database | event_db | Always collected |
Usage
This element can be used to help evaluate whether you have enough space for logging, and whether you need more log files or larger log files.
The page cleaning criterion is determined by the setting for the page_age_trgt_mcr configuration parameter. Page cleaners are triggered when the age of the oldest page in the buffer pool exceeds that configured time.
When the DB2_USE_ALTERNATE_PAGE_CLEANING registry variable is OFF:
- The pool_lsn_gap_clns monitor element is inserted into the monitor stream.
- Page cleaners are triggered if the oldest page in the buffer pool contains an update described by a log record that is older than the current log position by the criterion value.
When the DB2_USE_ALTERNATE_PAGE_CLEANING registry variable is ON:
- The pool_lsn_gap_clns monitor element inserts 0 into the monitor stream.
- Page cleaners write pages proactively instead of waiting to be triggered by the criterion value.
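Because this element is reported by the MON_GET_BUFFERPOOL table function (see the table above), a quick per-buffer-pool check might look like this sketch:
SELECT bp_name, pool_lsn_gap_clns
FROM TABLE(MON_GET_BUFFERPOOL(NULL, -2)) AS t
ORDER BY pool_lsn_gap_clns DESC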
pool_no_victim_buffer - Buffer pool no victim buffers monitor element
Number of times an agent did not have a preselected victim buffer available.
Table Function | Monitor Element Collection Level |
---|---|
MON_GET_BUFFERPOOL table function - Get buffer pool metrics | DATA OBJECT METRICS BASE |
MON_GET_DATABASE table function - Get database level information | DATA OBJECT METRICS BASE |
MON_GET_DATABASE_DETAILS table function - Get database information metrics | DATA OBJECT METRICS BASE |
Snapshot Level | Logical Data Grouping | Monitor Switch |
---|---|---|
Database | dbase | Buffer Pool |
Tablespace | tablespace | Buffer Pool |
Buffer Pool | bufferpool | Buffer Pool |
Event Type | Logical Data Grouping | Monitor Switch |
---|---|---|
Database | event_db | Always collected |
Tablespace | event_tablespace | Always collected |
Usage
This element can be used to help evaluate whether you have enough page cleaners for a given buffer pool when using proactive page cleaning.
When the DB2_USE_ALTERNATE_PAGE_CLEANING registry variable is ON, the pool_no_victim_buffer element counts the number of times that an agent did not find a preselected victim buffer available for immediate use, and was forced to search the buffer pool for a suitable victim buffer.
If the value of pool_no_victim_buffer element is high relative to the number of logical reads in the buffer pool, then the DB2 database system is having difficulty ensuring that sufficient numbers of good victims are available for use. Increasing the number of page cleaners will increase the ability of DB2 to provide preselected victim buffers.
When the DB2_USE_ALTERNATE_PAGE_CLEANING registry variable is OFF, the pool_no_victim_buffer element has no predictive value, and can be safely ignored. In this configuration, the DB2 database system does not attempt to ensure that agents have preselected victim buffers available to them, so most accesses to the buffer pool will require that the agent search the buffer pool to find a victim buffer.
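To put the element in proportion to buffer pool activity, you can compare it to the number of logical reads; the following sketch uses data and index logical reads from the MON_GET_BUFFERPOOL table function as a rough denominator:
SELECT bp_name,
       pool_no_victim_buffer,
       -- no-victim events per 1000 data and index logical reads
       DECIMAL(DOUBLE(pool_no_victim_buffer) * 1000
               / NULLIF(pool_data_l_reads + pool_index_l_reads, 0), 10, 2)
         AS no_victim_per_1000_l_reads
FROM TABLE(MON_GET_BUFFERPOOL(NULL, -2)) AS t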
pool_queued_async_col_pages - Column-organized page prefetch requests monitor element
The number of column-organized pages successfully requested for prefetching.
Event type | Logical data grouping | Monitor switch |
---|---|---|
Activities | event_activity (reported in the details_xml document) event_activitymetrics | ACTIVITY METRICS BASE |
Package Cache | pkgcache (reported in the metrics.xml document) pkgcache_metrics | ACTIVITY METRICS BASE |
Unit of Work | uow (reported in the metrics.xml document) uow_metrics | REQUEST METRICS BASE |
Statistics | event_scstats (reported in the metrics document) | REQUEST METRICS BASE |
Statistics | event_wlstats (reported in the metrics document) | REQUEST METRICS BASE |
pool_queued_async_col_reqs - Column-organized prefetch requests monitor element
The number of column-organized prefetch requests successfully added to the prefetch queue.
Event type | Logical data grouping | Monitor switch |
---|---|---|
Activities | event_activitymetrics | ACTIVITY METRICS BASE |
Package Cache | pkgcache_metrics | ACTIVITY METRICS BASE |
Unit of Work | uow_metrics | REQUEST METRICS BASE |
Statistics | event_scmetrics | REQUEST METRICS BASE |
Statistics | event_wlmetrics | REQUEST METRICS BASE |
pool_queued_async_data_pages - Data pages prefetch requests monitor element
The number of data pages successfully requested for prefetching.
Event Type | Logical Data Grouping | Monitor Switch |
---|---|---|
Activities | event_activity (reported in details_xml document) event_activitymetrics | ACTIVITY METRICS BASE |
Package Cache | pkgcache (reported in the metrics.xml document) pkgcache_metrics | ACTIVITY METRICS BASE |
Unit of Work | uow (reported in the metrics.xml document) uow_metrics | REQUEST METRICS BASE |
Statistics | event_scstats (reported in the metrics document) | REQUEST METRICS BASE |
Statistics | event_wlstats (reported in the metrics document) | REQUEST METRICS BASE |
Usage
This monitor element, along with the other pool_queued_async_..._pages elements, tells you how many pages of data were retrieved by prefetch requests. You can use this information to determine whether prefetch requests are being performed efficiently on your system. For example, you can calculate the average number of pages per prefetch request using a formula like the following:
(POOL_QUEUED_ASYNC_DATA_PAGES +
POOL_QUEUED_ASYNC_INDEX_PAGES +
POOL_QUEUED_ASYNC_XDA_PAGES +
POOL_QUEUED_ASYNC_COL_PAGES +
POOL_QUEUED_ASYNC_TEMP_DATA_PAGES +
POOL_QUEUED_ASYNC_TEMP_INDEX_PAGES +
POOL_QUEUED_ASYNC_TEMP_XDA_PAGES +
POOL_QUEUED_ASYNC_TEMP_COL_PAGES)
÷
(POOL_QUEUED_ASYNC_DATA_REQS +
POOL_QUEUED_ASYNC_INDEX_REQS +
POOL_QUEUED_ASYNC_XDA_REQS +
POOL_QUEUED_ASYNC_COL_REQS +
POOL_QUEUED_ASYNC_TEMP_DATA_REQS +
POOL_QUEUED_ASYNC_TEMP_INDEX_REQS +
POOL_QUEUED_ASYNC_TEMP_XDA_REQS +
POOL_QUEUED_ASYNC_TEMP_COL_REQS)
If the average number of pages per request is low, and there is a significant amount of prefetching on the system, then your system might be performing more I/O operations than necessary. In general, request size is based on prefetch size, which should be at least as large as extent size. So a small average request size might indicate that prefetch size is set too low, and that increasing prefetch size to a multiple of extent size may improve performance. Also note that a small average request size may mean that the prefetch queues fill up too quickly, so it is worthwhile to also monitor the associated pool_failed_async_..._reqs monitor elements.
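The following query sketch computes this average per buffer pool, again assuming the counters are read from the MON_GET_BUFFERPOOL table function:
WITH pf AS (
  SELECT bp_name,
         pool_queued_async_data_pages + pool_queued_async_index_pages +
         pool_queued_async_xda_pages + pool_queued_async_col_pages +
         pool_queued_async_temp_data_pages + pool_queued_async_temp_index_pages +
         pool_queued_async_temp_xda_pages + pool_queued_async_temp_col_pages AS pages,
         pool_queued_async_data_reqs + pool_queued_async_index_reqs +
         pool_queued_async_xda_reqs + pool_queued_async_col_reqs +
         pool_queued_async_temp_data_reqs + pool_queued_async_temp_index_reqs +
         pool_queued_async_temp_xda_reqs + pool_queued_async_temp_col_reqs AS reqs
  FROM TABLE(MON_GET_BUFFERPOOL(NULL, -2)) AS t
)
SELECT bp_name,
       DECIMAL(DOUBLE(pages) / NULLIF(reqs, 0), 10, 2) AS avg_pages_per_prefetch_req
FROM pf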
pool_queued_async_data_reqs - Data prefetch requests monitor element
The number of data prefetch requests successfully added to the prefetch queue.
Event Type | Logical Data Grouping | Monitor Switch |
---|---|---|
Activities | event_activity (reported in details_xml document) event_activitymetrics | ACTIVITY METRICS BASE |
Package Cache | pkgcache (reported in the metrics.xml document) pkgcache_metrics | ACTIVITY METRICS BASE |
Unit of Work | uow (reported in the metrics.xml document) uow_metrics | REQUEST METRICS BASE |
Statistics | event_scstats (reported in the metrics document) | REQUEST METRICS BASE |
Statistics | event_wlstats (reported in the metrics document) | REQUEST METRICS BASE |
Usage
This element, along with the other pool_queued_async_*_reqs elements, tells you how many prefetch requests have been added to the prefetch queue. You can use this information to see how often the database manager does prefetching. You can use these elements in conjunction with other prefetcher monitor elements to determine how effectively prefetching is done on your system. For example, you can calculate the percentage of requests that were successfully added to the prefetch queue using a formula like the following:
( 1 -
(
POOL_FAILED_ASYNC_DATA_REQS +
POOL_FAILED_ASYNC_INDEX_REQS +
POOL_FAILED_ASYNC_XDA_REQS +
POOL_FAILED_ASYNC_COL_REQS +
POOL_FAILED_ASYNC_TEMP_DATA_REQS +
POOL_FAILED_ASYNC_TEMP_INDEX_REQS +
POOL_FAILED_ASYNC_TEMP_XDA_REQS +
POOL_FAILED_ASYNC_TEMP_COL_REQS )
÷
(
(
POOL_FAILED_ASYNC_DATA_REQS +
POOL_FAILED_ASYNC_INDEX_REQS +
POOL_FAILED_ASYNC_XDA_REQS +
POOL_FAILED_ASYNC_COL_REQS +
POOL_FAILED_ASYNC_TEMP_DATA_REQS +
POOL_FAILED_ASYNC_TEMP_INDEX_REQS +
POOL_FAILED_ASYNC_TEMP_XDA_REQS +
POOL_FAILED_ASYNC_TEMP_COL_REQS )
+
(
POOL_QUEUED_ASYNC_DATA_REQS +
POOL_QUEUED_ASYNC_INDEX_REQS +
POOL_QUEUED_ASYNC_XDA_REQS +
POOL_QUEUED_ASYNC_COL_REQS +
POOL_QUEUED_ASYNC_TEMP_DATA_REQS +
POOL_QUEUED_ASYNC_TEMP_INDEX_REQS +
POOL_QUEUED_ASYNC_TEMP_XDA_REQS +
POOL_QUEUED_ASYNC_TEMP_COL_REQS )
)
) × 100
This formula calculates the percentage of prefetch requests that were successfully added to the prefetch queue. A failed prefetch request is one that could not be added to the prefetch queue. If the percentage is low, you might need to configure more prefetchers by modifying the num_ioservers configuration parameter.
pool_queued_async_index_pages - Index pages prefetch requests monitor element
The number of index pages successfully requested for prefetching.
Event Type | Logical Data Grouping | Monitor Switch |
---|---|---|
Activities | event_activity (reported in details_xml document) event_activitymetrics | ACTIVITY METRICS BASE |
Package Cache | pkgcache (reported in the metrics.xml document) pkgcache_metrics | ACTIVITY METRICS BASE |
Unit of Work | uow (reported in the metrics.xml document) uow_metrics | REQUEST METRICS BASE |
Statistics | event_scstats (reported in the metrics document) | REQUEST METRICS BASE |
Statistics | event_wlstats (reported in the metrics document) | REQUEST METRICS BASE |
Usage
This monitor element, along with the other pool_queued_async_..._pages elements, tells you how many pages of data were retrieved by prefetch requests. You can use this information to determine whether prefetch requests are being performed efficiently on your system. For example, you can calculate the average number of pages per prefetch request using a formula like the following:
(POOL_QUEUED_ASYNC_DATA_PAGES +
POOL_QUEUED_ASYNC_INDEX_PAGES +
POOL_QUEUED_ASYNC_XDA_PAGES +
POOL_QUEUED_ASYNC_COL_PAGES +
POOL_QUEUED_ASYNC_TEMP_DATA_PAGES +
POOL_QUEUED_ASYNC_TEMP_INDEX_PAGES +
POOL_QUEUED_ASYNC_TEMP_XDA_PAGES +
POOL_QUEUED_ASYNC_TEMP_COL_PAGES)
÷
(POOL_QUEUED_ASYNC_DATA_REQS +
POOL_QUEUED_ASYNC_INDEX_REQS +
POOL_QUEUED_ASYNC_XDA_REQS +
POOL_QUEUED_ASYNC_COL_REQS +
POOL_QUEUED_ASYNC_TEMP_DATA_REQS +
POOL_QUEUED_ASYNC_TEMP_INDEX_REQS +
POOL_QUEUED_ASYNC_TEMP_XDA_REQS +
POOL_QUEUED_ASYNC_TEMP_COL_REQS)
If the average number
of pages per request is low, and there is a significant amount of
prefetching on the system, then your system might be performing more
IO operations than necessary. In general, request size is based on
prefetch size, which should be at least as large as extent size.
So a small average request size might indicate that prefetch size
is set too low, and that increasing prefetch size to a multiple of
extent size may improve performance. Also note that a small average
request size may mean that the prefetch queues fill up too quickly,
so it is worthwhile to also monitor the associated pool_failed_async_..._reqs monitor
elements.
pool_queued_async_index_reqs - Index prefetch requests monitor element
The number of index prefetch requests successfully added to the prefetch queue.
Event Type | Logical Data Grouping | Monitor Switch |
---|---|---|
Activities | event_activity (reported in details_xml document) event_activitymetrics | ACTIVITY METRICS BASE |
Package Cache | pkgcache (reported in the metrics.xml document) pkgcache_metrics | ACTIVITY METRICS BASE |
Unit of Work | uow (reported in the metrics.xml document) uow_metrics | REQUEST METRICS BASE |
Statistics | event_scstats (reported in the metrics document) | REQUEST METRICS BASE |
Statistics | event_wlstats (reported in the metrics document) | REQUEST METRICS BASE |
Usage
This element, along with the other pool_queued_async_*_reqs elements, tells you how many prefetch requests have been added to the prefetch queue. You can use this information to see how often the database manager does prefetching. You can use these elements in conjunction with other prefetcher monitor elements to determine how effectively prefetching is done on your system. For example, you can calculate the percentage of requests that were successfully added to the prefetch queue using a formula like the following:
( 1 -
(
POOL_FAILED_ASYNC_DATA_REQS +
POOL_FAILED_ASYNC_INDEX_REQS +
POOL_FAILED_ASYNC_XDA_REQS +
POOL_FAILED_ASYNC_COL_REQS +
POOL_FAILED_ASYNC_TEMP_DATA_REQS +
POOL_FAILED_ASYNC_TEMP_INDEX_REQS +
POOL_FAILED_ASYNC_TEMP_XDA_REQS +
POOL_FAILED_ASYNC_TEMP_COL_REQS )
÷
(
(
POOL_FAILED_ASYNC_DATA_REQS +
POOL_FAILED_ASYNC_INDEX_REQS +
POOL_FAILED_ASYNC_XDA_REQS +
POOL_FAILED_ASYNC_COL_REQS +
POOL_FAILED_ASYNC_TEMP_DATA_REQS +
POOL_FAILED_ASYNC_TEMP_INDEX_REQS +
POOL_FAILED_ASYNC_TEMP_XDA_REQS +
POOL_FAILED_ASYNC_TEMP_COL_REQS )
+
(
POOL_QUEUED_ASYNC_DATA_REQS +
POOL_QUEUED_ASYNC_INDEX_REQS +
POOL_QUEUED_ASYNC_XDA_REQS +
POOL_QUEUED_ASYNC_COL_REQS +
POOL_QUEUED_ASYNC_TEMP_DATA_REQS +
POOL_QUEUED_ASYNC_TEMP_INDEX_REQS +
POOL_QUEUED_ASYNC_TEMP_XDA_REQS +
POOL_QUEUED_ASYNC_TEMP_COL_REQS )
)
) × 100
This formula calculates the percentage of prefetch requests that were successfully added to the prefetch queue. A failed prefetch request is one that could not be added to the prefetch queue. If the percentage is low, you might need to configure more prefetchers by modifying the num_ioservers configuration parameter.
pool_queued_async_other_reqs - Other requests handled by prefetchers monitor element
The number of requests for non-prefetch work successfully added to the prefetch queue. This is for other work done by prefetchers.
Event Type | Logical Data Grouping | Monitor Switch |
---|---|---|
Activities | event_activity (reported in details_xml document) event_activitymetrics | ACTIVITY METRICS BASE |
Package Cache | pkgcache (reported in the metrics.xml document) pkgcache_metrics | ACTIVITY METRICS BASE |
Unit of Work | uow (reported in the metrics.xml document) uow_metrics | REQUEST METRICS BASE |
Statistics | event_scstats (reported in the metrics document) | REQUEST METRICS BASE |
Statistics | event_wlstats (reported in the metrics document) | REQUEST METRICS BASE |
Usage
This monitor element reports the number of requests added to the prefetch queue for I/O work that is not related to the prefetching dictated by an access plan. Utilities like the backup utility use the prefetcher mechanism to perform their tasks, but in a way that is different from the way an access plan for an SQL statement does.
pool_queued_async_temp_col_pages - Column-organized page temporary prefetch requests monitor element
The number of column-organized pages for temporary table spaces successfully requested for prefetching.
Event type | Logical data grouping | Monitor switch |
---|---|---|
Activities | event_activity (reported in the details_xml document) event_activitymetrics | ACTIVITY METRICS BASE |
Package Cache | pkgcache (reported in the metrics.xml document) pkgcache_metrics | ACTIVITY METRICS BASE |
Unit of Work | uow (reported in the metrics.xml document) uow_metrics | REQUEST METRICS BASE |
Statistics | event_scstats (reported in the metrics document) | REQUEST METRICS BASE |
Statistics | event_wlstats (reported in the metrics document) | REQUEST METRICS BASE |
Usage
To calculate the average number of pages per prefetch request, use the following formula:
(pool_queued_async_col_pages +
pool_queued_async_data_pages +
pool_queued_async_index_pages +
pool_queued_async_xda_pages +
pool_queued_async_temp_col_pages +
pool_queued_async_temp_data_pages +
pool_queued_async_temp_index_pages +
pool_queued_async_temp_xda_pages)
/
(pool_queued_async_col_reqs +
pool_queued_async_data_reqs +
pool_queued_async_index_reqs +
pool_queued_async_xda_reqs +
pool_queued_async_temp_col_reqs +
pool_queued_async_temp_data_reqs +
pool_queued_async_temp_index_reqs +
pool_queued_async_temp_xda_reqs)
If the average number
of pages per request is low, and there is a significant amount of
prefetching on the system, then your system might be performing more
IO operations than necessary. In general, request size is based on
prefetch size, which should be at least as large as extent size. So
a small average request size might indicate that prefetch size is
set too low, and that increasing prefetch size to a multiple of extent
size may improve performance. Also note that a small average request
size may mean that the prefetch queues fill up too quickly, so it
is worthwhile to also monitor the associated pool_failed_async_*_reqs monitor
elements.
pool_queued_async_temp_col_reqs - Column-organized temporary prefetch requests monitor element
The number of column-organized prefetch requests for temporary table spaces successfully added to the prefetch queue.
Event type | Logical data grouping | Monitor switch |
---|---|---|
Activities | event_activity (reported in the details_xml document) event_activitymetrics | ACTIVITY METRICS BASE |
Package Cache | pkgcache (reported in the metrics.xml document) pkgcache_metrics | ACTIVITY METRICS BASE |
Unit of Work | uow (reported in the metrics.xml document) uow_metrics | REQUEST METRICS BASE |
Statistics | event_scstats (reported in the metrics document) | REQUEST METRICS BASE |
Statistics | event_wlstats (reported in the metrics document) | REQUEST METRICS BASE |
pool_queued_async_temp_data_pages - Data pages prefetch requests for temporary table spaces monitor element
The number of data pages for temporary table spaces successfully requested for prefetching.
Event Type | Logical Data Grouping | Monitor Switch |
---|---|---|
Activities | event_activity (reported in details_xml document) event_activitymetrics | ACTIVITY METRICS BASE |
Package Cache | pkgcache (reported in the metrics.xml document) pkgcache_metrics | ACTIVITY METRICS BASE |
Unit of Work | uow (reported in the metrics.xml document) uow_metrics | REQUEST METRICS BASE |
Statistics | event_scstats (reported in the metrics document) | REQUEST METRICS BASE |
Statistics | event_wlstats (reported in the metrics document) | REQUEST METRICS BASE |
Usage
This monitor element, along with the other pool_queued_async_..._pages elements, tells you how many pages of data were retrieved by prefetch requests. You can use this information to determine whether prefetch requests are being performed efficiently on your system. For example, you can calculate the average number of pages per prefetch request using a formula like the following:
(POOL_QUEUED_ASYNC_DATA_PAGES +
POOL_QUEUED_ASYNC_INDEX_PAGES +
POOL_QUEUED_ASYNC_XDA_PAGES +
POOL_QUEUED_ASYNC_COL_PAGES +
POOL_QUEUED_ASYNC_TEMP_DATA_PAGES +
POOL_QUEUED_ASYNC_TEMP_INDEX_PAGES +
POOL_QUEUED_ASYNC_TEMP_XDA_PAGES +
POOL_QUEUED_ASYNC_TEMP_COL_PAGES)
÷
(POOL_QUEUED_ASYNC_DATA_REQS +
POOL_QUEUED_ASYNC_INDEX_REQS +
POOL_QUEUED_ASYNC_XDA_REQS +
POOL_QUEUED_ASYNC_COL_REQS +
POOL_QUEUED_ASYNC_TEMP_DATA_REQS +
POOL_QUEUED_ASYNC_TEMP_INDEX_REQS +
POOL_QUEUED_ASYNC_TEMP_XDA_REQS +
POOL_QUEUED_ASYNC_TEMP_COL_REQS)
If the average number
of pages per request is low, and there is a significant amount of
prefetching on the system, then your system might be performing more
IO operations than necessary. In general, request size is based on
prefetch size, which should be at least as large as extent size.
So a small average request size might indicate that prefetch size
is set too low, and that increasing prefetch size to a multiple of
extent size may improve performance. Also note that a small average
request size may mean that the prefetch queues fill up too quickly,
so it is worthwhile to also monitor the associated pool_failed_async_..._reqs monitor
elements.
pool_queued_async_temp_data_reqs - Data prefetch requests for temporary table spaces monitor element
The number of data prefetch requests for temporary table spaces successfully added to the prefetch queue.
Event Type | Logical Data Grouping | Monitor Switch |
---|---|---|
Activities | event_activity (reported in details_xml document) event_activitymetrics | ACTIVITY METRICS BASE |
Package Cache | pkgcache (reported in the metrics.xml document) pkgcache_metrics | ACTIVITY METRICS BASE |
Unit of Work | uow (reported in the metrics.xml document) uow_metrics | REQUEST METRICS BASE |
Statistics | event_scstats (reported in the metrics document) | REQUEST METRICS BASE |
Statistics | event_wlstats (reported in the metrics document) | REQUEST METRICS BASE |
Usage
This element, along with the other pool_queued_async_*_reqs elements, tells you how many prefetch requests have been added to the prefetch queue. You can use this information to see how often the database manager does prefetching. You can use these elements in conjunction with other prefetcher monitor elements to determine how effectively prefetching is done on your system. For example, you can calculate the percentage of requests that were successfully added to the prefetch queue using a formula like the following:
( 1 -
(
POOL_FAILED_ASYNC_DATA_REQS +
POOL_FAILED_ASYNC_INDEX_REQS +
POOL_FAILED_ASYNC_XDA_REQS +
POOL_FAILED_ASYNC_COL_REQS +
POOL_FAILED_ASYNC_TEMP_DATA_REQS +
POOL_FAILED_ASYNC_TEMP_INDEX_REQS +
POOL_FAILED_ASYNC_TEMP_XDA_REQS +
POOL_FAILED_ASYNC_TEMP_COL_REQS )
÷
(
(
POOL_FAILED_ASYNC_DATA_REQS +
POOL_FAILED_ASYNC_INDEX_REQS +
POOL_FAILED_ASYNC_XDA_REQS +
POOL_FAILED_ASYNC_COL_REQS +
POOL_FAILED_ASYNC_TEMP_DATA_REQS +
POOL_FAILED_ASYNC_TEMP_INDEX_REQS +
POOL_FAILED_ASYNC_TEMP_XDA_REQS +
POOL_FAILED_ASYNC_TEMP_COL_REQS )
+
(
POOL_QUEUED_ASYNC_DATA_REQS +
POOL_QUEUED_ASYNC_INDEX_REQS +
POOL_QUEUED_ASYNC_XDA_REQS +
POOL_QUEUED_ASYNC_COL_REQS +
POOL_QUEUED_ASYNC_TEMP_DATA_REQS +
POOL_QUEUED_ASYNC_TEMP_INDEX_REQS +
POOL_QUEUED_ASYNC_TEMP_XDA_REQS +
POOL_QUEUED_ASYNC_TEMP_COL_REQS )
)
) × 100
This formula calculates the percentage of prefetch requests that were successfully added to the prefetch queue. A failed prefetch request is one that could not be added to the prefetch queue. If the percentage is low, you might need to configure more prefetchers by modifying the num_ioservers configuration parameter.
pool_queued_async_temp_index_pages - Index pages prefetch requests for temporary table spaces monitor element
The number of index pages for temporary table spaces successfully requested for prefetching.
Event Type | Logical Data Grouping | Monitor Switch |
---|---|---|
Activities | event_activity (reported in details_xml document) event_activitymetrics | ACTIVITY METRICS BASE |
Package Cache | pkgcache (reported in the metrics.xml document) pkgcache_metrics | ACTIVITY METRICS BASE |
Unit of Work | uow (reported in the metrics.xml document) uow_metrics | REQUEST METRICS BASE |
Statistics | event_scstats (reported in the metrics document) | REQUEST METRICS BASE |
Statistics | event_wlstats (reported in the metrics document) | REQUEST METRICS BASE |
Usage
This monitor element, along with the other pool_queued_async_..._pages elements, tells you how many pages of data were retrieved by prefetch requests. You can use this information to determine whether prefetch requests are being performed efficiently on your system. For example, you can calculate the average number of pages per prefetch request using a formula like the following:
(POOL_QUEUED_ASYNC_DATA_PAGES +
POOL_QUEUED_ASYNC_INDEX_PAGES +
POOL_QUEUED_ASYNC_XDA_PAGES +
POOL_QUEUED_ASYNC_COL_PAGES +
POOL_QUEUED_ASYNC_TEMP_DATA_PAGES +
POOL_QUEUED_ASYNC_TEMP_INDEX_PAGES +
POOL_QUEUED_ASYNC_TEMP_XDA_PAGES +
POOL_QUEUED_ASYNC_TEMP_COL_PAGES)
÷
(POOL_QUEUED_ASYNC_DATA_REQS +
POOL_QUEUED_ASYNC_INDEX_REQS +
POOL_QUEUED_ASYNC_XDA_REQS +
POOL_QUEUED_ASYNC_COL_REQS +
POOL_QUEUED_ASYNC_TEMP_DATA_REQS +
POOL_QUEUED_ASYNC_TEMP_INDEX_REQS +
POOL_QUEUED_ASYNC_TEMP_XDA_REQS +
POOL_QUEUED_ASYNC_TEMP_COL_REQS)
If the average number
of pages per request is low, and there is a significant amount of
prefetching on the system, then your system might be performing more
IO operations than necessary. In general, request size is based on
prefetch size, which should be at least as large as extent size.
So a small average request size might indicate that prefetch size
is set too low, and that increasing prefetch size to a multiple of
extent size may improve performance. Also note that a small average
request size may mean that the prefetch queues fill up too quickly,
so it is worthwhile to also monitor the associated pool_failed_async_..._reqs monitor
elements.
pool_queued_async_temp_index_reqs - Index prefetch requests for temporary table spaces monitor element
The number of index prefetch requests for temporary table spaces successfully added to the prefetch queue.
Event Type | Logical Data Grouping | Monitor Switch |
---|---|---|
Activities | event_activity (reported in details_xml document) event_activitymetrics | ACTIVITY METRICS BASE |
Package Cache | pkgcache (reported in the metrics.xml document) pkgcache_metrics | ACTIVITY METRICS BASE |
Unit of Work | uow (reported in the metrics.xml document) uow_metrics | REQUEST METRICS BASE |
Statistics | event_scstats (reported in the metrics document) | REQUEST METRICS BASE |
Statistics | event_wlstats (reported in the metrics document) | REQUEST METRICS BASE |
Usage
This element, along with the other pool_queued_async_*_reqs elements, tells you how many prefetch requests have been added to the prefetch queue. You can use this information to see how often the database manager does prefetching. You can use these elements in conjunction with other prefetcher monitor elements to determine how effectively prefetching is done on your system. For example, you can calculate the percentage of requests that were successfully added to the prefetch queue using a formula like the following:
( 1 -
(
POOL_FAILED_ASYNC_DATA_REQS +
POOL_FAILED_ASYNC_INDEX_REQS +
POOL_FAILED_ASYNC_XDA_REQS +
POOL_FAILED_ASYNC_COL_REQS +
POOL_FAILED_ASYNC_TEMP_DATA_REQS +
POOL_FAILED_ASYNC_TEMP_INDEX_REQS +
POOL_FAILED_ASYNC_TEMP_XDA_REQS +
POOL_FAILED_ASYNC_TEMP_COL_REQS )
÷
(
(
POOL_FAILED_ASYNC_DATA_REQS +
POOL_FAILED_ASYNC_INDEX_REQS +
POOL_FAILED_ASYNC_XDA_REQS +
POOL_FAILED_ASYNC_COL_REQS +
POOL_FAILED_ASYNC_TEMP_DATA_REQS +
POOL_FAILED_ASYNC_TEMP_INDEX_REQS +
POOL_FAILED_ASYNC_TEMP_XDA_REQS +
POOL_FAILED_ASYNC_TEMP_COL_REQS )
+
(
POOL_QUEUED_ASYNC_DATA_REQS +
POOL_QUEUED_ASYNC_INDEX_REQS +
POOL_QUEUED_ASYNC_XDA_REQS +
POOL_QUEUED_ASYNC_COL_REQS +
POOL_QUEUED_ASYNC_TEMP_DATA_REQS +
POOL_QUEUED_ASYNC_TEMP_INDEX_REQS +
POOL_QUEUED_ASYNC_TEMP_XDA_REQS +
POOL_QUEUED_ASYNC_TEMP_COL_REQS )
)
) × 100
This formula calculates the percentage of prefetch requests that were successfully added to the prefetch queue. A failed prefetch request is one that could not be added to the prefetch queue. If the percentage is low, you might need to configure more prefetchers by modifying the num_ioservers configuration parameter.
pool_queued_async_temp_xda_pages - XDA data pages prefetch requests for temporary table spaces monitor element
The number of XML storage object (XDA) data pages for temporary table spaces successfully requested for prefetching.
Event Type | Logical Data Grouping | Monitor Switch |
---|---|---|
Activities | event_activity (reported in details_xml document) event_activitymetrics | ACTIVITY METRICS BASE |
Package Cache | pkgcache (reported in the metrics.xml document) pkgcache_metrics | ACTIVITY METRICS BASE |
Unit of Work | uow (reported in the metrics.xml document) uow_metrics | REQUEST METRICS BASE |
Statistics | event_scstats (reported in the metrics document) | REQUEST METRICS BASE |
Statistics | event_wlstats (reported in the metrics document) | REQUEST METRICS BASE |
Usage
This monitor element, along with the other pool_queued_async_..._pages elements, tells you how many pages of data were retrieved by prefetch requests. You can use this information to determine whether prefetch requests are being performed efficiently on your system. For example, you can calculate the average number of pages per prefetch request using a formula like the following:
(POOL_QUEUED_ASYNC_DATA_PAGES +
POOL_QUEUED_ASYNC_INDEX_PAGES +
POOL_QUEUED_ASYNC_XDA_PAGES +
POOL_QUEUED_ASYNC_COL_PAGES +
POOL_QUEUED_ASYNC_TEMP_DATA_PAGES +
POOL_QUEUED_ASYNC_TEMP_INDEX_PAGES +
POOL_QUEUED_ASYNC_TEMP_XDA_PAGES +
POOL_QUEUED_ASYNC_TEMP_COL_PAGES)
÷
(POOL_QUEUED_ASYNC_DATA_REQS +
POOL_QUEUED_ASYNC_INDEX_REQS +
POOL_QUEUED_ASYNC_XDA_REQS +
POOL_QUEUED_ASYNC_COL_REQS +
POOL_QUEUED_ASYNC_TEMP_DATA_REQS +
POOL_QUEUED_ASYNC_TEMP_INDEX_REQS +
POOL_QUEUED_ASYNC_TEMP_XDA_REQS +
POOL_QUEUED_ASYNC_TEMP_COL_REQS)
If the average number
of pages per request is low, and there is a significant amount of
prefetching on the system, then your system might be performing more
IO operations than necessary. In general, request size is based on
prefetch size, which should be at least as large as extent size.
So a small average request size might indicate that prefetch size
is set too low, and that increasing prefetch size to a multiple of
extent size may improve performance. Also note that a small average
request size may mean that the prefetch queues fill up too quickly,
so it is worthwhile to also monitor the associated pool_failed_async_..._reqs monitor
elements.
pool_queued_async_temp_xda_reqs - XDA data prefetch requests for temporary table spaces monitor element
The number of XML storage object (XDA) data prefetch requests for temporary table spaces successfully added to the prefetch queue.
Event Type | Logical Data Grouping | Monitor Switch |
---|---|---|
Activities | event_activity (reported in details_xml document) event_activitymetrics | ACTIVITY METRICS BASE |
Package Cache | pkgcache (reported in the metrics.xml document) pkgcache_metrics | ACTIVITY METRICS BASE |
Unit of Work | uow (reported in the metrics.xml document) uow_metrics | REQUEST METRICS BASE |
Statistics | event_scstats (reported in the metrics document) | REQUEST METRICS BASE |
Statistics | event_wlstats (reported in the metrics document) | REQUEST METRICS BASE |
Usage
This element, along with the other pool_queued_async_*_reqs elements, tells you how many prefetch requests have been added to the prefetch queue. You can use this information to see how often the database manager does prefetching. You can use these elements in conjunction with other prefetcher monitor elements to determine how effectively prefetching is done on your system. For example, you can calculate the percentage of requests that were successfully added to the prefetch queue using a formula like the following:
( 1 -
(
POOL_FAILED_ASYNC_DATA_REQS +
POOL_FAILED_ASYNC_INDEX_REQS +
POOL_FAILED_ASYNC_XDA_REQS +
POOL_FAILED_ASYNC_COL_REQS +
POOL_FAILED_ASYNC_TEMP_DATA_REQS +
POOL_FAILED_ASYNC_TEMP_INDEX_REQS +
POOL_FAILED_ASYNC_TEMP_XDA_REQS +
POOL_FAILED_ASYNC_TEMP_COL_REQS )
÷
(
(
POOL_FAILED_ASYNC_DATA_REQS +
POOL_FAILED_ASYNC_INDEX_REQS +
POOL_FAILED_ASYNC_XDA_REQS +
POOL_FAILED_ASYNC_COL_REQS +
POOL_FAILED_ASYNC_TEMP_DATA_REQS +
POOL_FAILED_ASYNC_TEMP_INDEX_REQS +
POOL_FAILED_ASYNC_TEMP_XDA_REQS +
POOL_FAILED_ASYNC_TEMP_COL_REQS )
+
(
POOL_QUEUED_ASYNC_DATA_REQS +
POOL_QUEUED_ASYNC_INDEX_REQS +
POOL_QUEUED_ASYNC_XDA_REQS +
POOL_QUEUED_ASYNC_COL_REQS +
POOL_QUEUED_ASYNC_TEMP_DATA_REQS +
POOL_QUEUED_ASYNC_TEMP_INDEX_REQS +
POOL_QUEUED_ASYNC_TEMP_XDA_REQS +
POOL_QUEUED_ASYNC_TEMP_COL_REQS )
)
) × 100
This formula calculates the percentage of prefetch requests that were successfully added to the prefetch queue. A failed prefetch request is one that could not be added to the prefetch queue. If the percentage is low, you might need to configure more prefetchers by modifying the num_ioservers configuration parameter.
pool_queued_async_xda_pages - XDA pages prefetch requests monitor element
The number of XML storage object (XDA) data pages successfully requested for prefetching.
Event Type | Logical Data Grouping | Monitor Switch |
---|---|---|
Activities | event_activity (reported in details_xml document) event_activitymetrics | ACTIVITY METRICS BASE |
Package Cache | pkgcache (reported in the metrics.xml document) pkgcache_metrics | ACTIVITY METRICS BASE |
Unit of Work | uow (reported in the metrics.xml document) uow_metrics | REQUEST METRICS BASE |
Statistics | event_scstats (reported in the metrics document) | REQUEST METRICS BASE |
Statistics | event_wlstats (reported in the metrics document) | REQUEST METRICS BASE |
Usage
This monitor element, along with the other pool_queued_async_..._pages elements, tells you how many pages of data were retrieved by prefetch requests. You can use this information to determine whether prefetch requests are being performed efficiently on your system. For example, you can calculate the average number of pages per prefetch request using a formula like the following:
(POOL_QUEUED_ASYNC_DATA_PAGES +
POOL_QUEUED_ASYNC_INDEX_PAGES +
POOL_QUEUED_ASYNC_XDA_PAGES +
POOL_QUEUED_ASYNC_COL_PAGES +
POOL_QUEUED_ASYNC_TEMP_DATA_PAGES +
POOL_QUEUED_ASYNC_TEMP_INDEX_PAGES +
POOL_QUEUED_ASYNC_TEMP_XDA_PAGES +
POOL_QUEUED_ASYNC_TEMP_COL_PAGES)
÷
(POOL_QUEUED_ASYNC_DATA_REQS +
POOL_QUEUED_ASYNC_INDEX_REQS +
POOL_QUEUED_ASYNC_XDA_REQS +
POOL_QUEUED_ASYNC_COL_REQS +
POOL_QUEUED_ASYNC_TEMP_DATA_REQS +
POOL_QUEUED_ASYNC_TEMP_INDEX_REQS +
POOL_QUEUED_ASYNC_TEMP_XDA_REQS +
POOL_QUEUED_ASYNC_TEMP_COL_REQS)
If the average number
of pages per request is low, and there is a significant amount of
prefetching on the system, then your system might be performing more
IO operations than necessary. In general, request size is based on
prefetch size, which should be at least as large as extent size.
So a small average request size might indicate that prefetch size
is set too low, and that increasing prefetch size to a multiple of
extent size may improve performance. Also note that a small average
request size may mean that the prefetch queues fill up too quickly,
so it is worthwhile to also monitor the associated pool_failed_async_..._reqs monitor
elements.
pool_queued_async_xda_reqs - XDA prefetch requests monitor element
The number of XML storage object (XDA) data prefetch requests successfully added to the prefetch queue.
Event Type | Logical Data Grouping | Monitor Switch |
---|---|---|
Activities | event_activity (reported in details_xml document) event_activitymetrics | ACTIVITY METRICS BASE |
Package Cache | pkgcache (reported in the metrics.xml document) pkgcache_metrics | ACTIVITY METRICS BASE |
Unit of Work | uow (reported in the metrics.xml document) uow_metrics | REQUEST METRICS BASE |
Statistics | event_scstats (reported in the metrics document) | REQUEST METRICS BASE |
Statistics | event_wlstats (reported in the metrics document) | REQUEST METRICS BASE |
Usage
This element, along with the other pool_queued_async_*_reqs elements, tells you how many prefetch requests have been added to the prefetch queue. You can use this information to see how often the database manager does prefetching. You can use these elements in conjunction with other prefetcher monitor elements to determine how effectively prefetching is done on your system. For example, you can calculate the percentage of requests that were successfully added to the prefetch queue using a formula like the following:
( 1 -
(
POOL_FAILED_ASYNC_DATA_REQS +
POOL_FAILED_ASYNC_INDEX_REQS +
POOL_FAILED_ASYNC_XDA_REQS +
POOL_FAILED_ASYNC_COL_REQS +
POOL_FAILED_ASYNC_TEMP_DATA_REQS +
POOL_FAILED_ASYNC_TEMP_INDEX_REQS +
POOL_FAILED_ASYNC_TEMP_XDA_REQS +
POOL_FAILED_ASYNC_TEMP_COL_REQS )
÷
(
(
POOL_FAILED_ASYNC_DATA_REQS +
POOL_FAILED_ASYNC_INDEX_REQS +
POOL_FAILED_ASYNC_XDA_REQS +
POOL_FAILED_ASYNC_COL_REQS +
POOL_FAILED_ASYNC_TEMP_DATA_REQS +
POOL_FAILED_ASYNC_TEMP_INDEX_REQS +
POOL_FAILED_ASYNC_TEMP_XDA_REQS +
POOL_FAILED_ASYNC_TEMP_COL_REQS )
+
(
POOL_QUEUED_ASYNC_DATA_REQS +
POOL_QUEUED_ASYNC_INDEX_REQS +
POOL_QUEUED_ASYNC_XDA_REQS +
POOL_QUEUED_ASYNC_COL_REQS +
POOL_QUEUED_ASYNC_TEMP_DATA_REQS +
POOL_QUEUED_ASYNC_TEMP_INDEX_REQS +
POOL_QUEUED_ASYNC_TEMP_XDA_REQS +
POOL_QUEUED_ASYNC_TEMP_COL_REQS )
) * 100
This formula subtracts the ratio of failed prefetch requests (requests that could not be added to the prefetch queue) from 1, yielding the percentage of requests that were successfully queued. If the percentage is low, you might need to configure more prefetchers by modifying the num_ioservers configuration parameter.
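The same percentage can be computed in SQL; this is a sketch only, assuming the failed and queued request elements are reported by MON_GET_BUFFERPOOL in your release:
WITH reqs (bp_name, failed, queued) AS (
  SELECT bp_name,
         pool_failed_async_data_reqs + pool_failed_async_index_reqs +
         pool_failed_async_xda_reqs + pool_failed_async_col_reqs +
         pool_failed_async_temp_data_reqs + pool_failed_async_temp_index_reqs +
         pool_failed_async_temp_xda_reqs + pool_failed_async_temp_col_reqs,
         pool_queued_async_data_reqs + pool_queued_async_index_reqs +
         pool_queued_async_xda_reqs + pool_queued_async_col_reqs +
         pool_queued_async_temp_data_reqs + pool_queued_async_temp_index_reqs +
         pool_queued_async_temp_xda_reqs + pool_queued_async_temp_col_reqs
  FROM TABLE(MON_GET_BUFFERPOOL(NULL, -2))
)
SELECT bp_name,
       -- percentage of prefetch requests successfully added to the queue
       (1.0 - failed / DOUBLE(NULLIF(failed + queued, 0))) * 100 AS pct_queued
FROM reqs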
pool_read_time - Total buffer pool physical read time monitor element
Indicates the total amount of time spent reading in data and index pages from the table space containers (physical) for all types of table spaces. This value is given in milliseconds.
Snapshot Level | Logical Data Grouping | Monitor Switch |
---|---|---|
Database | dbase | Buffer Pool |
Table Space | tablespace | Buffer Pool |
Buffer Pool | bufferpool | Buffer Pool |
Application | appl | Buffer Pool |
Event Type | Logical Data Grouping | Monitor Switch |
---|---|---|
Activities | event_activity (reported in the details_xml document) | ACTIVITY METRICS BASE |
Activities | event_activitymetrics | ACTIVITY METRICS BASE |
Statistics | event_scstats (reported in the metrics document) | REQUEST METRICS BASE |
Statistics | event_wlstats (reported in the metrics document) | REQUEST METRICS BASE |
Database | event_db | Always collected |
Tablespaces | event_tablespace | Always collected |
Connection | event_conn | Always collected |
Package cache | Reported in the activity_metrics document. | ACTIVITY METRICS BASE |
Usage
You can use this element with the pool_data_p_reads, pool_col_p_reads, pool_xda_p_reads, and pool_index_p_reads monitor elements to calculate the average page-read time. This average is important because it can indicate the presence of an I/O wait, which in turn can mean that you should move data to a different device.
At the database and table space levels, this element includes the value of the pool_async_read_time monitor element.
This element is also returned by the following interfaces:
- MON_GET_DATABASE
- MON_GET_DATABASE_DETAILS
- MON_GET_BUFFERPOOL
- MON_GET_TABLESPACE
- MON_GET_CONTAINER
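For illustration only, the following sketch computes an average physical read time per buffer pool; it assumes that pool_read_time and the physical-read counters are columns of MON_GET_BUFFERPOOL in your release (reads against temporary table spaces could be added to the denominator as well):
SELECT bp_name,
       -- average milliseconds per physical page read
       CAST(pool_read_time AS DOUBLE)
       / NULLIF(pool_data_p_reads + pool_index_p_reads +
                pool_xda_p_reads + pool_col_p_reads, 0) AS avg_read_time_ms
FROM TABLE(MON_GET_BUFFERPOOL(NULL, -2)) AS t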
pool_secondary_id - Memory Pool Secondary Identifier monitor element
An additional identifier to help determine the memory pool for which monitor data is returned.
- Element identifier
- pool_secondary_id
- Element type
- Information
Snapshot Level | Logical Data Grouping | Monitor Switch |
---|---|---|
Database Manager | memory_pool | Basic |
Database | memory_pool | Basic |
Application | memory_pool | Basic |
Event Type | Logical Data Grouping | Monitor Switch |
---|---|---|
Database | event_dbmemuse | Always collected |
Connection | event_connmemuse | Always collected |
- Usage
- Use this element together with pool_id to determine the memory pool for which monitor data is returned. Data for pool_secondary_id appears only when necessary; for example, it appears when the pool_id indicated is Buffer Pool Heap, to determine which buffer pool the monitor data relates to.
When a database is created, it has a default buffer pool, called IBMDEFAULTBP, with a size determined by the platform. This buffer pool has a secondary ID of "1". In addition to this buffer pool and any buffer pools that you create, a set of system buffer pools is created by default, each corresponding to a different page size. IDs for these buffer pools can appear in snapshots for pool_secondary_id:
- System 32k buffer pool
- System 16k buffer pool
- System 8k buffer pool
- System 4k buffer pool
pool_sync_data_gbp_reads - Synchronous group buffer pool data reads monitor element
In a DB2 pureScale environment, the number of times a data page was expected to be in the local buffer pool, but was instead retrieved from the group buffer pool. This value is 0 for environments outside of a DB2 pureScale environment.
pool_sync_data_reads - Synchronous buffer pool data reads monitor element
The number of times a data page was expected to be in the buffer pool, but was instead read from disk.
pool_sync_index_gbp_reads - Synchronous group buffer pool index reads monitor element
In a DB2 pureScale environment, the number of times an index page was expected to be in the local buffer pool, but was instead retrieved from the group buffer pool. This value is 0 for environments outside of a DB2 pureScale environment.
pool_sync_index_reads - Synchronous buffer pool index reads monitor element
The number of times an index page was expected to be in the buffer pool, but was instead read from disk.
pool_sync_xda_gbp_reads - Synchronous group buffer pool XDA data reads monitor element
In a DB2 pureScale environment, the number of times an XML page was expected to be in the local buffer pool, but was instead retrieved from the group buffer pool. This value is 0 for environments outside of a DB2 pureScale environment.
pool_sync_xda_reads - Synchronous buffer pool XDA data reads monitor element
The number of times an XML page was expected to be in the buffer pool, but was instead read from disk.
pool_temp_col_l_reads - Buffer pool column-organized temporary logical reads monitor element
Indicates the number of column-organized pages which have been requested from the buffer pool (logical) for temporary table spaces.
Event Type | Logical Data Grouping | Monitor Switch |
---|---|---|
Activities | event_activity (reported in the details_xml document) | ACTIVITY METRICS BASE |
Activities | event_activitymetrics | ACTIVITY METRICS BASE |
Statistics | event_scstats (reported in the metrics document) | REQUEST METRICS BASE |
Statistics | event_wlstats (reported in the metrics document) | REQUEST METRICS BASE |
Activities | event_activity | Buffer Pool, Statement |
Package cache | Reported in the activity_metrics document. | ACTIVITY METRICS BASE |
Usage
In conjunction with the pool_temp_col_p_reads element, you can calculate the data page hit ratio for buffer pools located in temporary table spaces.
For more information, see Formulas for calculating buffer pool hit ratios.
Column-organized monitor elements can help you understand what portion of the I/O is being driven by access to column-organized tables when a workload includes a mixture of both row-organized and column-organized tables.
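A minimal sketch of that hit ratio calculation, assuming both elements are exposed as columns of MON_GET_BUFFERPOOL in your release:
SELECT bp_name,
       -- fraction of temporary column-organized logical reads satisfied without physical I/O
       (1.0 - CAST(pool_temp_col_p_reads AS DOUBLE)
              / NULLIF(pool_temp_col_l_reads, 0)) * 100 AS temp_col_hit_pct
FROM TABLE(MON_GET_BUFFERPOOL(NULL, -2)) AS t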
pool_temp_col_p_reads - Buffer pool column-organized temporary physical reads monitor element
Indicates the number of column-organized pages read in from the table space containers (physical) for temporary table spaces.
Event Type | Logical Data Grouping | Monitor Switch |
---|---|---|
Activities | event_activity (reported in the details_xml document) | ACTIVITY METRICS BASE |
Activities | event_activitymetrics | ACTIVITY METRICS BASE |
Statistics | event_scstats (reported in the details_xml document) | REQUEST METRICS BASE |
Statistics | event_wlstats (reported in the details_xml document) | REQUEST METRICS BASE |
Package cache | Reported in the activity_metrics document. | ACTIVITY METRICS BASE |
Unit of work | Reported in the system_metrics document. | REQUEST METRICS BASE |
Usage
In conjunction with the pool_temp_col_l_reads element, you can calculate the data page hit ratio for buffer pools located in temporary table spaces. For more information, see Formulas for calculating buffer pool hit ratios.
pool_temp_data_l_reads - Buffer pool temporary data logical reads monitor element
Indicates the number of data pages which have been requested from the buffer pool (logical) for temporary table spaces.
Snapshot Level | Logical Data Grouping | Monitor Switch |
---|---|---|
Database | dbase | Buffer Pool |
Table Space | tablespace | Buffer Pool |
Buffer Pool | bufferpool | Buffer Pool |
Application | appl | Buffer Pool |
Application | stmt | Buffer Pool |
Dynamic SQL | dynsql | Buffer Pool, Statement |
Event Type | Logical Data Grouping | Monitor Switch |
---|---|---|
Activities | event_activity (reported in the details_xml document) | ACTIVITY METRICS BASE |
Activities | event_activitymetrics | ACTIVITY METRICS BASE |
Statistics | event_scstats (reported in the metrics document) | REQUEST METRICS BASE |
Statistics | event_wlstats (reported in the metrics document) | REQUEST METRICS BASE |
Database | event_db | Always collected |
Tablespaces | event_tablespace | Always collected |
Connection | event_conn | Always collected |
Statement | event_stmt | Always collected |
Activities | event_activity | Buffer Pool, Statement |
Package cache | Reported in the activity_metrics document. | ACTIVITY METRICS BASE |
Usage
In conjunction with the pool_temp_data_p_reads element, you can calculate the data page hit ratio for buffer pools located in temporary table spaces.
For more information, see Formulas for calculating buffer pool hit ratios.
pool_temp_data_p_reads - Buffer pool temporary data physical reads monitor element
Indicates the number of data pages read in from the table space containers (physical) for temporary table spaces.
Snapshot Level | Logical Data Grouping | Monitor Switch |
---|---|---|
Database | dbase | Buffer Pool |
Table Space | tablespace | Buffer Pool |
Buffer Pool | bufferpool | Buffer Pool |
Application | appl | Buffer Pool |
Application | stmt | Buffer Pool |
Dynamic SQL | dynsql | Buffer Pool, Statement |
Event Type | Logical Data Grouping | Monitor Switch |
---|---|---|
Activities | event_activity (reported in the details_xml document) | ACTIVITY METRICS BASE |
Activities | event_activitymetrics | ACTIVITY METRICS BASE |
Statistics | event_scstats (reported in the details_xml document) | REQUEST METRICS BASE |
Statistics | event_wlstats (reported in the details_xml document) | REQUEST METRICS BASE |
Database | event_db | Always collected |
Tablespaces | event_tablespace | Always collected |
Connection | event_conn | Always collected |
Statement | event_stmt | Always collected |
Activities | event_activity | Buffer Pool, Statement |
Package cache | Reported in the activity_metrics document. | ACTIVITY METRICS BASE |
Unit of work | Reported in the system_metrics document. | REQUEST METRICS BASE |
Usage
The functionality to record buffer pool information at the statement level is supported for API and CLP snapshot requests.
In conjunction with the pool_temp_data_l_reads element, you can calculate the data page hit ratio for buffer pools located in temporary table spaces. For more information, see Formulas for calculating buffer pool hit ratios.
pool_temp_index_l_reads - Buffer pool temporary index logical reads monitor element
Indicates the number of index pages which have been requested from the buffer pool (logical) for temporary table spaces.
Snapshot Level | Logical Data Grouping | Monitor Switch |
---|---|---|
Database | dbase | Buffer Pool |
Table Space | tablespace | Buffer Pool |
Buffer Pool | bufferpool | Buffer Pool |
Application | appl | Buffer Pool |
Application | stmt | Buffer Pool |
Dynamic SQL | dynsql | Buffer Pool, Statement |
Event Type | Logical Data Grouping | Monitor Switch |
---|---|---|
Activities | event_activity (reported in the details_xml document) | ACTIVITY METRICS BASE |
Activities | event_activitymetrics | ACTIVITY METRICS BASE |
Statistics | event_scstats (reported in the metrics document) | REQUEST METRICS BASE |
Statistics | event_wlstats (reported in the metrics document) | REQUEST METRICS BASE |
Database | event_db | Always collected |
Tablespaces | event_tablespace | Always collected |
Connection | event_conn | Always collected |
Statement | event_stmt | Always collected |
Activities | event_activity | Buffer Pool, Statement |
Package cache | Reported in the activity_metrics document. | ACTIVITY METRICS BASE |
Usage
Use this element, in conjunction with the pool_temp_index_p_reads element, to calculate the index page hit ratio for buffer pools located in temporary table spaces. For more information, see Formulas for calculating buffer pool hit ratios.
pool_temp_index_p_reads - Buffer pool temporary index physical reads monitor element
Indicates the number of index pages read in from the table space containers (physical) for temporary table spaces.
Snapshot Level | Logical Data Grouping | Monitor Switch |
---|---|---|
Database | dbase | Buffer Pool |
Table Space | tablespace | Buffer Pool |
Buffer Pool | bufferpool | Buffer Pool |
Application | appl | Buffer Pool |
Application | stmt | Buffer Pool |
Dynamic SQL | dynsql | Buffer Pool, Statement |
Event Type | Logical Data Grouping | Monitor Switch |
---|---|---|
Activities | event_activity (reported in the details_xml document) | ACTIVITY METRICS BASE |
Activities | event_activitymetrics | ACTIVITY METRICS BASE |
Statistics | event_scstats (reported in the details_xml document) | REQUEST METRICS BASE |
Statistics | event_wlstats (reported in the details_xml document) | REQUEST METRICS BASE |
Database | event_db | Always collected |
Tablespaces | event_tablespace | Always collected |
Connection | event_conn | Always collected |
Statement | event_stmt | Always collected |
Activities | event_activity | Buffer Pool, Statement |
Package cache | Reported in the activity_metrics document. | ACTIVITY METRICS BASE |
Unit of work | Reported in the system_metrics document. | REQUEST METRICS BASE |
Usage
Use this element, in conjunction with the pool_temp_index_l_reads element, to calculate the index page hit ratio for buffer pools located in temporary table spaces. For more information, see Formulas for calculating buffer pool hit ratios.
pool_temp_xda_l_reads - Buffer pool temporary XDA data logical reads monitor element
Indicates the number of pages for XML storage object (XDA) data which have been requested from the buffer pool (logical) for temporary table spaces.
Snapshot Level | Logical Data Grouping | Monitor Switch |
---|---|---|
Database | dbase | Buffer Pool |
Table Space | tablespace | Buffer Pool |
Buffer Pool | bufferpool | Buffer Pool |
Application | appl | Buffer Pool |
Application | stmt | Buffer Pool |
Dynamic SQL | dynsql | Buffer Pool, Statement |
Event Type | Logical Data Grouping | Monitor Switch |
---|---|---|
Activities | event_activity (reported in the details_xml document) | ACTIVITY METRICS BASE |
Activities | event_activitymetrics | ACTIVITY METRICS BASE |
Statistics | event_scstats (reported in the metrics document) | REQUEST METRICS BASE |
Statistics | event_wlstats (reported in the metrics document) | REQUEST METRICS BASE |
Database | event_db | Always collected |
Tablespaces | event_tablespace | Always collected |
Connection | event_conn | Always collected |
Statement | event_stmt | Always collected |
Activities | event_activity | Buffer Pool, Statement |
Package cache | Reported in the activity_metrics document. | ACTIVITY METRICS BASE |
Usage
Use this element, in conjunction with the pool_temp_xda_p_reads element, to calculate the page hit ratio for buffer pools located in temporary table spaces, using a formula like the following:
1 - ((pool_temp_data_p_reads + pool_temp_xda_p_reads) / (pool_temp_data_l_reads + pool_temp_xda_l_reads))
pool_temp_xda_p_reads - Buffer pool temporary XDA data physical reads monitor element
Indicates the number of pages for XML storage object (XDA) data read in from the table space containers (physical) for temporary table spaces.
Snapshot Level | Logical Data Grouping | Monitor Switch |
---|---|---|
Database | dbase | Buffer Pool |
Table Space | tablespace | Buffer Pool |
Buffer Pool | bufferpool | Buffer Pool |
Application | appl | Buffer Pool |
Application | stmt | Buffer Pool |
Dynamic SQL | dynsql | Buffer Pool, Statement |
Event Type | Logical Data Grouping | Monitor Switch |
---|---|---|
Activities | event_activity (reported in the details_xml document) | ACTIVITY METRICS BASE |
Activities | event_activitymetrics | ACTIVITY METRICS BASE |
Statistics | event_scstats (reported in the details_xml document) | REQUEST METRICS BASE |
Statistics | event_wlstats (reported in the details_xml document) | REQUEST METRICS BASE |
Database | event_db | Always collected |
Tablespaces | event_tablespace | Always collected |
Connection | event_conn | Always collected |
Statement | event_stmt | Always collected |
Activities | event_activity | Buffer Pool, Statement |
Package cache | Reported in the activity_metrics document. | ACTIVITY METRICS BASE |
Unit of work | Reported in the system_metrics document. | REQUEST METRICS BASE |
Usage
Use this element, in conjunction with the pool_temp_xda_l_reads element, to calculate the page hit ratio for buffer pools located in temporary table spaces, using a formula like the following:
1 - ((pool_temp_data_p_reads + pool_temp_xda_p_reads) / (pool_temp_data_l_reads + pool_temp_xda_l_reads))
pool_watermark - Memory Pool Watermark monitor element
The largest size of a memory pool since its creation. The value is given in bytes.
- Element identifier
- pool_watermark
- Element type
- Information
Snapshot Level | Logical Data Grouping | Monitor Switch |
---|---|---|
Database Manager | memory_pool | Basic |
Database | memory_pool | Basic |
Application | memory_pool | Basic |
Event Type | Logical Data Grouping | Monitor Switch |
---|---|---|
Database | event_dbmemuse | Always collected |
Connection | event_connmemuse | Always collected |
- Usage
- On continuously running systems, you can use the pool_watermark and pool_config_size elements together to predict potential memory problems.
For example, take a snapshot at regular intervals (for example, daily), and examine the pool_watermark and pool_config_size values. If you observe that the value of pool_watermark is becoming increasingly close to pool_config_size (an early indication of potential memory-related problems), this can indicate that you should increase the size of the memory pool.
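One way to script this check is sketched below; it assumes the snapshot administrative view SYSIBMADM.SNAPDB_MEMORY_POOL exposes these elements in your release:
SELECT pool_id,
       pool_secondary_id,
       pool_watermark,
       pool_config_size,
       -- how close the high-water mark is to the configured limit
       pool_watermark * 100 / NULLIF(pool_config_size, 0) AS pct_of_config
FROM SYSIBMADM.SNAPDB_MEMORY_POOL
ORDER BY pct_of_config DESC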
pool_write_time - Total buffer pool physical write time monitor element
The total amount of time spent physically writing data or index pages from the buffer pool to disk. This value is reported in milliseconds.
Snapshot Level | Logical Data Grouping | Monitor Switch |
---|---|---|
Database | dbase | Buffer Pool |
Table Space | tablespace | Buffer Pool |
Buffer Pool | bufferpool | Buffer Pool |
Application | appl | Buffer Pool |
Event Type | Logical Data Grouping | Monitor Switch |
---|---|---|
Activities | event_activity (reported in the details_xml document) | ACTIVITY METRICS BASE |
Activities | event_activitymetrics | ACTIVITY METRICS BASE |
Statistics | event_scstats (reported in the metrics document) | REQUEST METRICS BASE |
Statistics | event_wlstats (reported in the metrics document) | REQUEST METRICS BASE |
Unit of work | Reported in the system_metrics document. | REQUEST METRICS BASE |
Database | event_db | Always collected |
Tablespaces | event_tablespace | Always collected |
Connection | event_conn | Always collected |
Package cache | Reported in the activity_metrics document. | ACTIVITY METRICS BASE |
Usage
Use this element with the pool_data_writes and pool_index_writes monitor elements to calculate the average page-write time. This average is important because it can indicate the presence of an I/O wait, which in turn can mean that you should move data to a different device.
At the database and table space levels, this element includes the value of the pool_async_write_time monitor element.
This element is also returned by the following interfaces:
- MON_GET_DATABASE
- MON_GET_DATABASE_DETAILS
- MON_GET_BUFFERPOOL
- MON_GET_TABLESPACE
- MON_GET_CONTAINER
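A sketch of the average page-write time, symmetric to the read-time example earlier and under the same assumption that these elements are columns of MON_GET_BUFFERPOOL:
SELECT bp_name,
       -- average milliseconds per physical page write
       CAST(pool_write_time AS DOUBLE)
       / NULLIF(pool_data_writes + pool_index_writes + pool_xda_writes, 0)
         AS avg_write_time_ms
FROM TABLE(MON_GET_BUFFERPOOL(NULL, -2)) AS t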
pool_xda_gbp_indep_pages_found_in_lbp - Group buffer pool XDA independent pages found in local buffer pool monitor element
The number of group buffer pool (GBP) independent XML storage object (XDA) data pages found in a local buffer pool (LBP) by an agent.
Event Type | Logical Data Grouping | Monitor Switch |
---|---|---|
Activities | event_activitymetrics | ACTIVITY METRICS BASE |
Package Cache | pkgcache_metrics | ACTIVITY METRICS BASE |
Statistics | event_scmetrics | REQUEST METRICS BASE |
Statistics | event_wlmetrics | REQUEST METRICS BASE |
Unit of Work | uow_metrics | REQUEST METRICS BASE |
pool_xda_gbp_invalid_pages - Group buffer pool invalid XDA data pages monitor element
The number of times a request for a data page for an XML storage object (XDA) was made from the group buffer pool due to the page being marked invalid in the local buffer pool. Outside of a DB2 pureScale environment, this value is null.
Event Type | Logical Data Grouping | Monitor Switch |
---|---|---|
Activities | event_activity (reported in the details_xml document) | ACTIVITY METRICS BASE |
Activities | event_activitymetrics | ACTIVITY METRICS BASE |
Statistics | event_scstats (reported in the metrics document) | REQUEST METRICS BASE |
Statistics | event_wlstats (reported in the metrics document) | REQUEST METRICS BASE |
Unit of work | Reported in the system_metrics document. | REQUEST METRICS BASE |
Package cache | Reported in the activity_metrics document. | ACTIVITY METRICS BASE |
Usage
To determine how many times a requested XDA page was found in the local buffer pool, use the following formula:
(pool_xda_lbp_pages_found - pool_async_xda_lbp_pages_found) / pool_xda_l_reads
To determine how many times a requested XDA page was found in the group buffer pool, use the following formula:
(pool_xda_gbp_l_reads - pool_xda_gbp_p_reads) / pool_xda_gbp_l_reads
Local buffer pool and group buffer pool hit rates are both important factors in the overall performance of the cluster caching facility. Using these formulas can help you determine whether the local or group buffer pool may be a limiting factor on your database's throughput.
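The two hit rates can be computed side by side; this is a sketch assuming the elements are columns of MON_GET_BUFFERPOOL (the GBP columns are meaningful only in a DB2 pureScale environment):
SELECT member,
       -- local buffer pool XDA hit percentage
       (pool_xda_lbp_pages_found - pool_async_xda_lbp_pages_found) * 100.0
         / NULLIF(pool_xda_l_reads, 0) AS xda_lbp_hit_pct,
       -- group buffer pool XDA hit percentage (DB2 pureScale only)
       (pool_xda_gbp_l_reads - pool_xda_gbp_p_reads) * 100.0
         / NULLIF(pool_xda_gbp_l_reads, 0) AS xda_gbp_hit_pct
FROM TABLE(MON_GET_BUFFERPOOL(NULL, -2)) AS t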
pool_xda_gbp_l_reads - Group buffer pool XDA data logical read requests monitor element
The number of times a GBP dependent data page for an XML storage object (XDA) was attempted to be read from the group buffer pool because the page was either invalid or not present in the local buffer pool. Outside of a DB2 pureScale environment, this value is null.
Event Type | Logical Data Grouping | Monitor Switch |
---|---|---|
Activities | event_activity (reported in the details_xml document) | ACTIVITY METRICS BASE |
Activities | event_activitymetrics | ACTIVITY METRICS BASE |
Statistics | event_scstats (reported in the metrics document) | REQUEST METRICS BASE |
Statistics | event_wlstats (reported in the metrics document) | REQUEST METRICS BASE |
Unit of work | Reported in the system_metrics document. | REQUEST METRICS BASE |
Package cache | Reported in the activity_metrics document. | ACTIVITY METRICS BASE |
Usage
To determine how many times a requested XDA page was found in the local buffer pool, use the following formula:
(pool_xda_lbp_pages_found - pool_async_xda_lbp_pages_found) / pool_xda_l_reads
To determine how many times a requested XDA page was found in the group buffer pool, use the following formula:
(pool_xda_gbp_l_reads - pool_xda_gbp_p_reads) / pool_xda_gbp_l_reads
Local buffer pool and group buffer pool hit rates are both important factors in the overall performance of the cluster caching facility. Using these formulas can help you determine whether the local or group buffer pool may be a limiting factor on your database's throughput.
pool_xda_gbp_p_reads - Group buffer pool XDA data physical read requests monitor element
The number of times a GBP dependent data page for an XML storage object (XDA) was read into the local buffer pool from disk because it was not found in the group buffer pool. Outside of a DB2 pureScale environment, this value is null.
Event Type | Logical Data Grouping | Monitor Switch |
---|---|---|
Activities | event_activity (reported in the details_xml document) | ACTIVITY METRICS BASE |
Activities | event_activitymetrics | ACTIVITY METRICS BASE |
Statistics | event_scstats (reported in the metrics document) | REQUEST METRICS BASE |
Statistics | event_wlstats (reported in the metrics document) | REQUEST METRICS BASE |
Unit of work | Reported in the system_metrics document. | REQUEST METRICS BASE |
Package cache | Reported in the activity_metrics document. | ACTIVITY METRICS BASE |
Usage
To determine how many times a requested XDA page was found in the local buffer pool, use the following formula:
(pool_xda_lbp_pages_found - pool_async_xda_lbp_pages_found) / pool_xda_l_reads
To determine how many times a requested XDA page was found in the group buffer pool, use the following formula:
(pool_xda_gbp_l_reads - pool_xda_gbp_p_reads) / pool_xda_gbp_l_reads
Local buffer pool and group buffer pool hit rates are both important factors in the overall performance of the cluster caching facility. Using these formulas can help you determine whether the local or group buffer pool may be a limiting factor on your database's throughput.
pool_xda_l_reads - Buffer pool XDA data logical reads monitor element
Indicates the number of data pages for XML storage objects (XDAs) which have been requested from the buffer pool (logical) for regular and large table spaces.
Snapshot Level | Logical Data Grouping | Monitor Switch |
---|---|---|
Database | dbase | Buffer Pool |
Table Space | tablespace | Buffer Pool |
Buffer Pool | bufferpool | Buffer Pool |
Application | appl | Buffer Pool |
Application | stmt | Buffer Pool |
Dynamic SQL | dynsql | Buffer Pool, Statement |
Event Type | Logical Data Grouping | Monitor Switch |
---|---|---|
Activities | event_activity (reported in the details_xml document) | ACTIVITY METRICS BASE |
Activities | event_activitymetrics | ACTIVITY METRICS BASE |
Statistics | event_scstats (reported in the details_xml document) | REQUEST METRICS BASE |
Statistics | event_wlstats (reported in the details_xml document) | REQUEST METRICS BASE |
Unit of work | Reported in the system_metrics document. | REQUEST METRICS BASE |
Database | event_db | Always collected |
Tablespaces | event_tablespace | Always collected |
Connection | event_conn | Always collected |
Statement | event_stmt | Always collected |
Activities | event_activity | Buffer Pool, Statement |
Package cache | Reported in the activity_metrics document. | ACTIVITY METRICS BASE |
Usage
This count includes pages that are:
- Already in the buffer pool when the database manager needs to process the page
- Read into the buffer pool before the database manager can process the page
Use the pool_xda_l_reads, pool_xda_p_reads, pool_data_l_reads, and pool_data_p_reads monitor elements to calculate the data page hit ratio for the buffer pool. For more information, see Formulas for calculating buffer pool hit ratios.
You can also calculate the overall local buffer pool hit ratio, which takes into account all of the pages (data, index, XDA, and column-organized) that are cached by the buffer pool:
((pool_data_lbp_pages_found
+ pool_index_lbp_pages_found
+ pool_xda_lbp_pages_found
+ pool_col_lbp_pages_found
- pool_async_data_lbp_pages_found
- pool_async_index_lbp_pages_found
- pool_async_xda_lbp_pages_found
- pool_async_col_lbp_pages_found)
/ (pool_data_l_reads
+ pool_index_l_reads + pool_xda_l_reads + pool_col_l_reads + pool_temp_data_l_reads
+ pool_temp_xda_l_reads + pool_temp_index_l_reads + pool_temp_col_l_reads))
× 100
Increasing the buffer pool size will generally improve the hit ratio, but you will reach a point of diminishing returns. Ideally, if you could allocate a buffer pool large enough to store your entire database, then once the system is up and running you would get a hit ratio of 100%. However, this is unrealistic in most cases. The significance of the hit ratio depends on the size of your data and the way it is accessed. A very large database where data is accessed evenly would have a poor hit ratio. There is little you can do with very large tables. In such cases, focus your attention on smaller, frequently accessed tables, and on the indexes.
pool_xda_lbp_pages_found - Local buffer pool XDA data pages found monitor element
The number of times a data page for an XML storage object (XDA) was requested from and found in the local buffer pool.
Event Type | Logical Data Grouping | Monitor Switch |
---|---|---|
Activities | event_activity (reported in the details_xml document) | ACTIVITY METRICS BASE |
Activities | event_activitymetrics | ACTIVITY METRICS BASE |
Statistics | event_scstats (reported in the metrics document) | REQUEST METRICS BASE |
Statistics | event_wlstats (reported in the metrics document) | REQUEST METRICS BASE |
Unit of work | Reported in the system_metrics document. | REQUEST METRICS BASE |
Package cache | Reported in the activity_metrics document. | ACTIVITY METRICS BASE |
Usage
To determine how many times a requested XDA page was found in the local buffer pool, use the following formula:
(pool_xda_lbp_pages_found - pool_async_xda_lbp_pages_found) / pool_xda_l_reads
To determine how many times a requested XDA page was found in the group buffer pool, use the following formula:
(pool_xda_gbp_l_reads - pool_xda_gbp_p_reads) / pool_xda_gbp_l_reads
Local buffer pool and group buffer pool hit rates are both important factors in the overall performance of the cluster caching facility. Using these formulas can help you determine whether the local or group buffer pool may be a limiting factor on your database's throughput.
pool_xda_p_reads - Buffer pool XDA data physical reads monitor element
Indicates the number of data pages for XML storage objects (XDAs) read in from the table space containers (physical) for regular and large table spaces.
Snapshot Level | Logical Data Grouping | Monitor Switch |
---|---|---|
Database | dbase | Buffer Pool |
Table Space | tablespace | Buffer Pool |
Buffer Pool | bufferpool | Buffer Pool |
Application | appl | Buffer Pool |
Application | stmt | Buffer Pool |
Dynamic SQL | dynsql | Buffer Pool, Statement |
Event Type | Logical Data Grouping | Monitor Switch |
---|---|---|
Activities | event_activity (reported in the details_xml document) | ACTIVITY METRICS BASE |
Activities | event_activitymetrics | ACTIVITY METRICS BASE |
Statistics | event_scstats (reported in the metrics document) | REQUEST METRICS BASE |
Statistics | event_wlstats (reported in the metrics document) | REQUEST METRICS BASE |
Unit of work | Reported in the system_metrics document. | REQUEST METRICS BASE |
Database | event_db | Always collected |
Tablespaces | event_tablespace | Always collected |
Connection | event_conn | Always collected |
Statement | event_stmt | Always collected |
Activities | event_activity | Buffer Pool, Statement |
Package cache | Reported in the activity_metrics document. | ACTIVITY METRICS BASE |
Usage
The number of XDA physical reads performed synchronously by database agents can be calculated as follows:
pool_xda_p_reads + pool_temp_xda_p_reads - pool_async_xda_reads
By comparing the ratio of asynchronous to synchronous reads, you can gain insight into how well the prefetchers are working. This element can be helpful when you are tuning the num_ioservers configuration parameter.
Use the pool_xda_l_reads, pool_xda_p_reads, pool_data_l_reads, and pool_data_p_reads monitor elements to calculate the data page hit ratio for the buffer pool. For more information, see Formulas for calculating buffer pool hit ratios.
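A brief sketch of the synchronous versus asynchronous comparison, assuming pool_async_xda_reads and the XDA physical-read counters are columns of MON_GET_BUFFERPOOL:
SELECT bp_name,
       -- XDA physical reads performed synchronously by agents
       pool_xda_p_reads + pool_temp_xda_p_reads - pool_async_xda_reads AS sync_xda_reads,
       -- XDA physical reads performed by prefetchers
       pool_async_xda_reads AS async_xda_reads
FROM TABLE(MON_GET_BUFFERPOOL(NULL, -2)) AS t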
pool_xda_writes - Buffer pool XDA data writes monitor element
Indicates the number of times a buffer pool data page for an XML storage object (XDA) was physically written to disk.
Snapshot Level | Logical Data Grouping | Monitor Switch |
---|---|---|
Database | dbase | Buffer Pool |
Table Space | tablespace | Buffer Pool |
Buffer Pool | bufferpool | Buffer Pool |
Application | appl | Buffer Pool |
Event Type | Logical Data Grouping | Monitor Switch |
---|---|---|
Activities | event_activity (reported in the details_xml document) | ACTIVITY METRICS BASE |
Activities | event_activitymetrics | ACTIVITY METRICS BASE |
Statistics | event_scstats (reported in the metrics document) | REQUEST METRICS BASE |
Statistics | event_wlstats (reported in the metrics document) | REQUEST METRICS BASE |
Unit of work | Reported in the system_metrics document. | REQUEST METRICS BASE |
Database | event_db | Always collected |
Tablespaces | event_tablespace | Always collected |
Connection | event_conn | Always collected |
Package cache | Reported in the activity_metrics document. | ACTIVITY METRICS BASE |
Usage
This monitor element helps you to assess whether performance may be improved by increasing the number of buffer pool pages available for the database. For databases containing XML data, you should consider the ratio of buffer pool page writes to buffer pool page reads both for XML data (using the pool_xda_writes and the pool_xda_p_reads monitor elements) and for relational data types (using the pool_data_writes and the pool_data_p_reads monitor elements).
Use the pool_xda_l_reads, pool_xda_p_reads, pool_data_l_reads, and pool_data_p_reads monitor elements to calculate the data page hit ratio for the buffer pool. For more information, see Formulas for calculating buffer pool hit ratios.
port_number - Port number monitor element
The TCP/IP port that a member is listening on for client connections.
Table Function | Monitor Element Collection Level |
---|---|
DB_MEMBERS table function | Always collected |
MON_GET_SERVERLIST table function - get member priority details | Always collected |
post_shrthreshold_hash_joins - Post threshold hash joins monitor element
The total number of hash joins that were throttled back by the sort memory throttling algorithm. A throttled hash join is a hash join that was granted less memory than requested by the sort memory manager.
Snapshot Level | Logical Data Grouping | Monitor Switch |
---|---|---|
Database | dbase | - |
Event Type | Logical Data Grouping | Monitor Switch |
---|---|---|
Activities | event_activitymetrics | ACTIVITY METRICS BASE |
Database | event_db | Always collected |
Package cache | pkgcache_metrics | ACTIVITY METRICS BASE |
Statistics | event_scmetrics | REQUEST METRICS BASE |
Statistics | event_wlmmetrics | REQUEST METRICS BASE |
Unit of work | uow_metrics | REQUEST METRICS BASE |
Usage
A hash join is throttled back when the memory allocation from the shared sort heap is close to the limit set by the sheapthres_shr database configuration parameter. This throttling significantly reduces the number of overflows over the sheapthres_shr limit in a system that is not properly configured. The data reported in this element reflects only hash joins that use memory allocated from the shared sort heap.
post_shrthreshold_sorts - Post shared threshold sorts monitor element
The total number of sorts that were throttled back by the sort memory throttling algorithm. A throttled sort is a sort that was granted less memory than requested by the sort memory manager.
Snapshot Level | Logical Data Grouping | Monitor Switch |
---|---|---|
Database | dbase | Sort |
For snapshot monitoring, this counter can be reset.
Event Type | Logical Data Grouping | Monitor Switch |
---|---|---|
Activities | event_activity (reported in the details_xml document) | ACTIVITY METRICS BASE |
Activities | event_activitymetrics | ACTIVITY METRICS BASE |
Statistics | event_scstats (reported in the metrics document) | REQUEST METRICS BASE |
Statistics | event_wlstats (reported in the metrics document) | REQUEST METRICS BASE |
Unit of work | Reported in the system_metrics document. | REQUEST METRICS BASE |
Database | event_db | Always collected |
Package cache | Reported in the activity_metrics document. | ACTIVITY METRICS BASE |
Usage
A sort is throttled back when the memory allocation for sorts is close to the limit set by the sheapthres_shr database configuration parameter. This throttling significantly reduces the number of overflows over the sheapthres_shr limit in a system that is not properly configured. The data reported in this element reflects only sorts that use memory allocated from the shared sort heap.
post_threshold_col_vector_consumers - Post-threshold columnar vector memory consumers monitor element
The number of columnar vector memory consumers that requested memory after the sort heap threshold was exceeded.
Event type | Logical data grouping | Monitor switch |
---|---|---|
Activity Metrics | event_activitymetrics | ACTIVITY METRICS BASE |
Package Cache Metrics | pkgcache_metrics | ACTIVITY METRICS BASE |
Statistics Metrics | event_scmetrics | REQUEST METRICS BASE |
Statistics Metrics | event_wlmetrics | REQUEST METRICS BASE |
Unit of Work Metrics | uow_metrics | REQUEST METRICS BASE |
Usage
Use the post_threshold_col_vector_consumers monitor element to help configure the amount of sort heap memory that is available to applications.
When the sort heap threshold is exceeded, the database manager responds in one of the following ways:
- It throttles the use of sort memory for the sort memory consumer.
- If the sort memory consumer cannot continue with less memory, the database manager suspends the sort memory consumer operation and writes it to disk to free sort memory.
post_threshold_hash_grpbys - Hash GROUP BY threshold monitor element
The total number of hashed GROUP BY sort memory requests that were limited because of a concurrent use of the shared or private sort heap space.
Event Type | Logical Data Grouping | Monitor Switch |
---|---|---|
Activities | event_activitymetrics | ACTIVITY METRICS BASE |
Package Cache | pkgcache_metrics | ACTIVITY METRICS BASE |
Statistics | event_scmetrics | REQUEST METRICS BASE |
Statistics | event_wlmetrics | REQUEST METRICS BASE |
Unit of work | uow_metrics | REQUEST METRICS BASE |
post_threshold_hash_joins - Hash Join Threshold monitor element
The total number of times that a hash join heap request was limited due to concurrent use of shared or private sort heap space.
Snapshot Level | Logical Data Grouping | Monitor Switch |
---|---|---|
Database Manager | db2 | Basic |
Event Type | Logical Data Grouping | Monitor Switch |
---|---|---|
Activities | event_activitymetrics | ACTIVITY METRICS BASE |
Package cache | pkgcache_metrics | ACTIVITY METRICS BASE |
Statistics | event_scmetrics | REQUEST METRICS BASE |
Statistics | event_wlmmetrics | REQUEST METRICS BASE |
Unit of work | uow_metrics | REQUEST METRICS BASE |
- Usage
- If this value is large (greater than 5% of hash_join_overflows), the sort heap threshold should be increased.
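A sketch of that 5% check, under the assumption that the snapshot administrative views SYSIBMADM.SNAPDBM and SYSIBMADM.SNAPDB expose these counters in your release:
SELECT dbm.post_threshold_hash_joins,
       db.hash_join_overflows,
       -- flag when throttled hash joins exceed 5% of hash join overflows
       CASE WHEN dbm.post_threshold_hash_joins * 20 > db.hash_join_overflows
            THEN 'consider raising sheapthres' ELSE 'ok' END AS advice
FROM SYSIBMADM.SNAPDBM AS dbm, SYSIBMADM.SNAPDB AS db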
post_threshold_olap_funcs - OLAP Function Threshold monitor element
The number of OLAP functions that have requested a sort heap after the sort heap threshold has been exceeded.
Snapshot Level | Logical Data Grouping | Monitor Switch |
---|---|---|
Database Manager | db2 | Basic |
Event Type | Logical Data Grouping | Monitor Switch |
---|---|---|
Activities | event_activity (in details_xml element) | ACTIVITY METRICS BASE |
Activities | event_activitymetrics | ACTIVITY METRICS BASE |
Package cache | pkgcache_metrics | ACTIVITY METRICS BASE |
Statistics | event_scstats (in metrics element) | REQUEST METRICS BASE |
Statistics | event_scmetrics | REQUEST METRICS BASE |
Statistics | event_wlstats (in metrics element) | REQUEST METRICS BASE |
Statistics | event_wlmetrics | REQUEST METRICS BASE |
Unit of work | uow_metrics | REQUEST METRICS BASE |
Usage
Sorts, hash joins, and OLAP functions are examples of operations which use a sort heap. Under normal conditions, the database manager will allocate sort heap using the value specified by the sortheap configuration parameter. If the amount of memory allocated to sort heaps exceeds the sort heap threshold (sheapthres configuration parameter), the database manager will allocate subsequent sort heaps using a value less than that specified by the sortheap configuration parameter.
OLAP functions which start after the sort heap threshold has been reached may not receive an optimum amount of memory to execute.
To improve sort, hash join, OLAP function performance, and overall system performance, modify the sort heap threshold and sort heap size configuration parameters.
If this element's value is high, increase the sort heap threshold (sheapthres).
post_threshold_peas - Partial early aggregation threshold monitor element
The number of times that partial early aggregation operations received less memory than requested due to sort heap threshold being exceeded.
Event Type | Logical Data Grouping | Monitor Switch |
---|---|---|
Activities | event_activity (reported in the details_xml document) | ACTIVITY METRICS BASE |
Statistics | event_scstats (reported in the metrics document) | REQUEST METRICS BASE |
Statistics | event_wlstats (reported in the metrics document) | REQUEST METRICS BASE |
Unit of work | Reported in the system_metrics document. | REQUEST METRICS BASE |
Connection | event_conn | - |
Statements | event_stmt | - |
Transactions | event_xact | - |
Package cache | Reported in the activity_metrics document. | ACTIVITY METRICS BASE |
Usage
Use this element in conjunction with the total_peas monitor element to determine if partial early aggregation operations are getting sufficient sort heap memory most of the time. If the ratio of the post_threshold_peas monitor element to the total_peas monitor element is high, your database performance may be sub-optimal. You should consider increasing the sort heap size or the sort heap threshold, or both.
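That ratio can be checked per workload; this is a sketch only, assuming both elements are reported by the MON_GET_WORKLOAD table function in your release:
SELECT workload_name,
       total_peas,
       post_threshold_peas,
       -- a high ratio suggests increasing sortheap and/or the sort heap threshold
       CAST(post_threshold_peas AS DOUBLE) / NULLIF(total_peas, 0) AS throttled_ratio
FROM TABLE(MON_GET_WORKLOAD(NULL, -2)) AS t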
post_threshold_peds - Partial early distincts threshold monitor element
The number of times that partial early distinct operations received less memory than requested due to sort heap threshold being exceeded.
Event Type | Logical Data Grouping | Monitor Switch |
---|---|---|
Activities | event_activity (reported in the details_xml document) | ACTIVITY METRICS BASE |
Statistics | event_scstats (reported in the metrics document) | REQUEST METRICS BASE |
Statistics | event_wlstats (reported in the metrics document) | REQUEST METRICS BASE |
Unit of work | Reported in the system_metrics document. | REQUEST METRICS BASE |
Connection | event_conn | - |
Statements | event_stmt | - |
Transactions | event_xact | - |
Package cache | Reported in the activity_metrics document. | ACTIVITY METRICS BASE |
Usage
Use this element in conjunction with the total_peds monitor element to determine if partial early distinct operations are getting sufficient sort heap memory most of the time. If the ratio of the post_threshold_peds monitor element to the total_peds monitor element is high, your database performance may be sub-optimal. You should consider increasing the sort heap size or the sort heap threshold, or both.
post_threshold_sorts - Post threshold sorts monitor element
The number of sorts that have requested heaps after the sort heap threshold has been exceeded.
Snapshot Level | Logical Data Grouping | Monitor Switch |
---|---|---|
Database Manager | db2 | Sort |
Event Type | Logical Data Grouping | Monitor Switch |
---|---|---|
Activities | event_activity (reported in the details_xml document) | ACTIVITY METRICS BASE |
Activities | event_activitymetrics | ACTIVITY METRICS BASE |
Statistics | event_scstats (reported in the metrics document) | REQUEST METRICS BASE |
Statistics | event_wlstats (reported in the metrics document) | REQUEST METRICS BASE |
Unit of work | Reported in the system_metrics document. | REQUEST METRICS BASE |
Package cache | Reported in the activity_metrics document. | ACTIVITY METRICS BASE |
Usage
Under normal conditions, the database manager will allocate sort heap using the value specified by the sortheap configuration parameter. If the amount of memory allocated to sort heaps exceeds the sort heap threshold (sheapthres configuration parameter), the database manager will allocate sort heap using a value less than that specified by the sortheap configuration parameter.
If the value of this element is high, consider one of the following actions:
- Increase the sort heap threshold (sheapthres), or
- Adjust applications to use fewer or smaller sorts via SQL query changes.
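To see how often sorts run past the threshold relative to all sorts, a sketch like the following can be used, assuming both elements are reported by MON_GET_WORKLOAD in your release:
SELECT workload_name,
       total_sorts,
       post_threshold_sorts,
       -- fraction of sorts that started after the threshold was exceeded
       CAST(post_threshold_sorts AS DOUBLE) / NULLIF(total_sorts, 0) AS post_threshold_ratio
FROM TABLE(MON_GET_WORKLOAD(NULL, -2)) AS t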
prefetch_wait_time - Time waited for prefetch monitor element
The time an application spent waiting for an I/O server (prefetcher) to finish loading pages into the buffer pool. The value is given in milliseconds.
Snapshot Level | Logical Data Grouping | Monitor Switch |
---|---|---|
Database | dbase | Buffer Pool |
Application | appl | Buffer Pool |
Event Type | Logical Data Grouping | Monitor Switch |
---|---|---|
Activities | event_activity (reported in details_xml document) | ACTIVITY METRICS BASE |
Package Cache | pkgcache (reported in the metrics.xml document) | ACTIVITY METRICS BASE |
Package Cache | pkgcache_metrics | ACTIVITY METRICS BASE |
Unit of Work | uow (reported in the metrics.xml document) | REQUEST METRICS BASE |
Unit of Work | uow_metrics | REQUEST METRICS BASE |
Statistics | event_scstats (reported in the metrics document) | REQUEST METRICS BASE |
Statistics | event_wlstats (reported in the metrics document) | REQUEST METRICS BASE |
Database | event_db | Buffer Pool |
Connection | event_conn | Buffer Pool |
- Usage
- This element can be used to experiment with changing the number of I/O servers and I/O server sizes.
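For example, the average time spent per prefetch wait can be sketched as follows, assuming prefetch_wait_time and prefetch_waits are reported by the MON_GET_CONNECTION table function in your release:
SELECT application_handle,
       prefetch_waits,
       prefetch_wait_time,
       -- average milliseconds spent per prefetch wait
       CAST(prefetch_wait_time AS DOUBLE) / NULLIF(prefetch_waits, 0) AS avg_wait_ms
FROM TABLE(MON_GET_CONNECTION(NULL, -2)) AS t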
prefetch_waits - Prefetcher wait count monitor element
The number of times waited for an I/O server (prefetcher) to finish loading pages into the buffer pool.
Event Type | Logical Data Grouping | Monitor Switch |
---|---|---|
Activities | event_activity (reported in details_xml document) | ACTIVITY METRICS BASE |
Activities | event_activitymetrics | ACTIVITY METRICS BASE |
Package Cache | pkgcache (reported in the metrics.xml document) | ACTIVITY METRICS BASE |
Package Cache | pkgcache_metrics | ACTIVITY METRICS BASE |
Unit of Work | uow (reported in the metrics.xml document) | REQUEST METRICS BASE |
Unit of Work | uow_metrics | REQUEST METRICS BASE |
Statistics | event_scstats (reported in the metrics document) | REQUEST METRICS BASE |
Statistics | event_wlstats (reported in the metrics document) | REQUEST METRICS BASE |
prep_time - Preparation time monitor element
Time in milliseconds required to prepare an SQL statement (if the activity is an SQL statement; otherwise, the value is 0).
Table Function | Monitor Element Collection Level |
---|---|
MON_GET_PKG_CACHE_STMT table function - Get SQL statement activity metrics in the package cache | Always collected |
MON_GET_PKG_CACHE_STMT_DETAILS table function - get detailed metrics for package cache entries | Always collected |
Event Type | Logical Data Grouping | Monitor Switch |
---|---|---|
Activities | event_activity | Always collected |
Package cache | - | Always collected |
Usage
The prep_time monitor element indicates how much time was spent preparing the SQL statement, if this activity was an SQL statement, when the statement was first introduced to the DB2 package cache. This preparation time is not part of the activity lifetime, nor does it represent time spent during a specific invocation of the statement if the statement was already cached in the package cache before that invocation.
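To find the statements that were most expensive to compile, a sketch like the following can be used; it assumes PREP_TIME and STMT_TEXT are columns of MON_GET_PKG_CACHE_STMT in your release:
SELECT prep_time,
       num_executions,
       SUBSTR(stmt_text, 1, 80) AS statement
FROM TABLE(MON_GET_PKG_CACHE_STMT(NULL, NULL, NULL, -2)) AS t
ORDER BY prep_time DESC
FETCH FIRST 10 ROWS ONLY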
prep_time_best - Statement best preparation time monitor element
The shortest amount of time in milliseconds that was required to prepare a specific SQL statement.
Snapshot Level | Logical Data Grouping | Monitor Switch |
---|---|---|
Dynamic SQL | dynsql | Basic |
Usage
Use this value in conjunction with prep_time_worst to identify SQL statements that are expensive to compile.
prep_time_worst - Statement worst preparation time monitor element
The longest amount of time in milliseconds that was required to prepare a specific SQL statement.
Snapshot Level | Logical Data Grouping | Monitor Switch |
---|---|---|
Dynamic SQL | dynsql | Basic |
Usage
Use this value in conjunction with prep_time_best to identify SQL statements that are expensive to compile.
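Both values can be retrieved through the dynamic SQL snapshot; this sketch assumes the SYSIBMADM.SNAPDYN_SQL administrative view exposes them in your release:
SELECT prep_time_worst,
       prep_time_best,
       num_compilations,
       SUBSTR(stmt_text, 1, 80) AS statement
FROM SYSIBMADM.SNAPDYN_SQL
ORDER BY prep_time_worst DESC
FETCH FIRST 10 ROWS ONLY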
prep_warning - Prepare warning SQLCODE monitor element
An SQLCODE warning value that indicates if the statement was compiled suboptimally
- 437 - Statement was compiled suboptimally
- 20516 - Access plan for the statement could not be preserved
Table function | Monitor element collection level |
---|---|
MON_GET_PKG_CACHE_STMT table function - Get package cache statement metrics | Always collected |
MON_GET_PKG_CACHE_STMT_DETAILS table function - Get package cache statement metrics as an XML document | Always collected |
Event type | Logical data grouping | Monitor switch |
---|---|---|
Package cache | pkgcache | Always collected |
prep_warning_reason - Prepare warning SQLCODE reason identifier monitor element
The reason code that explains why a statement was compiled suboptimally.
Table function | Monitor element collection level |
---|---|
MON_GET_PKG_CACHE_STMT table function - Get package cache statement metrics | Always collected |
MON_GET_PKG_CACHE_STMT_DETAILS table function - Get package cache statement metrics as an XML document | Always collected |
Event type | Logical data grouping | Monitor switch |
---|---|---|
Package cache | pkgcache | Always collected |
prev_uow_stop_time - Previous Unit of Work Completion Timestamp monitor element
This is the time the unit of work completed.
Table function | Monitor element collection level |
---|---|
MON_GET_CONNECTION table function - Get connection metrics | Always collected |
MON_GET_CONNECTION_DETAILS table function - Get detailed connection metrics | Always collected |
MON_GET_UNIT_OF_WORK table function - Get unit of work metrics | Always collected |
MON_GET_UNIT_OF_WORK_DETAILS table function - Get detailed unit of work metrics | Always collected |
Snapshot Level | Logical Data Grouping | Monitor Switch |
---|---|---|
Application | appl | Unit of Work, Timestamp |
DCS Application | dcs_appl | Unit of Work, Timestamp |
Event Type | Logical Data Grouping | Monitor Switch |
---|---|---|
Transaction | event_xact | Always collected |
- Usage
- You can use this element with uow_stop_time to calculate the total elapsed time between COMMIT/ROLLBACK points, and with uow_start_time to calculate the time spent in the application between units of work.
The value of this element is one of the following times:
- For applications currently within a unit of work, this is the time that the latest unit of work completed.
- For applications not currently within a unit of work (the application has completed a unit of work, but not yet started a new one), this is the stop time of the last unit of work that completed before the one that just completed. The stop time of the one just completed is indicated by uow_stop_time.
- For applications within their first unit of work, this is the database connection request completion time.
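The "think time" between units of work can be sketched as follows, assuming uow_start_time and prev_uow_stop_time are columns of MON_GET_UNIT_OF_WORK in your release:
SELECT application_handle,
       prev_uow_stop_time,
       uow_start_time,
       -- seconds the application spent between units of work
       TIMESTAMPDIFF(2, CHAR(uow_start_time - prev_uow_stop_time)) AS think_time_seconds
FROM TABLE(MON_GET_UNIT_OF_WORK(NULL, -2)) AS t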
primary_instance - Primary instance monitor element
Instance name of the member on the primary that is processing the log stream.
Table function | Monitor element collection level |
---|---|
MON_GET_HADR table function - Returns high availability disaster recovery (HADR) monitoring information | Always collected |
primary_log_file - Primary log file monitor element
The name of the current log file on this log stream on the primary HADR database.
Table function | Monitor element collection level |
---|---|
MON_GET_HADR table function - Returns high availability disaster recovery (HADR) monitoring information | Always collected |
primary_log_page - Primary log page monitor element
The page number in the current log file corresponding to the current log position on the primary HADR database. The page number is relative to the log file. For example, page zero is the beginning of the file.
Table function | Monitor element collection level |
---|---|
MON_GET_HADR table function - Returns high availability disaster recovery (HADR) monitoring information | Always collected |
primary_log_pos - Primary log position monitor element
The current log position on this log stream on the primary HADR database. This is a byte offset.
Table function | Monitor element collection level |
---|---|
MON_GET_HADR table function - Returns high availability disaster recovery (HADR) monitoring information | Always collected |
primary_log_time - Primary log time monitor element
The latest transaction timestamp on this log stream on the primary HADR database.
Table function | Monitor element collection level |
---|---|
MON_GET_HADR table function - Returns high availability disaster recovery (HADR) monitoring information | Always collected |
primary_member - Primary member monitor element
Member ID on the primary that is processing the log stream.
Table function | Monitor element collection level |
---|---|
MON_GET_HADR table function - Returns high availability disaster recovery (HADR) monitoring information | Always collected |
primary_member_host - Primary member host monitor element
The value of the hadr_local_host configuration parameter for the member on the primary that is processing the log stream.
Table function | Monitor element collection level |
---|---|
MON_GET_HADR table function - Returns high availability disaster recovery (HADR) monitoring information | Always collected |
priority - Priority value monitor element
Describes the relative capacity of a member to handle work. The higher the value, the more work a client should drive to that member.
Table Function | Monitor Element Collection Level |
---|---|
MON_GET_SERVERLIST table function - get member priority details | Always collected |
Usage Notes
- This monitor element represents the relative load of a member, also known as the weight. For example, if member A has a priority value of 80 and member B has a priority value of 40, this means that member A should receive double the amount of work that is given to member B.
- This value does not represent a percentage.
- The maximum value of this monitor element is 100.
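A minimal sketch of retrieving the per-member weights, assuming MEMBER, HOSTNAME, and PRIORITY are columns of MON_GET_SERVERLIST in your release:
SELECT member,
       hostname,
       priority
FROM TABLE(MON_GET_SERVERLIST(-2)) AS t
ORDER BY priority DESC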
priv_workspace_num_overflows - Private Workspace Overflows monitor element
The number of times that the private workspace overflowed the bounds of its allocated memory.
Snapshot Level | Logical Data Grouping | Monitor Switch |
---|---|---|
Database | dbase | Basic |
Application | appl | Basic |
Event Type | Logical Data Grouping | Monitor Switch |
---|---|---|
Database | event_db | Always collected |
Connection | event_conn | Always collected |
- Usage
- Use this element with priv_workspace_size_top to determine whether the size of the private workspace needs to be increased to avoid overflowing. Overflows of the private workspace may cause performance degradation as well as out-of-memory errors from the other heaps allocated out of agent private memory.
At the database level, the element reported is from the same private workspace as the one reported as having the maximum private workspace size. At the application level, it is the number of overflows for the workspace of every agent that has serviced the current application.
priv_workspace_section_inserts - Private Workspace Section Inserts monitor element
Inserts of SQL sections by an application into the private workspace.
Snapshot Level | Logical Data Grouping | Monitor Switch |
---|---|---|
Database | dbase | Basic |
Application | appl | Basic |
Event Type | Logical Data Grouping | Monitor Switch |
---|---|---|
Database | event_db | Always collected |
Connection | event_conn | Always collected |
- Usage
- The working copy of executable sections is stored in the private workspace. This counter indicates when a copy was not available and had to be inserted. At the database level, it is the cumulative total of all inserts for every application across all private workspaces in the database. At the application level, it is the cumulative total of all inserts for all sections in the private workspace for this application.
In a concentrator environment where agents are being associated with different applications, additional private workspace inserts may be required as a result of a new agent not having the required section available in its private workspace.
priv_workspace_section_lookups - Private Workspace Section Lookups monitor element
Lookups of SQL sections by an application in its agents' private workspace.
Snapshot Level | Logical Data Grouping | Monitor Switch |
---|---|---|
Database | dbase | Basic |
Application | appl | Basic |
Event Type | Logical Data Grouping | Monitor Switch |
---|---|---|
Database | event_db | Always collected |
Connection | event_conn | Always collected |
- Usage
- Each application has access to the private workspace of the agent working for it.
This counter indicates how many times the private workspace was accessed in order to locate a specific section for an application. At the database level, it is the cumulative total of all lookups for every application across all private workspaces in the database. At the application level, it is the cumulative total of all lookups for all sections in the private workspace for this application.
You can use this element in conjunction with Private Workspace Section Inserts to tune the size of the private workspace. The size of the private workspace is controlled by the applheapsz configuration parameter.
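One way to compare the two counters per application is sketched below; it assumes the SYSIBMADM.SNAPAPPL administrative view exposes these elements in your release:
SELECT agent_id,
       priv_workspace_section_lookups,
       priv_workspace_section_inserts,
       -- a high insert-to-lookup ratio may suggest increasing applheapsz
       CAST(priv_workspace_section_inserts AS DOUBLE)
         / NULLIF(priv_workspace_section_lookups, 0) AS insert_ratio
FROM SYSIBMADM.SNAPAPPL AS t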
priv_workspace_size_top - Maximum Private Workspace Size monitor element
The largest size reached by the Private Workspace.
Snapshot Level | Logical Data Grouping | Monitor Switch |
---|---|---|
Database | dbase | Basic |
Application | appl | Basic |
Event Type | Logical Data Grouping | Monitor Switch |
---|---|---|
Database | event_db | Always collected |
Connection | event_conn | Always collected |
- Usage
- Each agent has a private workspace that the application it is servicing has access to. This element indicates the maximum number of bytes required from a private workspace by any agent servicing it. At the database level, it is the maximum number of bytes required of all the private workspaces for all agents attached to the current database. At the application level, it is the maximum size from among all of the agents' private workspaces that have serviced the current application.
When the private workspace overflows, memory is temporarily borrowed from other entities in agent private memory. This can result in memory shortage errors from these entities or possibly performance degradation. You can reduce the chance of overflow by increasing APPLHEAPSZ.
product_name - Product Name monitor element
Details of the version of the DB2 instance that is running.
Table Function | Monitor Element Collection Level |
---|---|
MON_GET_INSTANCE table function - Get instance level information | Always collected |
Snapshot Level | Logical Data Grouping | Monitor Switch |
---|---|---|
Database Manager | db2 | Basic |
progress_completed_units - Completed Progress Work Units monitor element
The number of work units for the current phase which have been completed.
Snapshot Level | Logical Data Grouping | Monitor Switch |
---|---|---|
Database Manager | progress | Basic |
- This element might not be included for all utilities.
- This element is expressed in units displayed by the progress_work_metric monitor element.
Usage
Use this element to determine the amount of completed work within a phase. By itself, this element can be used to monitor the activity of a running utility. This element should increase steadily as the utility executes. If progress_completed_units fails to increase over a long period of time, the utility might be stalled.
If progress_total_units is defined, then this element can be used to calculate the percentage of completed work:
percentage complete = (progress_completed_units / progress_total_units) * 100
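As a concrete sketch of this calculation, assuming the SYSIBMADM.SNAPUTIL_PROGRESS administrative view is available:
SELECT UTILITY_ID,
       PROGRESS_COMPLETED_UNITS,
       PROGRESS_TOTAL_UNITS,
       -- guard against an unreported or zero total
       CASE WHEN PROGRESS_TOTAL_UNITS > 0
            THEN (PROGRESS_COMPLETED_UNITS * 100.0) / PROGRESS_TOTAL_UNITS
       END AS percent_complete
FROM SYSIBMADM.SNAPUTIL_PROGRESS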
progress_description - Progress Description monitor element
Describes the phase of work.
Snapshot Level | Logical Data Grouping | Monitor Switch |
---|---|---|
Database Manager | progress | Basic |
Example phase descriptions include:
- DELETE
- LOAD
- REDO
Usage
Use this element to obtain a general description of a phase.
progress_list_attr - Current Progress List Attributes monitor element
This element describes how to interpret a list of progress elements.
Snapshot Level | Logical Data Grouping | Monitor Switch |
---|---|---|
Database Manager | progress_list | Basic |
Usage
Use this element to determine how the elements of a progress list are updated. Possible values:
- SQLM_ELM_PROGRESS_LIST_ATTR_SERIAL - The elements in the list are interpreted as a set of serial phases: completed work must equal the total work for element n before the completed work of element n+1 is first updated. This attribute describes the progress of a task that consists of a set of serial phases, where a phase must fully complete before the next phase begins.
- SQLM_ELM_PROGRESS_LIST_ATTR_CONCURRENT - Any element in the progress list can be updated at any time.
progress_list_cur_seq_num - Current Progress List Sequence Number monitor element
If the utility contains multiple sequential phases, then this element displays the number of the current phase.
Snapshot Level | Logical Data Grouping | Monitor Switch |
---|---|---|
Database Manager | progress_list | Basic |
Usage
Use this element to determine the current phase of a multiphase utility. See progress_seq_num - Progress Sequence Number monitor element.
progress_seq_num - Progress Sequence Number monitor element
Phase number.
Snapshot Level | Logical Data Grouping | Monitor Switch |
---|---|---|
Database Manager | progress | Basic |
Usage
Use this element to determine the order of phases within a multiphase utility. The utility executes phases serially in order of increasing progress sequence numbers. The current phase of a multiphase utility can be found by matching progress_seq_num with the value of progress_list_cur_seq_num.
progress_start_time - Progress Start Time monitor element
A timestamp representing the start of the phase.
Snapshot Level | Logical Data Grouping | Monitor Switch |
---|---|---|
Database Manager | progress | Basic |
Usage
Use this element to determine when a phase started. This element is omitted if the phase has not yet begun.
progress_total_units - Total Progress Work Units monitor element
The total amount of work that must be performed for the phase to be complete.
Snapshot Level | Logical Data Grouping | Monitor Switch |
---|---|---|
Database Manager | progress | Basic |
Some utilities might not be able to quantify the total work in advance, so they continuously update this element as they execute. Other utilities might not be able to provide an estimate for the total work at all, so this element might be omitted entirely.
This element is expressed in units displayed by the progress_work_metric monitor element.
Usage
Use this element to determine the total amount of work in the phase. Use this element with progress_completed_units to calculate the percentage of work completed within a phase:
percentage complete = (progress_completed_units / progress_total_units) * 100
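The same completed and total work figures are also reported by the LIST UTILITIES command from the CLP:
db2 list utilities show detail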
progress_work_metric - Progress Work Metric monitor element
The metric for interpreting the progress_total_units and progress_completed_units elements.
Snapshot Level | Logical Data Grouping | Monitor Switch |
---|---|---|
Database Manager | progress | Basic |
Possible values include:
- SQLM_WORK_METRIC_BYTES
- SQLM_WORK_METRIC_EXTENTS
This element might not be included for all utilities. Values for this element are defined in sqlmon.h.
Usage
Use this element to determine what progress_total_units and progress_completed_units use as their reporting metric.
pseudo_deletes - Pseudo deletes monitor element
The number of keys that have been marked pseudo deleted.
Table Function | Monitor Element Collection Level |
---|---|
MON_GET_INDEX table function - Get index metrics | Always collected |
pseudo_empty_pages - Pseudo empty pages monitor element
The number of pages that have been identified as pseudo empty. Pseudo empty pages are pages where all the keys have been pseudo deleted.
Table Function | Monitor Element Collection Level |
---|---|
MON_GET_INDEX table function - Get index metrics | Always collected |
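As an illustration of both elements, the following query reports pseudo-deleted keys and pseudo-empty pages for the indexes on a single table; the schema and table names are placeholders, and -2 requests all active members:
SELECT IID,
       PSEUDO_DELETES,
       PSEUDO_EMPTY_PAGES
FROM TABLE(MON_GET_INDEX('MYSCHEMA', 'MYTABLE', -2)) AS t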