Explicit hierarchical locking for Db2 pureScale environments

Explicit hierarchical locking (EHL) for the IBM® Db2 pureScale Feature takes advantage of the implicit internal locking hierarchy that exists between table locks, row locks, and page locks. EHL helps avoid most communication and data sharing memory usage for tables that are accessed primarily by a single member.

Table locks supersede row locks or page locks in the locking hierarchy. When a table lock is held in super exclusive mode, EHL enhances performance for Db2 pureScale instances by not propagating row locks, page locks, or page writes to the caching facility (CF).

EHL is not enabled by default in Db2 pureScale environments. You can enable or disable it by using the opt_direct_wrkld database configuration parameter. When EHL is enabled, tables that are detected to be accessed primarily by a single member are optimized to avoid CF communication. While this state is in effect, the table lock is held in super exclusive mode on that member. An attempt to access the table from a remote member automatically ends this mode.
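The parameter can be set with the Db2 command line processor. A minimal sketch, assuming a database named SAMPLE (the database name is illustrative) and that the parameter accepts the value YES:

```shell
# Enable EHL for the SAMPLE database (illustrative database name).
db2 CONNECT TO SAMPLE
db2 UPDATE DATABASE CONFIGURATION USING OPT_DIRECT_WRKLD YES

# Confirm the new value of the parameter:
db2 GET DATABASE CONFIGURATION | grep -i opt_direct_wrkld

db2 CONNECT RESET
```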

With explicit hierarchical locking (EHL) enabled for an IBM® Db2® pureScale® database, objects in the NOT_SHARED state are switched to the SHARED state if an extent reclaim operation that runs on a remote member needs to move extents that belong to the object. The EHL state remains unchanged for objects that exist locally on the member that runs the extent movement operation. In other words, the switch from NOT_SHARED to SHARED is needed only when the object resides on a member that is remote relative to the member that runs the extent movement operation.

There are two EHL states:

Table 1. EHL table states
Table state                     Description
NOT_SHARED / DIRECTED_ACCESS    The table is in the explicit hierarchical locking state. Row locks, page locks, and page writes are managed only on the local member.
SHARED / FULLY_SHARED           The table is not in the explicit hierarchical locking state. Row locks, page locks, and page writes are coordinated by using the CF.

Regular tables, range partitioned tables, and partitioned indexes can exist in either of these states, or in a transitional state between the SHARED and NOT_SHARED states.
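The current state of a table can be inspected through the DATA_SHARING_STATE column that the MON_GET_TABLE table function reports. A sketch, assuming a schema APP and table ORDERS (both names are illustrative); the member argument -2 requests data from all members:

```shell
# Report the EHL data sharing state of one table. Reported values
# include SHARED and NOT_SHARED, plus the transitional states
# BECOMING_SHARED and BECOMING_NOT_SHARED.
db2 "SELECT TABSCHEMA, TABNAME, DATA_SHARING_STATE
     FROM TABLE(MON_GET_TABLE('APP', 'ORDERS', -2)) AS t"
```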

EHL is useful for the following types of workloads, as they are able to take advantage of this optimization:
  • Grid deployments, where each application has affinity to a single member and where most data access comes only from those particular applications. In a database grid environment, a Db2 pureScale cluster hosts multiple databases, but any single database is accessed by only a single member. In this case, all the tables in each database move into the NOT_SHARED state.
  • Partitioned or partitionable workloads where work is directed such that certain tables are accessed only by a single member. These workloads include directed access workloads where applications from different members do not access the same table.
  • One-member configurations or batch window workloads that use only a single member. For example, a system might be set up for nightly batch processing with almost no OLTP activity. Because the batch workload often runs as a single application, it is the only application accessing the tables, and those tables can move into the NOT_SHARED state.

An application can be partitioned so that only certain tables are accessed by the connections to a single member. Using this partitioning approach, these tables move into the NOT_SHARED state when the opt_direct_wrkld configuration parameter is enabled.

EHL for directed workloads must avoid workload balancing (WLB). For directed workloads that do not use WLB, use client affinity and the member subsetting capability instead.
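Client affinity is typically configured on the client side in the db2dsdriver.cfg file. The fragment below is an illustrative sketch only: the database name, host names, ports, and list names are assumptions, and the exact element and parameter names should be checked against the documentation for your client release.

```xml
<configuration>
 <databases>
  <!-- SAMPLE, host1/host2, and the port numbers are illustrative. -->
  <database name="SAMPLE" host="host1" port="50001">
   <acr>
    <parameter name="enableAcr" value="true"/>
    <!-- Each client is pinned to an ordered list of members,
         so its work is directed to a primary member first. -->
    <alternateserverlist>
     <server name="server1" hostname="host1" port="50001"/>
     <server name="server2" hostname="host2" port="50001"/>
    </alternateserverlist>
    <affinitylist>
     <list name="list1" serverorder="server1,server2"/>
    </affinitylist>
    <clientaffinitydefined>
     <client name="client1" hostname="clienthost1" listname="list1"/>
    </clientaffinitydefined>
   </acr>
  </database>
 </databases>
</configuration>
```

With a configuration of this shape, connections from clienthost1 are routed to server1 whenever it is available, which keeps that client's tables eligible for the NOT_SHARED state.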