Client-server processing model

Both local and remote application processes can work with the same database. A remote application initiates a database action from a machine other than the one on which the database server resides; a local application is attached directly to the database on the server machine.

How client connections are managed depends on whether the connection concentrator is on or off. The connection concentrator is on whenever the value of the max_connections database manager configuration parameter is larger than the value of the max_coordagents configuration parameter.
  • If the connection concentrator is off, each client application is assigned a unique engine dispatchable unit (EDU) called a coordinator agent that coordinates the processing for that application and communicates with it.
  • If the connection concentrator is on, each coordinator agent can manage many client connections, one at a time, and might coordinate the other worker agents to do this work. For internet applications with many relatively transient connections, or applications with many relatively small transactions, the connection concentrator improves performance by allowing many more client applications to be connected concurrently. It also reduces system resource use for each connection.
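Because the concentrator state follows directly from comparing these two parameters, it can be inspected and changed through the Db2 command-line processor. The commands below are a sketch; the parameter names come from the text above, but the numeric values are illustrative only and must be sized for your workload:

```shell
# Show the current database manager configuration, including
# MAX_CONNECTIONS and MAX_COORDAGENTS.
db2 get dbm cfg

# Illustrative values only: making max_connections larger than
# max_coordagents switches the connection concentrator on.
db2 update dbm cfg using MAX_CONNECTIONS 1000 MAX_COORDAGENTS 200
```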
In Figure 1, each circle in the Db2® server represents an EDU that is implemented using operating system threads.
Figure 1. Client-server processing model overview
  • At A1, a local client establishes communications through db2ipccm.
  • At A2, db2ipccm works with a db2agent EDU, which becomes the coordinator agent for application requests from the local client.
  • At A3, the coordinator agent contacts the client application to establish shared memory communications between the client application and the coordinator.
  • At A4, the application at the local client connects to the database.
  • At B1, a remote client establishes communications through db2tcpcm. If another communications protocol was chosen, the appropriate communications manager is used.
  • At B2, db2tcpcm works with a db2agent EDU, which becomes the coordinator agent for the application; db2tcpcm then passes the connection to this agent.
  • At B4, the coordinator agent contacts the remote client application.
  • At B5, the remote client application connects to the database.
Note also that:
  • Worker agents carry out application requests. There are four types of worker agents: active coordinator agents, active subagents, associated subagents, and idle agents.
  • Each client connection is linked to an active coordinator agent.
  • In a partitioned database environment, or an environment in which intrapartition parallelism is enabled, the coordinator agents distribute database requests to subagents (db2agntp).
  • There is an agent pool (db2agent) where idle agents wait for new work.
  • Other EDUs manage client connections, logs, two-phase commit operations, backup and restore operations, and other tasks.
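One way to observe the EDUs described above on a running server is the db2pd diagnostic tool that ships with Db2; this is an illustrative invocation, and the exact output columns vary by release:

```shell
# List the engine dispatchable units (EDUs) and their thread IDs
# for the current instance; agent EDUs appear with names such as
# db2agent, db2agntp, db2loggr, and db2pclnr.
db2pd -edus
```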
Figure 2 shows additional EDUs that are part of the server machine environment. Each active database has its own shared pool of prefetchers (db2pfchr) and page cleaners (db2pclnr), and its own logger (db2loggr) and deadlock detector (db2dlock).
Figure 2. EDUs in the database server

Fenced user-defined functions (UDFs) and stored procedures, which are not shown in the figure, are managed to minimize costs that are associated with their creation and destruction. The default value of the keepfenced database manager configuration parameter is YES, which keeps the stored procedure process available for reuse at the next procedure call.
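The keepfenced behavior described above is controlled through the database manager configuration; the following commands are a sketch of the two settings (the parameter name is from the text, and the trade-off is as described):

```shell
# Default is YES: the fenced-mode process is kept after a fenced
# routine completes, avoiding process-creation cost on the next call.
db2 update dbm cfg using KEEPFENCED YES

# NO releases the process after each call, trading repeated
# creation cost for lower idle resource use.
db2 update dbm cfg using KEEPFENCED NO
```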

Note: Unfenced UDFs and stored procedures run directly in an agent's address space for better performance. However, because they have unrestricted access to the agent's address space, they must be rigorously tested before being used.
Figure 3 shows the similarities and differences between the single database partition processing model and the multiple database partition processing model.
Figure 3. Process model for multiple database partitions

In a multiple database partition environment, the database partition on which the CREATE DATABASE command was issued is called the catalog database partition. It is on this database partition that the system catalog tables are stored. The system catalog is a repository of all of the information about objects in the database.

As shown in Figure 3, because Application A creates the PROD database on Node0000, the catalog for the PROD database is also created on this database partition. Similarly, because Application B creates the TEST database on Node0001, the catalog for the TEST database is created on this database partition. It is a good idea to create your databases on different database partitions to balance the extra activity that is associated with the catalog for each database across the database partitions in your environment.
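As a sketch of this placement rule, the commands below create the two databases from different logical partitions so that each catalog lands on a different partition. The database names match Figure 3; DB2NODE is the environment variable that selects the logical partition for a session, and db2 terminate is required for the change to take effect:

```shell
# Issue CREATE DATABASE from partition 0, so the PROD catalog
# partition is partition 0.
export DB2NODE=0
db2 terminate
db2 create database PROD

# Create TEST from partition 1 so its catalog lands on a
# different partition, spreading catalog activity.
export DB2NODE=1
db2 terminate
db2 create database TEST
```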

There are additional EDUs (db2pdbc and db2fcmd) that are associated with the instance, and these are found on each database partition in a multiple database partition environment. These EDUs are needed to coordinate requests across database partitions and to enable the fast communication manager (FCM).

There is an additional EDU (db2glock) that is associated with the catalog database partition. This EDU controls global deadlocks across the database partitions on which the active database is located.

Each connect request from an application is represented by a connection that is associated with a coordinator agent. The coordinator agent is the agent that communicates with the application, receiving requests and sending replies. It can satisfy a request itself or coordinate multiple subagents to work on the request. The database partition on which the coordinator agent resides is called the coordinator database partition of that application.

Parts of the database requests from an application are sent by the coordinator database partition to subagents at the other database partitions. All of the results are consolidated at the coordinator database partition before being sent back to the application.

Any number of database partitions can be configured to run on the same machine. This is known as a multiple logical partition configuration. Such a configuration is useful on large symmetric multiprocessor (SMP) machines with very large main memory. In this environment, communications between database partitions can be optimized to use shared memory and semaphores.
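A multiple logical partition configuration is defined in the db2nodes.cfg file, where each line holds a partition number, a host name, and a logical port number. The sketch below writes an illustrative file (the host name is hypothetical) describing four logical partitions on one SMP host; partitions that share a host are distinguished by the logical port and can communicate through shared memory:

```shell
# Illustrative db2nodes.cfg: four logical database partitions on a
# single SMP host, distinguished by the logical port in column 3.
cat > db2nodes.cfg <<'EOF'
0 bigsmp.example.com 0
1 bigsmp.example.com 1
2 bigsmp.example.com 2
3 bigsmp.example.com 3
EOF
```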