Format of the Db2 node configuration file
The db2nodes.cfg file is used to define the database partition servers that participate in a Db2 instance. The db2nodes.cfg file is also used to specify the IP address or host name of a high-speed interconnect, if you want to use a high-speed interconnect for database partition server communication.
The format of the db2nodes.cfg file on Linux and UNIX operating systems is:

    dbpartitionnum hostname logicalport netname resourcesetname

dbpartitionnum, hostname, logicalport, netname, and resourcesetname are defined in the following section.

The format of the db2nodes.cfg file on Windows operating systems is:

    dbpartitionnum hostname computername logicalport netname resourcesetname
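For illustration (a sketch only; ServerA, switch1, and DB2/MLN1 are hypothetical names), a Linux or UNIX entry that fills in all five columns might look like this:

    0 ServerA 0 switch1 DB2/MLN1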
On Windows operating systems, these entries are added to the db2nodes.cfg file by the db2ncrt or START DBM ADD DBPARTITIONNUM commands. The entries can also be modified by the db2nchg command. Do not add these lines directly or edit this file yourself; a hedged sketch of a db2ncrt invocation follows.
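For reference, adding a new database partition server with db2ncrt might look like the following sketch. The instance, machine, account, and port values are hypothetical; consult the db2ncrt command reference for the options that apply to your system.

    rem Hypothetical example: add database partition server 2 on machine ServerB
    rem (/n: partition number, /u: domain account and password, /i: instance,
    rem  /m: machine, /p: logical port, /o: instance-owning computer)
    db2ncrt /n:2 /u:MYDOMAIN\db2admin,password /i:DB2 /m:ServerB /p:0 /o:ServerA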
- dbpartitionnum
- A unique number, between 0 and 999,
that identifies a database partition server in a partitioned database
system.
To scale your partitioned database system, add an entry for each database partition server to the db2nodes.cfg file. The dbpartitionnum values that you select for additional database partition servers must be in ascending order; however, gaps can exist in the sequence. You can choose to leave gaps between dbpartitionnum values if you plan to add logical partition servers later and want to keep the nodes logically grouped in this file (see the sketch after this entry).
This entry is required.
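For example (a sketch; the host names and the gap size are hypothetical), numbering physical servers in steps of 10 leaves dbpartitionnum values 1 - 9 free for logical partition servers that might later be added under ServerA:

    0  ServerA 0
    10 ServerB 0
    20 ServerC 0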
- hostname
- The TCP/IP host name of the database partition server for use by the FCM. This entry is required. The canonical host name is strongly recommended.
When the system has more than one network interface card installed and the hostname used in the db2nodes.cfg file cannot be resolved to the default host of the system, it might be treated as a remote host. This setup imposes a limitation: database migration cannot complete successfully because the local database directory cannot be found if the instance is not started. Therefore, HADR might require the hostname to match the name that the operating system uses to identify the host so that migration is possible. In addition, the operating system name of the host must be specified in db2nodes.cfg when the instance runs in a Tivoli® SA MP, PowerHA® SystemMirror®, or other high availability environment, including the Db2 fault monitor. You can compare the names as shown below.
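For a quick check (a minimal sketch; the host name and single-partition configuration are illustrative, and the file is assumed to be in the instance home under sqllib on Linux and UNIX), compare the operating system host name with the db2nodes.cfg entry:

    $ hostname
    ServerA
    $ cat ~/sqllib/db2nodes.cfg
    0 ServerA 0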
Starting with Db2 Version 9.1, both TCP/IPv4 and TCP/IPv6 protocols are supported, and the method used to resolve host names has changed. While pre-Version 9.1 releases resolve the string exactly as defined in the db2nodes.cfg file, Version 9.1 and later try to resolve the Fully Qualified Domain Name (FQDN) when short names are defined in the db2nodes.cfg file. Specifying short names on machines that are configured for fully qualified host names can therefore lead to unnecessary delays in processes that resolve host names.
To avoid delays in Db2 commands that require host name resolution, use any of the following workarounds:
- If short names are specified in the db2nodes.cfg file and in the operating system host name file, specify both the short name and the fully qualified domain name for the host in the operating system host files, as in the sketch that follows.
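For example (a sketch; the address and names are hypothetical, reusing the address from the catalog examples below), an /etc/hosts entry that lists both forms might look like this:

    # Hypothetical entry: fully qualified name first, then the short name
    192.0.32.67   ServerA.example.com   ServerA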
- To use only IPv4 addresses when you know that the Db2 server listens on an
IPv4 port, issue the following command:
db2 catalog tcpip4 node db2tcp2 remote 192.0.32.67 server db2inst1 with "Look up IPv4 address from 192.0.32.67"
- To use only IPv6 addresses when you know that the Db2 server listens on an
IPv6 port, issue the following command:
db2 catalog tcpip6 node db2tcp3 remote 1080:0:0:0:8:800:200C:417A server 50000 with "Look up IPv6 address from 1080:0:0:0:8:800:200C:417A"
- logicalport
- Specifies the logical port number for the database partition server. This field is used to
specify a particular database partition server on a workstation that is running logical database
partition servers.
Db2 reserves a port range (for example, 60000 - 60003) in the /etc/services file for interpartition communications at the time of installation. This logicalport field in db2nodes.cfg specifies which port in that range you want to assign to a particular logical partition server.
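For instance (a sketch; the service names and port numbers depend on your instance name and installation), the reserved range might appear in /etc/services as follows:

    # Hypothetical FCM port range reserved at installation for instance db2inst1
    DB2_db2inst1      60000/tcp
    DB2_db2inst1_1    60001/tcp
    DB2_db2inst1_2    60002/tcp
    DB2_db2inst1_END  60003/tcp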
If there is no entry for this field, the default is 0. However, if you add an entry for the netname field, you must enter a number for the logicalport field.
If you are using logical database partitions, the logicalport value you specify must start at 0 and continue in ascending order (for example, 0,1,2).
Furthermore, if you specify a logicalport entry for one database partition server, you must specify a logicalport for each database partition server listed in your db2nodes.cfg file.
Each physical server must have a logical node 0.
This field is optional only if you are not using logical database partitions or a high speed interconnect.
- netname
- Specifies the host name or the IP address of the high speed interconnect
for FCM communication.
If an entry is specified for this field, all communication between database partition servers (except for communications as a result of the db2start, db2stop, and db2_all commands) is handled through the high speed interconnect.
This parameter is required only if you are using a high speed interconnect for database partition communications.
- resourcesetname
- The resourcesetname defines the operating system resource in which the node is started. The resourcesetname provides process affinity support for Multiple Logical Nodes (MLNs), through a string type field formerly known as quadname.
This parameter is only supported on AIX®, HP-UX, and Solaris Operating System.
On AIX, this concept is known as "resource sets" and on Solaris Operating System it is called "projects". Refer to your operating system's documentation for more information about resource management.
On HP-UX, the resourcesetname parameter is the name of a PRM group. Refer to "HP-UX Process Resource Manager. User Guide. (B8733-90007)" documentation from HP for more information.
On Windows operating systems, process affinity for a logical node can be defined through the DB2PROCESSORS registry variable.
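For example (a sketch only; the processor list is hypothetical, and the per-partition scoping options of db2set should be checked in the db2set command reference), you might bind a logical node to processors 0 and 1 with:

    rem Bind the logical node's processors (hypothetical processor list)
    db2set DB2PROCESSORS=0,1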
On Linux operating systems, the resourcesetname column defines a number that corresponds to a Non-Uniform Memory Access (NUMA) node on the system. The system utility numactl must be available as well as a 2.6 Kernel with NUMA policy support.
The netname parameter must be specified if the resourcesetname parameter is used.
Example configurations
Use the following example configurations to determine the appropriate configuration for your environment.
- One computer, four database partition servers
- If you are not using a clustered environment and want four database partition servers on one physical workstation called ServerA, update the db2nodes.cfg file as follows:

    0 ServerA 0
    1 ServerA 1
    2 ServerA 2
    3 ServerA 3
- Two computers, one database partition server per computer
- If you want your partitioned database system to contain two physical workstations, called ServerA and ServerB, update the db2nodes.cfg file as follows:

    0 ServerA 0
    1 ServerB 0
- Two computers, three database partition servers on one computer
- If you want your partitioned database system to contain two physical workstations, called ServerA and ServerB, and ServerA is running 3 database partition servers, update the db2nodes.cfg file as follows:

    4 ServerA 0
    6 ServerA 1
    8 ServerA 2
    9 ServerB 0
- Two computers, three database partition servers with high speed switches
- If you want your partitioned database system to contain two computers, called ServerA and ServerB (with ServerB running two database partition servers), and to use high speed interconnects called switch1 and switch2, update the db2nodes.cfg file as follows:

    0 ServerA 0 switch1
    1 ServerB 0 switch2
    2 ServerB 1 switch2
Examples using resourcesetname
- This example shows the usage of resourcesetname when there is no high speed interconnect in the configuration.
- The netname is the fourth column, and a hostname can also be specified in that column when there is no switch name and you want to use resourcesetname. The fifth parameter is the resourcesetname, if it is defined. The resource group specification can appear only in the fifth column of the db2nodes.cfg file, which means that to specify a resource group you must also enter a fourth column. The fourth column is intended for a high speed switch.
- If you do not have a high speed switch, or do not want to use it, enter the hostname there (the same value as the second column). In other words, the Db2 database management system does not support gaps between columns (or interchanging them) in the db2nodes.cfg files. This restriction already applies to the first three columns, and now it applies to all five columns.
AIX example
Here is an example of how to set up the resource set for AIX operating systems.
In this example, there is one physical node with 32 processors and 8 logical database partitions (MLNs). This example shows how to provide process affinity to each MLN.
- Define resource sets by using the AIX mkrset command:
    mkrset -c 0 1 2 3 DB2/MLN1
    mkrset -c 4 5 6 7 DB2/MLN2
    mkrset -c 8 9 10 11 DB2/MLN3
    mkrset -c 12 13 14 15 DB2/MLN4
    mkrset -c 16 17 18 19 DB2/MLN5
    mkrset -c 20 21 22 23 DB2/MLN6
    mkrset -c 24 25 26 27 DB2/MLN7
    mkrset -c 28 29 30 31 DB2/MLN8
- Give instance permissions to use resource sets:
    chuser capabilities=CAP_BYPASS_RAC_VMM,CAP_PROPAGATE,CAP_NUMA_ATTACH db2inst1
- Add the resource set name as the fifth column in db2nodes.cfg:
    1 regatta 0 regatta DB2/MLN1
    2 regatta 1 regatta DB2/MLN2
    3 regatta 2 regatta DB2/MLN3
    4 regatta 3 regatta DB2/MLN4
    5 regatta 4 regatta DB2/MLN5
    6 regatta 5 regatta DB2/MLN6
    7 regatta 6 regatta DB2/MLN7
    8 regatta 7 regatta DB2/MLN8
HP-UX example
- Edit GROUP section of /etc/prmconf:
    OTHERS:1:4::
    db2prm1:50:24::
    db2prm2:51:24::
    db2prm3:52:24::
    db2prm4:53:24::
- Add instance owner entry to /etc/prmconf:
db2inst1::::OTHERS,db2prm1,db2prm2,db2prm3,db2prm4
- Initialize groups and enable CPU manager by entering the following command:
    prmconfig -i
    prmconfig -e CPU
- Add PRM group names as a fifth column to db2nodes.cfg:
    1 voyager 0 voyager db2prm1
    2 voyager 1 voyager db2prm2
    3 voyager 2 voyager db2prm3
    4 voyager 3 voyager db2prm4
Linux example
On Linux operating systems, the resourcesetname column defines a number that corresponds to a Non-Uniform Memory Access (NUMA) node on the system. The numactl system utility must be available in addition to a 2.6 kernel with NUMA policy support. Refer to the man page for numactl for more information about NUMA support on Linux operating systems.
This example shows how to set up a four node NUMA computer with each logical node associated with a NUMA node.
- Ensure that NUMA capabilities exist on your system.
- Issue the following command:

    $ numactl --hardware

  Output similar to the following displays:

    available: 4 nodes (0-3)
    node 0 size: 1901 MB
    node 0 free: 1457 MB
    node 1 size: 1910 MB
    node 1 free: 1841 MB
    node 2 size: 1910 MB
    node 2 free: 1851 MB
    node 3 size: 1905 MB
    node 3 free: 1796 MB
- In this example, there are four NUMA nodes on the system. Edit
the db2nodes.cfg file as follows to associate
each MLN with a NUMA node on the system:
    0 hostname 0 hostname 0
    1 hostname 1 hostname 1
    2 hostname 2 hostname 2
    3 hostname 3 hostname 3
Solaris example
Here is an example of how to set up the project for Solaris Version 9.
In this example, there is one physical node with eight processors: one CPU is used for the default project, three CPUs by the Application Server, and four CPUs for Db2. The instance name is db2inst1.
- Create a resource pool configuration file using an editor. For
this example, the file will be called pool.db2.
Here's the content:
    create system hostname
    create pset pset_default (uint pset.min = 1)
    create pset db0_pset (uint pset.min = 1; uint pset.max = 1)
    create pset db1_pset (uint pset.min = 1; uint pset.max = 1)
    create pset db2_pset (uint pset.min = 1; uint pset.max = 1)
    create pset db3_pset (uint pset.min = 1; uint pset.max = 1)
    create pset appsrv_pset (uint pset.min = 3; uint pset.max = 3)
    create pool pool_default (string pool.scheduler="TS"; boolean pool.default = true)
    create pool db0_pool (string pool.scheduler="TS")
    create pool db1_pool (string pool.scheduler="TS")
    create pool db2_pool (string pool.scheduler="TS")
    create pool db3_pool (string pool.scheduler="TS")
    create pool appsrv_pool (string pool.scheduler="TS")
    associate pool pool_default (pset pset_default)
    associate pool db0_pool (pset db0_pset)
    associate pool db1_pool (pset db1_pset)
    associate pool db2_pool (pset db2_pset)
    associate pool db3_pool (pset db3_pset)
    associate pool appsrv_pool (pset appsrv_pset)
- Edit the /etc/project file to add the Db2 projects and appsrv
project as follows:
    system:0::::
    user.root:1::::
    noproject:2::::
    default:3::::
    group.staff:10::::
    appsrv:4000:App Serv project:root::project.pool=appsrv_pool
    db2proj0:5000:DB2 Node 0 project:db2inst1,root::project.pool=db0_pool
    db2proj1:5001:DB2 Node 1 project:db2inst1,root::project.pool=db1_pool
    db2proj2:5002:DB2 Node 2 project:db2inst1,root::project.pool=db2_pool
    db2proj3:5003:DB2 Node 3 project:db2inst1,root::project.pool=db3_pool
- Create the resource pool:
# poolcfg -f pool.db2
- Activate the resource pool:
# pooladm -c
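Optionally (a sketch; the output varies by system), display the active pool configuration to confirm that the pools were created:

    # pooladm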
- Add the project name as the fifth column in db2nodes.cfg file:
    0 hostname 0 hostname db2proj0
    1 hostname 1 hostname db2proj1
    2 hostname 2 hostname db2proj2
    3 hostname 3 hostname db2proj3