Configuring a TCP/IP network (Linux)
The topics in this section detail how to configure a Transmission Control Protocol/Internet Protocol (TCP/IP) network over Ethernet.
No additional hardware, firmware, or software is required to install the Db2® pureScale® Feature on a TCP/IP network. The only requirement is a network through which all the hosts can reach each other, and all hosts must be on the same subnet. In addition, the /etc/hosts file entries on each of the hosts in the cluster must adhere to the following format: <IP_Address> <fully_qualified_name> <short_name>.
For example, in a 2 member/2 CF pureScale environment running on a TCP/IP network where there is both a public network and a private cluster interconnect network, the /etc/hosts configuration file might resemble the following:
184.108.40.206 member1.example.com member1
220.127.116.11 member2.example.com member2
18.104.22.168 cfhost1.example.com cfhost1
22.214.171.124 cfhost2.example.com cfhost2
10.1.2.1 member1-priv.example.com member1-priv
10.1.2.2 member2-priv.example.com member2-priv
10.1.2.3 cfhost1-priv.example.com cfhost1-priv
10.1.2.4 cfhost2-priv.example.com cfhost2-priv
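The required per-line layout can be sanity-checked with a short script. The following is a minimal sketch, assuming the private interconnect entries use the 10.1.2.x subnet from the example above; the sample file path /tmp/hosts.sample is illustrative, not part of the product.

```shell
# Write a sample fragment of the private-network entries and validate that
# each line has exactly three fields (IP, FQDN, short name) and that every
# address is on the same subnet. Values are the example entries from this page.
cat > /tmp/hosts.sample <<'EOF'
10.1.2.1 member1-priv.example.com member1-priv
10.1.2.2 member2-priv.example.com member2-priv
10.1.2.3 cfhost1-priv.example.com cfhost1-priv
10.1.2.4 cfhost2-priv.example.com cfhost2-priv
EOF

awk '{
  if (NF != 3) { print "bad format: " $0; bad = 1 }
  split($1, o, ".")
  if (o[1] "." o[2] "." o[3] != subnet) { print "wrong subnet: " $0; bad = 1 }
}
END {
  if (bad) exit 1
  print "all " NR " entries OK on subnet " subnet
}' subnet=10.1.2 /tmp/hosts.sample
```

Run the same check against the real /etc/hosts on each host before continuing with the installation.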
As a best practice, use a TCP/IP network of 10 Gb or faster. However, if the workload has only modest network usage requirements, you can enable the DB2_SD_ALLOW_SLOW_NETWORK registry variable to prevent the Db2 product from blocking the use of a network slower than 10 Gb.
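The registry variable is set with the db2set command as the instance owner; a sketch, noting that registry variable changes typically take effect only after the instance is restarted:

```shell
# Allow the pureScale instance to run on a network slower than 10 Gb.
db2set DB2_SD_ALLOW_SLOW_NETWORK=YES

# Verify the setting, then restart the instance so it takes effect.
db2set -all
db2stop
db2start
```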
Set up your TCP/IP network as you normally would, place all hosts on the same subnet, and test host name resolution and connectivity between the hosts.
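The resolution and connectivity test can be scripted; the following sketch uses the example host names from this page (member1-priv, and so on), which you would replace with your own cluster's names.

```shell
# Check that a host name resolves and answers a single ping
# (2-second timeout); returns nonzero on either failure.
check_host() {
  getent hosts "$1" > /dev/null && ping -c 1 -W 2 "$1" > /dev/null 2>&1
}

# Example private-interconnect names from this page; substitute your own.
for h in member1-priv member2-priv cfhost1-priv cfhost2-priv; do
  check_host "$h" || echo "WARNING: cannot resolve or reach $h"
done
```

Run the loop from every host in the cluster, since each host must be able to reach all of the others.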
If multiple adapter ports are assigned to each member or CF, ensure that those network interfaces are bonded so that only the bonded interface appears in the NETNAME column of the db2instance -list output. All NETNAME values listed in the output must be in the same IP subnet. For a geographically dispersed Db2 pureScale cluster (GDPC), this single IP subnet is mandatory for setting up IBM Spectrum Scale, as described in Getting the cluster installed and running in a GDPC environment.