Network topology configuration support for Db2 pureScale environments
Requirements for using multiple communication adapter ports
- Multiple communication adapter ports are supported on SLES, RHEL, and AIX® on RoCE networks and on InfiniBand networks with RDMA. For multiple communication adapter ports using the TCP/IP protocol over Ethernet, adapter ports on each member and CF must be bonded to form a single network interface. Note:
- The VLAN ID must be the same across all the switches used in each particular cluster.
- Currently, Db2 pureScale only supports RoCE v1.
- For an optimal high availability and performance configuration for production systems, each member must reside in its own host or LPAR.
- The maximum number of communication adapter ports supported is four. The two validated and supported configurations for using multiple communication adapter ports are:
- Four physical communication adapters, with one adapter port used by the CF or member on each adapter.
- Two physical communication adapters, with two adapter ports on each adapter used by the CF or member.
Note: You can enhance adapter high availability by using multiple physical communication adapters to connect to more than one switch. Using multiple communication adapter ports improves throughput.
- During installation and configuration, the cluster interconnect netnames that you specify in the Db2 Setup wizard, or with the db2icrt and db2iupdt commands, are recorded in the node configuration file, db2nodes.cfg. Host names that are not selected are not listed in db2nodes.cfg.
- At least one switch is required in a Db2 pureScale environment.
- Two switches are required to support switch failover in a Db2 pureScale environment.
- IP subnets
- Each communication adapter port must be on a different subnetwork, also referred to as a subnet.
- If all CFs and members have an equal number of communication adapter ports, each CF or member must be on the same set of subnets.
- If one CF or member has fewer adapter ports than another, the one with more adapter ports must be on all the subnets that the CF or member with fewer adapter ports is on.
- If your members have only a single adapter, the communication adapter ports on all members must be on the same IP subnet. For simplicity, use the IP subnet of the first communication adapter port of the CF. Members do not need to be on different IP subnets for availability reasons (adapter or switch failure), because the high-speed communication between members and CFs through the switches uses a different address resolution protocol than traditional interconnects (for example, Ethernet).
- If you have multiple adapters on members and CFs, see Figure 2.
- The netmask must be the same for all CFs and members.
- Communication adapter ports that are used by applications other than Db2 applications must use a different subnet than any member or CF on the host.
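The subnet rules above can be checked mechanically. The following sketch uses Python's standard `ipaddress` module with illustrative host and address values (taken from the one-switch example table later in this topic); it is not part of any Db2 tooling.

```python
# Sketch: validate the IP subnet rules with Python's ipaddress module.
# Hosts, interfaces, and addresses below are illustrative example values.
import ipaddress

# (host, interface) -> IP address with netmask
adapters = {
    ("PrimaryCF", "eth0"): "10.111.0.1/255.255.255.0",
    ("PrimaryCF", "eth1"): "10.111.1.1/255.255.255.0",
    ("Member0",   "eth0"): "10.111.0.101/255.255.255.0",
    ("Member0",   "eth1"): "10.111.1.101/255.255.255.0",
}

interfaces = {k: ipaddress.ip_interface(v) for k, v in adapters.items()}

# Rule: the netmask must be the same for all CFs and members.
netmasks = {iface.network.netmask for iface in interfaces.values()}
assert len(netmasks) == 1, "all adapters must use the same netmask"

# Rule: each communication adapter port on a host must be on a different subnet.
for host in {h for h, _ in interfaces}:
    subnets = [iface.network for (h, _), iface in interfaces.items() if h == host]
    assert len(subnets) == len(set(subnets)), f"{host}: two ports share a subnet"

# Rule: with equal port counts, each member must be on the same set of
# subnets as the CF.
cf_subnets = {iface.network for (h, _), iface in interfaces.items() if h == "PrimaryCF"}
m0_subnets = {iface.network for (h, _), iface in interfaces.items() if h == "Member0"}
assert m0_subnets <= cf_subnets, "Member0 uses a subnet the CF is not on"
```

Running such a check before instance creation can catch a mistyped netmask or a duplicated subnet early, when it is still cheap to fix.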
One-switch configuration with multiple RDMA communication adapter ports
All CF and member hosts in a one-switch configuration are connected to the same switch by multiple communication adapter ports. The one-switch configuration is the simplest Db2 pureScale environment with multiple communication adapter ports to set up. The redundant communication adapter ports connecting each CF or member to the switch increase bandwidth, and the redundant adapter ports improve fault tolerance if one of the links fails. As long as a CF or member has one functional communication adapter port and a public Ethernet connection, the CF or member remains operational. The following table shows an example one-switch network topology with multiple communication adapter ports on each CF and member.
Host | Adapter port | Network interface name | Cluster interconnect netname | IP address | Subnetwork mask (Netmask) | Subnet |
---|---|---|---|---|---|---|
PrimaryCF | 0 | eth0 | PrimaryCF-netname1 | 10.111.0.1 | 255.255.255.0 | 10.111.0.0 |
PrimaryCF | 1 | eth1 | PrimaryCF-netname2 | 10.111.1.1 | 255.255.255.0 | 10.111.1.0 |
PrimaryCF | 0 | eth2 | PrimaryCF-netname3 | 10.111.2.1 | 255.255.255.0 | 10.111.2.0 |
PrimaryCF | 1 | eth3 | PrimaryCF-netname4 | 10.111.3.1 | 255.255.255.0 | 10.111.3.0 |
SecondaryCF | 0 | eth0 | SecondaryCF-netname1 | 10.111.0.2 | 255.255.255.0 | 10.111.0.0 |
SecondaryCF | 1 | eth1 | SecondaryCF-netname2 | 10.111.1.2 | 255.255.255.0 | 10.111.1.0 |
SecondaryCF | 0 | eth2 | SecondaryCF-netname3 | 10.111.2.2 | 255.255.255.0 | 10.111.2.0 |
SecondaryCF | 1 | eth3 | SecondaryCF-netname4 | 10.111.3.2 | 255.255.255.0 | 10.111.3.0 |
Member0 | 0 | eth0 | Member0-netname1 | 10.111.0.101 | 255.255.255.0 | 10.111.0.0 |
Member0 | 1 | eth1 | Member0-netname2 | 10.111.1.101 | 255.255.255.0 | 10.111.1.0 |
Member1 | 0 | eth0 | Member1-netname1 | 10.111.0.102 | 255.255.255.0 | 10.111.0.0 |
Member1 | 1 | eth1 | Member1-netname2 | 10.111.1.102 | 255.255.255.0 | 10.111.1.0 |
Member2 | 0 | eth0 | Member2-netname1 | 10.111.0.103 | 255.255.255.0 | 10.111.0.0 |
Member2 | 1 | eth1 | Member2-netname2 | 10.111.1.103 | 255.255.255.0 | 10.111.1.0 |
Member3 | 0 | eth0 | Member3-netname1 | 10.111.0.104 | 255.255.255.0 | 10.111.0.0 |
Member3 | 1 | eth1 | Member3-netname2 | 10.111.1.104 | 255.255.255.0 | 10.111.1.0 |
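The Subnet column in the table is not an independent value: it is the network address obtained by applying the netmask to the IP address. A short sketch, using rows from the table above, shows the derivation:

```python
# Sketch: derive the Subnet column from the IP address and Netmask columns.
import ipaddress

# (cluster interconnect netname, IP address, netmask) from the example table
rows = [
    ("PrimaryCF-netname1", "10.111.0.1",   "255.255.255.0"),
    ("PrimaryCF-netname4", "10.111.3.1",   "255.255.255.0"),
    ("Member0-netname1",   "10.111.0.101", "255.255.255.0"),
]

for netname, ip, netmask in rows:
    network = ipaddress.ip_interface(f"{ip}/{netmask}").network
    # network.network_address is the value shown in the Subnet column.
    print(netname, network.network_address)
```

With a 255.255.255.0 netmask, the first three octets of the address identify the subnet, which is why eth0 on every host lands on 10.111.0.0 while eth1 lands on 10.111.1.0.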
Two-switch configuration with multiple RDMA communication adapter ports
- Half of the communication adapter ports must be connected to each switch.
- The switches must be connected to each other by two or more inter-switch links. To improve bandwidth and fault tolerance, connect the two switches together with half the total number of cables that connect the CFs and members to the switches.
- Switch failover capability must be configured for the switch so that if one switch fails, the surviving switch and hosts connected to it are not impacted.
- Distribute the members evenly between the switches so that each switch is cabled to the same number of members.
- If an adapter of a CF or member fails, it can still communicate with each switch through the other surviving adapter, and a subsequent switch failure would not bring down the Db2 pureScale environment.
- If a switch fails, a subsequent adapter failure on a CF would still leave the primary and secondary CF intact.
Host | Adapter port | Network interface name | Cluster interconnect netname | Connected to switch | IP address | Subnetwork mask (Netmask) | Subnet |
---|---|---|---|---|---|---|---|
PrimaryCF | 0 | eth0 | PrimaryCF-netname1 | 1 | 10.222.0.1 | 255.255.255.0 | 10.222.0.0 |
PrimaryCF | 1 | eth1 | PrimaryCF-netname2 | 2 | 10.222.1.1 | 255.255.255.0 | 10.222.1.0 |
PrimaryCF | 0 | eth2 | PrimaryCF-netname3 | 1 | 10.222.2.1 | 255.255.255.0 | 10.222.2.0 |
PrimaryCF | 1 | eth3 | PrimaryCF-netname4 | 2 | 10.222.3.1 | 255.255.255.0 | 10.222.3.0 |
SecondaryCF | 0 | eth0 | SecondaryCF-netname1 | 1 | 10.222.0.2 | 255.255.255.0 | 10.222.0.0 |
SecondaryCF | 1 | eth1 | SecondaryCF-netname2 | 2 | 10.222.1.2 | 255.255.255.0 | 10.222.1.0 |
SecondaryCF | 0 | eth2 | SecondaryCF-netname3 | 1 | 10.222.2.2 | 255.255.255.0 | 10.222.2.0 |
SecondaryCF | 1 | eth3 | SecondaryCF-netname4 | 2 | 10.222.3.2 | 255.255.255.0 | 10.222.3.0 |
Member0 | 0 | eth0 | Member0-netname1 | 1 | 10.222.0.101 | 255.255.255.0 | 10.222.0.0 |
Member0 | 1 | eth1 | Member0-netname2 | 2 | 10.222.1.101 | 255.255.255.0 | 10.222.1.0 |
Member1 | 0 | eth0 | Member1-netname1 | 1 | 10.222.0.102 | 255.255.255.0 | 10.222.0.0 |
Member1 | 1 | eth1 | Member1-netname2 | 2 | 10.222.1.102 | 255.255.255.0 | 10.222.1.0 |
Member2 | 0 | eth0 | Member2-netname1 | 1 | 10.222.0.103 | 255.255.255.0 | 10.222.0.0 |
Member2 | 1 | eth1 | Member2-netname2 | 2 | 10.222.1.103 | 255.255.255.0 | 10.222.1.0 |
Member3 | 0 | eth0 | Member3-netname1 | 1 | 10.222.0.104 | 255.255.255.0 | 10.222.0.0 |
Member3 | 1 | eth1 | Member3-netname2 | 2 | 10.222.1.104 | 255.255.255.0 | 10.222.1.0 |
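The cabling rules above (half of each host's ports to each switch, members distributed evenly) can also be sanity-checked in code. The sketch below uses a subset of the two-switch table's rows; the cabling map is illustrative, not a required format:

```python
# Sketch: check the two-switch cabling rules against example table rows.
from collections import Counter

# (host, interface) -> switch number, from the two-switch example table
cabling = {
    ("PrimaryCF", "eth0"): 1, ("PrimaryCF", "eth1"): 2,
    ("PrimaryCF", "eth2"): 1, ("PrimaryCF", "eth3"): 2,
    ("Member0",   "eth0"): 1, ("Member0",   "eth1"): 2,
    ("Member1",   "eth0"): 1, ("Member1",   "eth1"): 2,
}

# Rule: half of each host's communication adapter ports go to each switch.
for host in {h for h, _ in cabling}:
    per_switch = Counter(sw for (h, _), sw in cabling.items() if h == host)
    assert per_switch[1] == per_switch[2], f"{host}: ports not split evenly"

# Rule: each switch is cabled to the same number of member ports.
member_ports = Counter(sw for (h, _), sw in cabling.items()
                       if h.startswith("Member"))
assert member_ports[1] == member_ports[2], "members not evenly distributed"
```

Because every host splits its ports across both switches, a single adapter failure still leaves a path to each switch, and a subsequent switch failure does not take the cluster down, as the bullets above describe.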
Configurations without multiple communication adapter ports
The following section is for illustration purposes. Configurations without multiple communication adapter ports do not offer redundancy at the adapter or switch level.
In Db2 pureScale environments without multiple communication adapter ports, all member and CF communication adapter ports must be on the same subnet. As additional members are added, more CF resources are required to handle member requests. If the frequency or duration of member waits on the CFs starts to affect the service level agreements of your applications, consider adopting a topology with multiple communication adapter ports.
Host | Adapter port | Network interface name | Cluster interconnect netname | IP address | Subnetwork mask (Netmask) | Subnet |
---|---|---|---|---|---|---|
PrimaryCF | 0 | eth0 | PrimaryCF-netname1 | 10.123.0.1 | 255.255.255.0 | 10.123.0.0 |
SecondaryCF | 0 | eth0 | SecondaryCF-netname1 | 10.123.0.2 | 255.255.255.0 | 10.123.0.0 |
Member0 | 0 | eth0 | Member0-netname | 10.123.0.101 | 255.255.255.0 | 10.123.0.0 |
Member1 | 0 | eth0 | Member1-netname | 10.123.0.102 | 255.255.255.0 | 10.123.0.0 |
Member2 | 0 | eth0 | Member2-netname | 10.123.0.103 | 255.255.255.0 | 10.123.0.0 |
Member3 | 0 | eth0 | Member3-netname | 10.123.0.104 | 255.255.255.0 | 10.123.0.0 |
Considerations for configuring adapter ports in a TCP/IP network
- The first network is used by clients to connect to the databases.
- The second network is the cluster interconnect, dedicated to CF and member communication.
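For illustration only, the cluster interconnect netnames must resolve to the interconnect addresses on every host; one common way is /etc/hosts entries. The netnames and addresses below are taken from the two-switch example table, not required values:

```
# /etc/hosts (cluster interconnect entries; illustrative values)
10.222.0.1    PrimaryCF-netname1
10.222.1.1    PrimaryCF-netname2
10.222.0.101  Member0-netname1
10.222.1.101  Member0-netname2
```

Client connections use host names on the first (public) network, which are kept separate from these interconnect netnames.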