Network planning
As you plan for installing systems in your data center, review the information about the network resources and your configuration options. The intended audience for this network planning information is your network administrator.
The initial setup of IBM Fusion HCI System involves connecting the appliance to your data center’s network. The appliance includes two high-speed switches that are connected to your core network through one port channel. This connection acts as the gateway between the IBM Fusion appliance and your network. It enables administration of the appliance and OpenShift®, and is also used for network traffic in and out of the cluster.
Your network team must prepare your network before the installation of the IBM Fusion appliance. An IBM Systems Service Representative (SSR) does the initial configuration of the appliance and, as a final step, connects the appliance to your pre-configured network by using the information that you provided. As such, it is important that you complete the network configuration before the SSR visit.
To download the worksheets, see IBM Storage Fusion HCI Installation worksheets.
When you fill out the worksheet, check with your network team whether the CIDR ranges that you plan to use are free on your network.
The network planning involves the following key steps:
- Configure DNS and DHCP.
- Configure the NTP server.
- Plan the connection between the IBM Fusion HCI appliance and your network switches.
- Plan the direct connection from the service node to your network switches.
Network differences between the storage options
There are two types of storage options available with IBM Fusion HCI System, namely Global Data Platform and Fusion Data Foundation.
You must be aware of the network differences between the Global Data Platform and the Fusion Data Foundation for better planning.
For additional details about the Fusion Data Foundation network architecture, see Multiple racks. For prerequisites for high-availability multi-rack and expansion racks with the FDF storage type (also applicable to multi-AZ racks), see Network for multi-rack HA.
Network for service node
- Network overview for service node
- The service node supports three types of networks, namely the provisioning network, the Bare Metal network, and the customer DC network. The provisioning network and the Bare Metal network are mandatory; the customer DC network is optional.
To connect the DC network directly to the service node, connect an RJ45 network cable to slot 4, port 4 of the service node. Port 4 of the quad-port 1 Gb OpenShift Container Platform NIC is used to connect the service node to the client network. A Cat5e (or better) Ethernet cable with RJ45 connectors is needed to make the connection from the service node to the client switch. IBM does not provide this cable, so you must supply a Cat5e cable, which is commonly available in any data center.
- Bare Metal network
The service node has two 100 Gb Bare Metal ports (with a Data Foundation rack), which connect internally to the high-speed switches of the rack. This network is used to reach the OpenShift cluster on the rack and is configured by the IBM Fusion HCI System installation process.
- Provisioning network
The service node has a two-port 1 Gb NIC that connects to the OpenShift provisioning network. The device comes pre-wired and pre-configured from the factory.
- Customer DC network
This network is used to access the service node directly for maintenance and out-of-band management purposes. It is recommended that you provide an IP address, gateway, and subnet mask to the SSR at the time of initial setup. Also, check that you filled in the network planning sheet with the right details.
The service node has a 1 GbE NIC that is used to connect to the customer data center for out-of-band access.
- Prerequisites
Ensure you meet the following prerequisites for the service node:
- A separate VLAN must be available for the Customer DC connectivity to the service node.
- Make sure to provide the DHCP-assigned IP address of the Bare Metal network to connect the
service node.
Note: The DHCP entry and the DNS forward and reverse lookup records are required for the service node, similar to the other OpenShift nodes.
- Ensure that firewall port 22 is open for SSH access. Ports 443, 3000, and 3900 are used for the stage 2 user interface. Ports 443 and 3900 are also used for IMM console access through port forwarding on the service node, if needed. A simple reachability check for these ports is sketched after this list.
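After the service node is installed, a quick way to confirm from your network that the required ports are not blocked by a firewall is a simple TCP probe. The following sketch is illustrative only: the hostname service-node.example.com is a placeholder for the DNS name or Bare Metal IP address of your service node, and it assumes a Linux host with bash and coreutils available.

```sh
# Placeholder hostname; replace with the DNS name or Bare Metal IP of your service node.
SERVICE_NODE=service-node.example.com

# Probe the SSH port and the stage 2 / IMM port-forwarding ports.
for port in 22 443 3000 3900; do
  if timeout 5 bash -c "echo > /dev/tcp/$SERVICE_NODE/$port" 2>/dev/null; then
    echo "port $port reachable"
  else
    echo "port $port blocked or closed"
  fi
done
```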
Network for multi-rack HA
- The provisioning network (VLAN 4091) and OpenShift Bare Metal network must be extended between the racks and must be on the same layer2 domain across all three racks or sites.
- The recommended uplink bandwidth is 25 Gb or higher.
- The latency must be less than 10 ms RTT (a basic latency check is sketched after these requirements). For more information, see Guidance for OCP Deployments Spanning Multiple Sites.
- For a Data Foundation multi-rack setup, ensure that you meet the following mandatory requirements for your data center switch setup:
- The provisioner VLAN 4091 must be extended on all the intermediate switches between the racks.
- The OpenShift Container Platform VLAN must be extended on all the intermediate switches between the racks.
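A basic latency check between the racks or sites can be done with a plain ICMP round-trip test; your network team may prefer dedicated measurement tooling. The address 192.0.2.20 below is a placeholder for a reachable node or gateway in the remote rack.

```sh
# Placeholder address of a node or gateway in the remote rack or site.
REMOTE=192.0.2.20

# Send 20 probes and check the avg value in the rtt summary line;
# it must stay below the 10 ms RTT requirement.
ping -c 20 "$REMOTE"
```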
Network planning and prerequisites for remote support connection
| Hostname / GEO | IP | Ports |
|---|---|---|
| aos.us.ihost.com | 72.15.208.234 | 443 |
| Americas Broker (4.0 Sessions) | 150.238.213.135 | 443 |
| Americas Broker (4.0 Sessions) | 72.15.223.60 | 443 |
| Americas Broker (4.0 Sessions) | 72.15.223.62 | 443 |
For a remote support connection to work, the customer must allow encrypted TLS outbound traffic to the server 72.15.223.60 on port 443, 150.238.213.135 on port 443, or 72.15.223.62 on port 443.
- 72.15.208.234 aos.us.ihost.com (hosted in North America, best for most geographies)
- 150.238.213.135 aosback.us.ihost.com (hosted in North America, best for most geographies)
- 72.15.223.60 aosrelay1.us.ihost.com (hosted in North America, best for most geographies)
- 72.15.223.62 aoshats.us.ihost.com (hosted in North America, best for most geographies)
Remote support connection automatically chooses the broker that provides the best end-to-end performance. All broker servers are available from all geographies, and performance is typically better from the server closest to the customer system.
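To confirm before the SSR visit that outbound TLS traffic to the brokers is allowed, you can attempt a TLS handshake from a host on the same network segment. This is only a connectivity sketch that uses the standard openssl client; it does not exercise the remote support protocol itself.

```sh
# Attempt a TLS handshake with each broker on port 443.
# A completed handshake (certificate chain printed) indicates that outbound 443 is open;
# a timeout or connection refusal indicates that the traffic is blocked.
for broker in 72.15.208.234 150.238.213.135 72.15.223.60 72.15.223.62; do
  echo "--- $broker ---"
  openssl s_client -connect "$broker":443 < /dev/null
done
```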
For more information about remote support, see Remote support.
Configure DNS and DHCP
Domain Name System (DNS) and Dynamic Host Configuration Protocol (DHCP) must be configured so that each node in the appliance has a hostname and IP address. Each node comes with a pre-configured MAC address, and also a MAC address for the bootstrap VM that provides a temporary control plane to orchestrate the installation of OpenShift cluster. IBM provides a list of all MAC addresses so that your server or network teams can configure DNS and DHCP entries for each OpenShift node, service node, and bootstrap VM. Create a DHCP entry for each MAC address, and then create forward and reverse DNS entries for each subsequent IP address.
For a full list of steps about setting your DHCP and DNS, see Setting up the DNS and DHCP for IBM Fusion HCI System.
Ensure that your DHCP server can provide infinite leases. Your DHCP server must provide a DHCP expiration time of 4294967295 seconds to properly set an infinite lease as specified by RFC 2131. If a lesser value is returned for the DHCP infinite lease time, the node reports an error and a permanent IP is not set for the node. In RHEL 8, dhcpd does not provide infinite leases. If you want your DHCP server to serve dynamic IP addresses with infinite lease times, use dnsmasq rather than dhcpd.
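If you use dnsmasq, the fragment below sketches what the per-node DHCP and DNS entries might look like. All values are placeholders: the MAC addresses, IP addresses, hostnames, domain, and subnet are examples only; take the real MAC list from IBM and the addresses from your planning worksheet.

```
# /etc/dnsmasq.d/fusion-hci.conf (illustrative values only)

# Serve only static reservations on this subnet.
dhcp-range=10.0.10.0,static

# Fixed address with an infinite lease for each node MAC address.
dhcp-host=aa:bb:cc:dd:ee:01,10.0.10.11,control-0,infinite
dhcp-host=aa:bb:cc:dd:ee:02,10.0.10.12,control-1,infinite

# Forward and reverse DNS records for the same hosts.
host-record=control-0.fusion.example.com,10.0.10.11
host-record=control-1.fusion.example.com,10.0.10.12
```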
Configure NTP server
Ensure that the network administrator configured an NTP server that is connected to your network.
IBM Fusion HCI System requires a connection to an NTP server so that time can be coordinated across the nodes in the OpenShift cluster.
Ensure that the NTP server is accessible on the network where the IBM Fusion HCI System is connected. Provide the IP address of the NTP server in the planning worksheet. It is needed by the IBM SSR to complete the initial configuration of the appliance.
You can provide multiple NTP servers as comma-separated values.
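Before the SSR visit, you can confirm from a host on the same network that the NTP server answers queries. The sketch below assumes the chrony client, which is common on RHEL-based systems, and uses a placeholder server address; substitute your actual NTP server and preferred client.

```sh
# One-shot query of the candidate NTP server (placeholder address); run with root privileges.
# chronyd prints the measured offset and exits non-zero if the server cannot be reached.
chronyd -Q 'server 192.0.2.123 iburst'
```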
MTU requirement for Backup & Restore Hub and Spoke configuration
- Ensure that the same MTU (maximum transmission unit) value is set on both clusters.
- Confirm with your network administrators that the planned MTU value is supported by your data center infrastructure.
- Ensure that all devices and routers between the two clusters are configured with the same MTU. A simple path MTU check is sketched after this list.
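One way to confirm that the path between the hub and spoke clusters carries the planned MTU is a do-not-fragment ping sized just under that MTU. The sketch below assumes a Linux host, an MTU of 9000, and a placeholder spoke address.

```sh
# Placeholder address of a node in the spoke cluster.
SPOKE=192.0.2.50

# -M do sets the Don't Fragment bit; 8972 bytes of ICMP payload + 28 bytes of headers = 9000 bytes.
# If any device on the path uses a smaller MTU, the ping fails with "message too long"
# or "frag needed" instead of returning replies.
ping -M do -s 8972 -c 5 "$SPOKE"
```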
Planning a connection between IBM Fusion HCI and your network
The IBM Fusion HCI System appliance contains two high-speed switches that are used to connect to your data center network. The storage and compute nodes that make up the appliance are connected to the data center network via the high-speed switches in the appliance. The nodes are not connected directly to the data center switches. As such, the connection between the appliance and the data center network is a switch-to-switch connection, not a node-to-switch connection. The appliance switches must be treated as leaf switches within your network, and so it is recommended to connect them to core switches.
The IBM Fusion HCI System switches should be connected to two data center switches for redundancy. Those switches must be configured to look like one logical switch via vPC (or an equivalent stacking technology). The stacking must be used to allow redundancy, upgradability, and higher bandwidth. The IBM Fusion HCI System high-speed switches use MLAG technology to stack the switches, which means that they appear as a single logical switch to your network. Link Aggregation Control Protocol (LACP) is an IEEE standard that is defined in IEEE 802.3ad to dynamically build an EtherChannel.
Here, ISL refers to Inter Switch Link.
- Rate setting: Fast
- Mode: Active-Active
For more information about the network cable requirements to connect the appliance switches to the data center switches, see Network cable and transceiver options.
LACP topology
Two ports (31 and 32) on each of the IBM Fusion HCI System high-speed switches are used to connect to the data center switches. Link aggregation is used to group multiple links into a single port channel with a bandwidth of all of the individual links combined. This configuration also provides redundancy and high availability, meaning that if any of the links or switches fail, traffic is automatically balanced between the remaining links.
It is recommended to use both ports 31 and 32 on each high-speed switch to connect to the data center switches as this results in a total of four links. The four links are aggregated into a single port channel with four times the bandwidth.
The recommendation is to use two data center switches and four ports. If you want to use only one data center switch, connect all four ports to that single switch; that is, Data center switch 1 and Data center switch 2 are combined into one switch.
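The configuration of the data center side depends on your switch vendor and software. As an illustration only, the following fragment shows roughly how the four uplinks might be aggregated on a pair of Cisco NX-OS switches running vPC; the interface names, the example VLAN IDs (100 for the OpenShift Customer VLAN, 3201 for the Storage VLAN), and the port-channel number (the default LAG ID of 250) are assumptions to adapt to your environment. The exclamation-mark lines are explanatory comments, not commands.

```
! Illustrative fragment for one of the two data center switches.
! Assumes a vPC domain and peer link are already configured between the pair.
interface Ethernet1/1-2
  description Uplinks to IBM Fusion HCI high-speed switch ports 31-32
  switchport mode trunk
  switchport trunk native vlan 1
  switchport trunk allowed vlan 100,3201
  channel-group 250 mode active
  lacp rate fast

interface port-channel250
  switchport mode trunk
  switchport trunk native vlan 1
  switchport trunk allowed vlan 100,3201
  vpc 250
```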
VLAN Planning
- OpenShift Customer VLAN – the default VLAN that is used to access OpenShift. You must provide a name and ID for this VLAN.
- Native VLAN – Handles discard traffic. Typically the native VLAN ID is 1, but verify your data center’s default ID.
- Storage VLAN – VLAN that is used by the internal 100G Storage Network. The default value is 3201, and it can be changed during the initial configuration.
The name and ID of the OpenShift Customer VLAN must be recorded in the planning worksheet because it is needed by the IBM SSR to complete the initial configuration of the appliance. Also, record the Native VLAN ID and the Storage VLAN ID if they are not set to their default values of 1 and 3201, respectively. A quick check of the VLANs that are trunked toward the appliance is sketched at the end of this section.
- VLAN 4091 is used by the internal Provisioning network.
- VLANs 3725-3999 are reserved by the internal switches.
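After the VLANs are defined, it is worth confirming on the data center side that the OpenShift Customer VLAN (and the Storage VLAN, if it is used externally) is allowed and forwarding on the trunk toward the appliance. The commands below are an illustration for Cisco NX-OS only; other vendors have equivalent show commands, and the VLAN ID 100 is a placeholder for your OpenShift Customer VLAN ID.

```
show interface trunk
show vlan id 100
```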
Planning for Hosted Control Plane
In the same CIDR range, you must have a set of free, available IP addresses (no DHCP or DNS entries are required). The number of IP addresses that is required is based on the number of Hosted Control Plane clusters that you plan to create in the rack.
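A ping sweep from a host on the same subnet is a reasonable first pass to confirm that candidate addresses are free; it does not detect hosts that drop ICMP, so also confirm against your IP address management records. The CIDR below is a placeholder, and the example assumes that nmap is available.

```sh
# Placeholder range; replace with the addresses that you plan to reserve
# for Hosted Control Plane clusters. -sn performs a ping scan without port scanning.
nmap -sn 192.0.2.32/28
```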
Planning and prerequisites for remote mount support
- For IBM Storage Scale Erasure Code Edition (ECE)
- The storage VLAN (default 3201) must be available and configured on the customer network. If the default storage VLAN 3201 is not available on the customer network, contact IBM Support to change the default VLAN on IBM Fusion HCI System.
- The default gateway from the storage VLAN must be configured and reachable from IBM Fusion HCI System.
- The routing must be in place on the customer network from the storage gateway to the external IBM Elastic Spectrum System.
- The IBM Elastic Spectrum System must also be configured to support MTU 9000, along with the other devices en route (routers and switches). The default MTU of IBM Fusion HCI System is 9000.
- For Red Hat® OpenShift Data Foundation
- The routing must be in place on the customer network from the storage gateway to the external IBM Elastic Spectrum System.
- The IBM Elastic Spectrum System must also be configured to support MTU 9000, along with the other devices en route (routers and switches). The default MTU of IBM Fusion HCI System is 9000.
Provide network configuration for IBM SSR setup
After the IBM Fusion HCI System appliance is shipped to your data center, an IBM Systems Service Representative (SSR) visits your site to set up the appliance. As part of this process, the SSR configures IBM Fusion HCI System to connect to your internal network. To complete this task, the SSR needs information about your network.
Fill out the information for IBM SSR in the following Planning Worksheet, and then share the worksheet with your IBM representative:
Network definitions
- Use LACP aggregation
- Whether or not the recommended LACP topology is being used. Note: LACP is the preferred choice. Use the no aggregation topology only if LACP is not possible, because it does not provide the redundancy that most clients require.
- Link name
- The name to assign to the aggregated link created by the recommended LAG topology.
- LAG ID
- The default LAG ID is 250, and you would only need to customize the ID if it is already in use on your network. A scenario where this might occur is if you have multiple IBM Fusion HCI System appliances on the same network.
- Spanning tree enabled
- If you are not using the recommended LAG topology, specify whether spanning tree is enabled on the network or not.
- OpenShift VLAN name
- The name to assign to the OpenShift Customer VLAN.
- OpenShift VLAN ID
- The VLAN ID to assign to the OpenShift Customer VLAN.
- Native VLAN ID
- The VLAN ID to assign to the Native VLAN if you aren’t using the default of 1.
- Storage VLAN ID
- The VLAN ID to assign to the 100G storage network if you aren’t using the default of 3201. You would only normally customize this if you are configuring a Metro Disaster Recovery configuration between two IBM Fusion HCI System clusters.
- Port type
- Specify whether you are using VLAN trunking or access ports.
- Transceiver
- Specify the type of cable that is being used to connect the HCI high-speed switches with your data center switches. See Hardware Compatibility List.
- NTP server
- The IP address of the NTP server that is used by IBM Fusion HCI System.
Aggregate links
- Links
- Link aggregation combines multiple network connections in parallel.
- LACP
- The standards-based negotiation protocol, which is known as IEEE 802.1AX Link Aggregation Control Protocol (LACP), is a way to dynamically build an EtherChannel.