Planning and prerequisites for your Bare Metal Hosted Control Plane
As a prerequisite, gather networking information about the hosting cluster. Define and apply DHCP and DNS information, and configure host inventory settings.
- Choose your configuration.
- Gather networking information about the IBM Fusion HCI hosting cluster.
- Define the new networking information.
- Apply the networking information to the DNS and DHCP in your environment.
Choose your configuration
- Internal
- In an internal Bare Metal Hosted Control Plane, the servers that are used in the cluster are within the IBM Fusion HCI. When the servers are internal to the IBM Fusion HCI, most of the required network information is already collected and present in the Custom Resources of the IBM Fusion HCI.
- External
- In an external Bare Metal Hosted Control Plane, the servers used in the cluster are external to the IBM Fusion HCI. For external servers, you must gather the network information from your lab network administrator.
Connecting your external servers to IBM Fusion HCI:
External servers must be connected to the IBM Fusion HCI through a switch to allow connectivity. These servers can then be imported into Multi Cluster Engine (MCE) as part of a Host Inventory. For instructions on importing these servers and using them in a Hosted Control Plane, see Deploying Bare Metal clusters with Fusion Data Foundation.
Networking prerequisites
- Step 1: Gather network information
-
The networking information consists of the entries that the network administrator needs to add to the network. You need the following information about the host cluster. Keep the following data ready to use when you create a cluster and an nmstate config for each node.

Cluster-wide information:

Note: The cluster name, subdomain, and domain are all for one hosted cluster. Create them for each of your hosted clusters.

| Details | Description | Example values |
|---|---|---|
| Base Domain | Cluster base domain that you want to use for the Hosted Control Plane cluster to be created. | mydomain.com |
| Cluster name | Cluster name that you want to use for the HCP cluster to be created. | fusion-bm |
| Sub Domain | Always `<cluster name>.<base domain>`. | fusion-bm.mydomain.com |
Individual nodes in the cluster:

| Entry | Description | Example values |
|---|---|---|
| BMC address | The value of the field named ipv6ULA or ipv4 in the kickstart CM. You can use either ipv6ULA or ipv4. | ipv6ULA: fd9c:316d:179e:c0de:a94:efff:fefe:2ec1; ipv4: 170.254.2.9 |
| Bare metal primary interface IP | Unused IP that you plan to use for the Bare Metal host in the Hosted Control Plane cluster. | 172.17.x.y |
| Macaddress | Available in the ComputeConfiguration networkInterfaces, the bond0 macAddress. You need both the Bare Metal primary and secondary interface MAC addresses. You can find the macaddress in the slot section that includes the first macaddress. This guidance is for internal HCI managed nodes. Consult your network administrator for external bare metal nodes. | |
| Gateway IP | If the Hosted Control Plane nodes use the same gateway server as the base HCI cluster, run the following command on one of the HCI nodes; otherwise, get it from the network administrator: `ip r | grep default | grep br-ex | awk '{print $3}'`. It is needed only when you use static IP assignment for nodes in the Hosted Control Plane cluster. For DHCP managed nodes, this field is not needed. | 172.17.x.1 |
| MTU | Can be 9000 or 1500, based on the base HCI rack Bare Metal MTU used during installation. Find this value in the field mtuCount in the secret userconfig-secret in the namespace ibm-spectrum-fusion-ns. | 9000 |
| Bare metal Primary interface’s first port’s Mac address | The value of the primary macaddress in the kickstart config map for a given node where the interfaceType is baremetal. | 08:c0:eb:d4:16:4e |
| Bare metal Primary interface’s second port’s Mac address | The value of the secondary macaddress in the kickstart config map for a given node where the interfaceType is baremetal. | 08:c0:eb:d4:16:4f |
| Bare metal Primary interface’s Mac address | The value of the field networkInterfaces -> macAddress in the kickstart config map for a given node where the interfaceType is baremetal. | 08:c0:eb:d4:16:4e |
| Bare metal Primary interface’s (bond0) first port/interface name | The value of the field networkInterfaces -> interfaceLeg1 in the kickstart config map for a given node, where the interfaceType is baremetal. | ens3f0np0 |
| Bare metal Primary interface’s (bond0) second port/interface name | The value of the field networkInterfaces -> interfaceLeg2 in the kickstart config map for a given node, where the interfaceType is baremetal. | |

Example servers external to IBM Fusion HCI:

| Server | BMC address of external Bare Metal node | HCP Cluster name | BMC address | Switch bond0 info | mac Address | Type |
|---|---|---|---|---|---|---|
| sr650immru22 | tc11-m04-ru22.mydomain.com / fd8c:215d:178e:c0de:3a68:ddff:fe57:2e95 | extbarehcp01m04 | 1.23.45.178 | ens1f0np0 b8:3f:d2:3c:6d:60; ens1f1np1 b8:3f:d2:3c:6d:61 | b8:3f:d2:3c:6d:60 | External |

Example servers internal to IBM Fusion HCI:

| Server | macaddress | Example HCP Cluster name | BMC address (IPv4) | BMC address (IPv6) | Type/Label |
|---|---|---|---|---|---|
| compute-1-ru8 | 1070FDB8DF72 | fusion-bm | 1.23.45.163 | fd8c:215d:178e:c0de:a94:efff:fefd:e7e1 | Compute |

DHCP update for the cluster:
Note: This section is needed only when you plan to use DHCP to manage IP and hostname assignment for nodes in the Hosted Control Plane. If you use static IP assignment, skip this section. You need DHCP only if you want to manage your server IPs by using DHCP for hosted clusters.

To ensure that the individual servers have an IP address that the Hosted Control Plane cluster can contact, make an entry for each host that links its macaddress to an IP address that the IBM Fusion HCI cluster can reach. The macaddress can be found on the back of each physical server.

Example:

| Server | macaddress | IP address |
|---|---|---|
| compute-1-ru8.fusion-bm.mydomain.com | 10:70:FD:B8:DF:72 | 172.17.x.y |

DNS update for the cluster:
Note: Add a DNS entry for every host for each user who wants to create a Bare Metal Hosted Control Plane cluster. The DNS update table contains entries for each Hosted Control Plane cluster that link a lab-provided server IP address to `*.apps.<cluster name>.<FQDN>`. This IP address is used to create a load balancer on the Hosted Control Plane cluster to allow ingress to that cluster.

| Load balancer IP | Entry for DNS | FQDN Host | Ingress Type | HCP Clustername | Domain name |
|---|---|---|---|---|---|
| 1.23.45.910 | *.apps.barehcp01m04.mydomain.com | compute-1-ru8.fusion-bm.mydomain.com | loadbalancer | fusion-bm | mydomain.com |

| Server name | Entry for DNS | IP address |
|---|---|---|
| compute-1-ru8 | compute-1-ru8.fusion-bm.mydomain.com | 172.17.x.y |
- Step 2: Apply the networking information to the DNS and DHCP in your environment
-
After you gather the hosting cluster information, apply the following in your environment (lab network environment), as sketched after this list:
- DHCP update
- DNS update for host
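The exact syntax depends on the DHCP and DNS servers in your lab, so the following is only a hedged sketch of what the entries could look like, assuming an ISC dhcpd-style DHCP server and a BIND-style zone file, reusing the example values from Step 1; `<load balancer IP>` is a placeholder:

```
# dhcpd.conf (illustrative): reserve an IP for the host by MAC address
host compute-1-ru8 {
  hardware ethernet 10:70:FD:B8:DF:72;
  fixed-address 172.17.x.y;
  option host-name "compute-1-ru8.fusion-bm.mydomain.com";
}
```

```
; mydomain.com zone file (illustrative): host record plus wildcard ingress record
compute-1-ru8.fusion-bm    IN  A  172.17.x.y
*.apps.fusion-bm           IN  A  <load balancer IP>
```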
Multi Cluster Engine (MCE)
- Step 1: Configure Host Inventory Settings
-
After the initial setup, configure the host inventory settings. For more information about the host inventory settings, see Red Hat documentation.
- Step 2: Create a project in the host IBM Fusion OpenShift Container Platform cluster for the host inventory of the Hosted Control Plane cluster to be created
- From the OpenShift Container Platform console of IBM Fusion, create a project designated for the host inventory, along with other necessary resources to support the Hosted Control Plane cluster. You can use any name. In this example, the cluster name is used for easy identification, as in the sketch below.
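If you prefer the CLI over the console, a minimal sketch follows; fusion-bm is just the example cluster name used on this page:

```sh
# Create the project that holds the host inventory resources
# for the Hosted Control Plane cluster to be created.
oc new-project fusion-bm
```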
- Step 3: Create your infrastructure environment
-
The next step is to create an infrastructure environment in MCE or by using the Hosted Control Plane CLI. This infrastructure environment contains the list of Bare Metal hosts that you can select to create a Bare Metal Hosted Control Plane. Note: Create one infrastructure environment per Hosted Control Plane cluster. Each Hosted Control Plane can add hosts from only a single infrastructure environment.
- In the MCE user interface, go to .
- Select Create infrastructure environment.
- On the Create infrastructure environment page, enter the following details:
- Name
- Name of the infrastructure. No specific name is required.
- Network type
- Select Static IP, Bridges, and Bonds.
- Location
- A label that is applied to all hosts.
- Labels
- Optional value.
- Pull Secret
- The pull secret must include the following:
- cloud.openshift.com
- cp.icr.io
- quay.io
- registry.connect.redhat.com
- registry.redhat.io
- SSH public key
- Generate a public-private SSH key pair on the bastion host and specify the public key in the SSH field. It is optional but recommended; see the example after this list.
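For example, a minimal key pair generation on the bastion host; the file name and comment are arbitrary choices, not required values:

```sh
# Generate a key pair; paste the contents of the .pub file into the
# SSH public key field of the infrastructure environment.
ssh-keygen -t ed25519 -f ~/.ssh/hcp-bm-key -C "hcp-bm"
cat ~/.ssh/hcp-bm-key.pub
```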
- Step 4: Import the hosts by using MCE
- With the infrastructure environment created, the next steps are to boot individual servers with an ISO to import them into the host inventory.
- Step 4.1: Create the NMStateConfig for each host
- The NMStateConfig allows the server to be recognized by the IBM Fusion HCI cluster. For additional information, see Red Hat documentation. The NMStateConfig varies based on whether static IP assignment or DHCP IP assignment is used. For a fresh IBM Fusion HCI 2.10 installation or an upgrade from 2.9 to 2.10 with the network migration script applied, use the new NMStateConfig. If IBM Fusion HCI is upgraded from version 2.9 to 2.10 without applying the network migration script, retain the old NMStateConfig configuration.
- Static IP
- New NMStateConfig for static IP assignment:

```yaml
apiVersion: agent-install.openshift.io/v1beta1
kind: NMStateConfig
metadata:
  labels:
    infraenvs.agent-install.openshift.io: <same as infra name>
  name: <any unique name for host within infra, for example fusion-bm-ru8>
  namespace: <same namespace as used for infra>
spec:
  config:
    dns-resolver:
      config:
        server:
          - <DNS server IP>
    interfaces:
      - ipv6:
          enabled: false
        link-aggregation:
          mode: 802.3ad
          options:
            lacp_rate: '1'
            miimon: '140'
            xmit_hash_policy: '1'
          ports:
            - <Bare metal Primary interface’s first port/interface name>
            - <Bare metal Primary interface’s second port/interface name>
        mac-address: <Bare metal Primary interface’s first port’s Mac address>
        mtu: <MTU value>
        name: bond0
        state: up
        type: bond
      - ipv4:
          address:
            - ip: <Bare metal primary interface IP>
              prefix-length: 24
          dhcp: false
          enabled: true
        ipv6:
          enabled: false
        name: <bond name followed by VLAN ID of type OpenShift Customer VLAN present on the cluster, for example bond0.VLAN_ID>
        state: up
        type: vlan
        vlan:
          base-iface: bond0
          id: <VLAN ID of type OpenShift Customer VLAN present on the cluster>
    routes:
      config:
        - destination: 0.0.0.0/0
          next-hop-address: <Gateway IP>
          next-hop-interface: <bond name followed by VLAN ID of type OpenShift Customer VLAN present on the cluster, for example bond0.VLAN_ID>
  interfaces:
    - macAddress: <Bare metal Primary interface’s first port’s Mac address>
      name: <Bare metal Primary interface’s first port/interface name>
    - macAddress: <Bare metal Primary interface’s second port’s Mac address>
      name: <Bare metal Primary interface’s second port/interface name>
```
Old NMStateConfig for static IP assignment:

```yaml
apiVersion: agent-install.openshift.io/v1beta1
kind: NMStateConfig
metadata:
  labels:
    infraenvs.agent-install.openshift.io: <same as infra name>
  name: <any unique name for host within infra, for example fusion-bm-ru8>
  namespace: <same namespace as used for infra>
spec:
  config:
    dns-resolver:
      config:
        server:
          - <DNS server IP>
    interfaces:
      - ipv4:
          address:
            - ip: <Bare metal primary interface IP>
              prefix-length: 24
          dhcp: false
          enabled: true
        ipv6:
          enabled: false
        link-aggregation:
          mode: 802.3ad
          options:
            lacp_rate: '1'
            miimon: '140'
            xmit_hash_policy: '1'
          ports:
            - <Bare metal Primary interface’s first port/interface name>
            - <Bare metal Primary interface’s second port/interface name>
        mac-address: <Bare metal Primary interface’s first port’s Mac address>
        mtu: <MTU value>
        name: bond0
        state: up
        type: bond
    routes:
      config:
        - destination: 0.0.0.0/0
          next-hop-address: <Gateway IP>
          next-hop-interface: bond0
  interfaces:
    - macAddress: <Bare metal Primary interface’s first port’s Mac address>
      name: <Bare metal Primary interface’s first port/interface name>
    - macAddress: <Bare metal Primary interface’s second port’s Mac address>
      name: <Bare metal Primary interface’s second port/interface name>
```
- DHCP
- New NMStateConfig for DHCP IP assignment:

```yaml
apiVersion: agent-install.openshift.io/v1beta1
kind: NMStateConfig
metadata:
  labels:
    infraenvs.agent-install.openshift.io: <same as infra name>
  name: <any unique name for host within infra, for example fusion-bm-ru8>
  namespace: <same namespace as used for infra>
spec:
  config:
    interfaces:
      - ipv6:
          enabled: false
        link-aggregation:
          mode: 802.3ad
          options:
            lacp_rate: "1"
            miimon: "140"
            xmit_hash_policy: "1"
          ports:
            - <Bare metal Primary interface's first port/interface name>
            - <Bare metal Primary interface's second port/interface name>
        mac-address: <Bare metal Primary interface's first port's Mac address>
        mtu: <MTU value>
        name: bond0
        state: up
        type: bond
      - ipv4:
          dhcp: true
          enabled: true
        ipv6:
          enabled: false
        name: <bond name followed by VLAN ID of type OpenShift Customer VLAN present on the cluster, for example bond0.VLAN_ID>
        state: up
        type: vlan
        vlan:
          base-iface: bond0
          id: <VLAN ID of type OpenShift Customer VLAN present on the cluster>
  interfaces:
    - macAddress: <Bare metal Primary interface's first port's Mac address>
      name: <Bare metal Primary interface's first port/interface name>
    - macAddress: <Bare metal Primary interface's second port's Mac address>
      name: <Bare metal Primary interface's second port/interface name>
```
Old NMStateConfig for DHCP IP assignment:

```yaml
apiVersion: agent-install.openshift.io/v1beta1
kind: NMStateConfig
metadata:
  labels:
    infraenvs.agent-install.openshift.io: <same as infra name>
  name: <any unique name for host within infra, for example fusion-bm-ru8>
  namespace: <same namespace as used for infra>
spec:
  config:
    interfaces:
      - ipv4:
          dhcp: true
          enabled: true
        ipv6:
          enabled: false
        link-aggregation:
          mode: 802.3ad
          options:
            lacp_rate: "1"
            miimon: "140"
            xmit_hash_policy: "1"
          ports:
            - <Bare metal Primary interface's first port/interface name>
            - <Bare metal Primary interface's second port/interface name>
        mac-address: <Bare metal Primary interface's first port's Mac address>
        mtu: <MTU value>
        name: bond0
        state: up
        type: bond
  interfaces:
    - macAddress: <Bare metal Primary interface's first port's Mac address>
      name: <Bare metal Primary interface's first port/interface name>
    - macAddress: <Bare metal Primary interface's second port's Mac address>
      name: <Bare metal Primary interface's second port/interface name>
```
The NMStateConfig depends on the network environment and must be created in consultation with your network administrators. Enter the following fields in the NMStateConfig.yaml.

Note: These fields within the NMStateConfig are unique per host.

| Field | Description |
|---|---|
| name | Unique name for each node. |
| namespace | The infraenv namespace. |
| labels | `infraenvs.agent-install.openshift.io: infraNamespace`. The infraNamespace must match the name of the infrastructure environment into which the host is imported. |
| ports | The correct ports can be found in the ComputeConfiguration CR for the particular node to be imported to MCE. Within the ComputeConfiguration, there is a section named NetworkInterfaces. In this section, the entry whose interfaceName is "baremetal" names the two interfaces, interfaceLeg1 and interfaceLeg2. The values under interfaceLeg1 and interfaceLeg2 are the ports. This guidance is for internal nodes. Consult your network administrator for external nodes. |
| ipaddress | The ipv4 address for that node or server. |
| interfaces > port: macaddress: | You can find these values in the ComputeConfiguration CR. Each port is an entry from the ports, and the macaddress is for that port. The macaddress can be found in the ComputeConfiguration for the node by looking for the networkType port: baremetal. Map that macaddress to the networkCards: -slot-#: entries. There is one matching macaddress and one new one; use those in the NMStateConfig.yaml. Important: Change any letters to lowercase. Example: for ens1f0np1: macaddress:, the macaddress can be found in the ComputeConfiguration for the node under networkCards: slot-1: macaddress:. This guidance is for internal nodes. Consult your network administrator for external nodes. |
NMState information is sourced from the switch. For unmanaged servers, you must manually provide the required values.
After the NMStateConfig.yaml is created for a server, it must be added to the IBM Fusion HCI cluster by using the following commands:
- Go to the infraenvnamespace namespace: `oc project infraenvnamespace`
- Apply the YAML: `oc apply -f nmstate.yaml`
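To confirm that the resource was created, you can list the NMStateConfig objects in the infraenv namespace; a minimal check:

```sh
# Verify that the NMStateConfig exists in the infraenv namespace
oc get nmstateconfig -n infraenvnamespace
```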
- Step 4.2: Download the discovery.iso
-
Use the wget command to download the discovery.iso to the service node or another jumpbox.
Note: Keep this ISO secure because it can be used to add a server to the cluster and it contains sensitive information.
To get the retrieval command, do the following:
- Log into the hub IBM Fusion HCI cluster.
- In the MCE section, go to .
- In the infrastructure environments list, select the infrastructure environment that you intend to import the host into.
- Select the wget command. The page provides either a URL or a wget command; use it to transfer the ISO to your jumpbox, as sketched below.
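MCE generates the actual download command for your infrastructure environment; the following is only a shape sketch with a placeholder URL, not the real command:

```sh
# Download the discovery ISO to the service node or jumpbox.
# Replace <generated URL> with the URL or wget command shown in MCE.
wget -O discovery.iso '<generated URL>'
```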
- Step 4.3: Boot the host with the discovery.iso
-
This task requires you to log in to the IMM, mount the ISO image, and reboot the server.
Note: For internal servers, go through the steps in this section. For external servers, go through the Red Hat documentation.
Mount the ISO onto the server through the IMM:
- Get the IPv6 or IPv4 from ComputeConfiguration for that server.
- Log in to the IMM.
- Use the username and password from the secretName secret in the ComputeConfiguration (the defaultUserName and defaultUserPasswrd fields); see the sketch after this list.
fields. - Select Remote Console tab.
- In the Remote Console tab, select Launch remote console and Media.
- In the Media page, go to the Mount media from Client Browser section.
- Select the discovery.iso image and mount it.
- Select one virtual media to boot on the next restart.
- Select the mounted discovery.iso and restart immediately.
- Close the windows and monitor the remote console to confirm that the server reboots.
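If you prefer to do the credential lookup from this list on the CLI instead of the IMM console, the following is a hypothetical sketch; the CR name, namespace, and JSON paths are assumptions that you must verify against your cluster:

```sh
# Hypothetical lookup of the IMM credentials referenced by a
# ComputeConfiguration; resource names and field paths are assumptions.
SECRET=$(oc get computeconfiguration compute-1-ru8 -n ibm-spectrum-fusion-ns \
  -o jsonpath='{.spec.secretName}')
oc get secret "$SECRET" -n ibm-spectrum-fusion-ns \
  -o jsonpath='{.data.defaultUserName}' | base64 -d; echo
oc get secret "$SECRET" -n ibm-spectrum-fusion-ns \
  -o jsonpath='{.data.defaultUserPasswrd}' | base64 -d; echo
```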
- Step 5: Accept the host into the MCE Inventory
- After the server restarts with the discovery.iso, accept the host into the host inventory.
- In the MCE user interface, go to .
- From the table, select the infrastructure environment where the host was imported. Note: In the hosts table, you can approve the host after the server boots up completely. It can take 10-15 minutes. Verify the server before you accept it. If the server name is not recognizable, look at the details to confirm. You can change the host name, but ensure that it is unique.
- Accept the host into the inventory, as in the CLI sketch below.
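Hosts can also be approved from the CLI by setting the approved field on the corresponding Agent resource; a sketch with placeholder names:

```sh
# Approve a discovered host by patching its Agent resource
oc -n infraenvnamespace patch agent <agent name> \
  --type merge -p '{"spec":{"approved":true}}'
```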
- Step 6: Labeling hosts
- After you accept the hosts, add labels to them through the Agent CR. Labels can be useful to identify a specific cluster or nodepool. To add a label, do the following steps:
- Log into the OpenShift console.
- Go to .
- Search for Agent of type agent-install.openshift.io.
- In the instances, find the agent whose label you want to modify.
- Edit the labels of the agent to add a new label, as in the CLI sketch below.
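Equivalent to editing the Agent in the console, you can attach the label with oc; the label key and value here are placeholders:

```sh
# List the agents, then add a label that a cluster or nodepool selector can match
oc get agents -A
oc -n infraenvnamespace label agent <agent name> node-pool=workers
```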