Planning and prerequisites for your Bare Metal Hosted Control Plane
As a prerequisite, gather the networking information of the hosting cluster. Define and apply the DHCP and DNS information, and configure the host inventory settings.
- Choose your configuration.
- Gather networking information about the IBM Fusion HCI System hosting cluster.
- Define the new networking information.
- Apply the networking information to the DNS and DHCP in your environment.
Choose your configuration
- Internal
- In an internal Bare Metal Hosted Control Plane, the servers that are used in the cluster are within the IBM Fusion HCI System. When the servers are internal to the IBM Fusion HCI System, much of the required network information is already collected and available in the custom resources (CRs) of the IBM Fusion HCI System.
- External
- In an external Bare Metal Hosted Control Plane, the servers that are used in the cluster are external to the IBM Fusion HCI System. For external servers, you must gather the network information from your lab network administrator.
Connecting your external servers to IBM Fusion HCI System:
External servers must be connected to the IBM Fusion HCI System through a switch to allow connectivity. These servers can then be imported into Red Hat® Advanced Cluster Management for Kubernetes as part of a host inventory. For instructions to import hosts and use them in a Hosted Control Plane, see Deploying Bare Metal clusters with Fusion Data Foundation.
Networking prerequisites
- Step 1: Gather network information
-
The networking information consists of the entries that the network administrator must add to the network. You need the following information about the hosting cluster:
| Entry | Description | Example values |
| --- | --- | --- |
| IP address for the API of the management cluster | Run the following ping command to get the API server address for the IBM Fusion HCI System OpenShift Container Platform cluster: ping api.nameofcluster.domain | 1.23.45.101 |
| Host information: Server | Discovered node name in the Nodes page in the IBM Fusion HCI System user interface. This guidance is for internal servers; consult your network administrator for external servers. | |
| Host information: Macaddress | Available in the ComputeConfiguration networkInterfaces section as the bond0 macAddress. You need both macaddresses that are associated with bond0; you can find them in the slot section that includes the first macaddress. This guidance is for internal servers; consult your network administrator for external servers. | |
| Hosted Control Plane cluster name | Name of the cluster that you want to use. | |
| IPv4 address | IP address that is used for that node. It is assigned by your network team. | |
| IPv6 address | IP address in the ComputeConfiguration CR under spec. It might not be applicable for all nodes because factory nodes do not have a ComputeConfiguration CR. This guidance is for internal servers; consult your network administrator for external servers. | |

Servers external to IBM Fusion HCI System:

| Server | IPv6 | HCP Cluster name | IPv4 address | Bare Metal interface name and associated Mac | Type |
| --- | --- | --- | --- | --- | --- |
| tc11-m04-ru22.mydomain.com | fd8c:215d:178e:c0de:3a68:ddff:fe57:2e95 | extbarehcp01m04 | 1.23.45.178 | ens1f0np0 b8:3f:d2:3c:6d:60, ens1f1np1 b8:3f:d2:3c:6d:61 | sr650immru22 |

Servers internal to IBM Fusion HCI System:

| Server | Macaddress example | HCP Cluster name | IPv4 address | IPv6 address | Type/Label |
| --- | --- | --- | --- | --- | --- |
| RU10 | 1070FDB8DF72 | barehcp02m04 | 1.23.45.163 | fd8c:215d:178e:c0de:a94:efff:fefd:e7e1 | Compute |
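For internal servers, most of these values can be read from the ComputeConfiguration CR. The following is a minimal sketch, assuming the CR is retrievable with oc; the exact resource name and namespace depend on your IBM Fusion HCI System installation:

# Inspect networkInterfaces for the bond0 macaddresses and spec for the IPv6 address
oc get computeconfiguration <node-name> -o yaml

# Confirm the API address of the management cluster
ping api.nameofcluster.domain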
DHCP update for the cluster:
To ensure that each server has an IP address that the Hosted Control Plane cluster can contact, make an entry for each host that links its macaddress to an IP address that is reachable from the IBM Fusion HCI System cluster. The macaddress is printed on the back of each physical server.
Example:

| Server | Macaddress | IP address |
| --- | --- | --- |
| compute-1-ru8.rackm04.mydomain.com | 10:70:FD:B8:DF:72 | 1.23.45.168 |
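If your lab uses ISC dhcpd, the entry might look like the following sketch; the host name, macaddress, and IP address are the example values from the table, not required values:

# Reserve a fixed address for the compute node (hypothetical dhcpd.conf entry)
host compute-1-ru8 {
  hardware ethernet 10:70:fd:b8:df:72;
  fixed-address 1.23.45.168;
}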
DNS update for the cluster:
For each Hosted Control Plane cluster, add a DNS entry that links a lab-provided server IP address to *.apps.NameofCluster.FQDN. This IP address is used to create a load balancer on the Hosted Control Plane cluster to allow ingress to that cluster.

| Load balancer IP | Entry for DNS | FQDN Host | Ingress Type | HCP Clustername | Domain name |
| --- | --- | --- | --- | --- | --- |
| 1.23.45.91 | *.apps.barehcp01m04.mydomain.com | tc11-m04-barehcp01.mydomain.com | loadbalancer | barehcp01m04 | mydomain.com |
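If your lab DNS runs BIND, the wildcard entry might look like the following zone-file sketch for mydomain.com, using the example values from the table:

; Wildcard record for Hosted Control Plane ingress (hypothetical BIND example)
*.apps.barehcp01m04    IN    A    1.23.45.91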
DNS updates for api and api-int:
The api and api-int entries for each Hosted Control Plane cluster need an alias in the DNS table that points to the IBM Fusion HCI System cluster's api address. To validate the IP address, use the ping command. See Ping IP address.

| HCI Server IP | Entry for DNS alias | FQDN |
| --- | --- | --- |
| 1.23.45.101 | api.rackm04.barehcp01m04.mydomain.com | api-rackm04.mydomain.com |
| 1.23.45.101 | api-int.rackm04.barehcp01m04.mydomain.com | api-rackm04.mydomain.com |
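Continuing the hypothetical BIND sketch, the api and api-int aliases can be expressed as CNAME records that point to the hub cluster's api name:

; Aliases for the Hosted Control Plane api and api-int (hypothetical BIND example)
api.rackm04.barehcp01m04        IN    CNAME    api-rackm04.mydomain.com.
api-int.rackm04.barehcp01m04    IN    CNAME    api-rackm04.mydomain.com.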
- Step 2: Apply the networking information to the DNS and DHCP in your environment
-
After you gather the hosting cluster information, apply the following in your environment (lab network environment):
- DHCP update
- DNS update for host
- DNS updates for api and api-int
Red Hat Advanced Cluster Management for Kubernetes (ACM)
- Step 1: Configure Host Inventory Settings
-
After the initial setup, configure the host inventory settings. For more information about the host inventory settings, see Red Hat documentation.
- Step 2: Create your infrastructure environment
-
The next step is to create an infrastructure environment in Red Hat Advanced Cluster Management for Kubernetes (ACM) or by using the Hosted Control Plane CLI; a CLI sketch follows the field list below. This infrastructure environment contains the list of Bare Metal hosts that you can select to create a Bare Metal Hosted Control Plane. Important: Create one infrastructure environment per Hosted Control Plane cluster. Each Hosted Control Plane can add hosts from only a single infrastructure environment.
- In the ACM user interface, go to the infrastructure environments page.
- Select Create infrastructure environment.
- On the Create infrastructure environment page, enter the following details:
- Name
- Name of the infrastructure. No specific name is required.
- Network type
- Select Static IP, Bridges, and Bonds.
- Location
- Label that is applied to all hosts.
- Labels
- Optional value.
- Pull Secret
- The pull secret must include entries for the following registries:
- cloud.openshift.com
- cp.icr.io
- quay.io
- registry.connect.redhat.com
- registry.redhat.io
- SSH public key
- The SSH public key that is used by the IBM Fusion HCI System.
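If you use the Hosted Control Plane CLI route instead of the ACM form, the same settings can be expressed as an InfraEnv custom resource. The following is a minimal sketch; the name, namespace, secret name, and key value are placeholders, and the fields follow the agent-install.openshift.io/v1beta1 schema:

apiVersion: agent-install.openshift.io/v1beta1
kind: InfraEnv
metadata:
  name: fusion-bm            # hypothetical infrastructure environment name
  namespace: fusion-bm       # namespace that holds the pull secret
spec:
  pullSecretRef:
    name: pull-secret        # secret that contains the registry entries listed above
  sshAuthorizedKey: "ssh-rsa AAAA..."   # SSH public key used by the IBM Fusion HCI System
  nmStateConfigLabelSelector:
    matchLabels:
      infraenvs.agent-install.openshift.io: fusion-bm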
- Step 3: Import the hosts by using ACM
- With the infrastructure environment created, the next step is to boot each server with a discovery ISO to import it into the host inventory.
- Step 3.1: Create the NMStateConfig for each host
- The NMStateConfig allows the server to be recognized by the IBM Fusion HCI System cluster. For additional information, see Red Hat documentation.
Note: For internal servers, what goes into the NMStateConfig is available through the CRs. For external servers, consult your network administrator.
Example:
apiVersion: agent-install.openshift.io/v1beta1
kind: NMStateConfig
metadata:
  name: fusion-bm-ru9
  namespace: fusion-bm
  labels:
    infraenvs.agent-install.openshift.io: fusion-bm
spec:
  interfaces:
    - name: ens3f0np0
      macAddress: 08:c0:eb:d4:14:fa
      state: up
      type: ethernet
      ipv4:
        enabled: false
      ipv6:
        enabled: false
    - name: ens3f1np1
      macAddress: 08:c0:eb:d4:14:fb
      state: up
      type: ethernet
      ipv4:
        enabled: false
      ipv6:
        enabled: false
  config:
    interfaces:
      - name: bond0
        type: bond
        mtu: 9000
        state: up
        mac-address: 08:c0:eb:d4:14:fa
        ipv4:
          address:
            - ip: 17.0.1.42
              prefix-length: 24
          enabled: true
        ipv6:
          enabled: false
        link-aggregation:
          mode: 802.3ad
          options:
            lacp_rate: "1"
            miimon: "140"
            xmit_hash_policy: "1"
          ports:
            - ens3f0np0
            - ens3f1np1
    dns-resolver:
      config:
        server:
          - 17.0.1.250
    routes:
      config:
        - destination: 0.0.0.0/0
          next-hop-address: 17.0.1.1
          next-hop-interface: bond0
The NMStateConfig is dependent on the network environment and must be created in consultation with your network administrators. Fill in the fields in the NMStateConfig.yaml based on the following guidance. These fields within the NMStateConfig are unique per host:
- name
- Unique name for each node.
- namespace
- The infraenv namespace.
- labels
- infraenvs.agent-install.openshift.io: infraNamespace. The infraNamespace must match the name of the infrastructure environment into which the host is imported.
- ports
- The correct ports can be found in the ComputeConfiguration CR for the particular node that is to be imported into ACM. Within the ComputeConfiguration, there is a networkInterfaces section. In this section, the entry whose interfaceName is bond0 lists the names of the two interfaces, interfaceLeg1 and interfaceLeg2. The values under interfaceLeg1 and interfaceLeg2 are the ports. This guidance is for internal servers; consult your network administrator for external servers.
- ipaddress
- The ipv4 address for that node or server.
- interfaces > port: macaddress:
- You can find these values in the ComputeConfiguration CR; a lookup sketch follows this list. Each port is an entry from the ports, and the macaddress belongs to that port. Find the bond0 port macaddress in the ComputeConfiguration for the node, then map that macaddress to the networkCards: slot-# entries. There is one matching macaddress and one new one; use both in the NMStateConfig.yaml. Important: Change any letters to lowercase.
Example: For ens1f0np1, the macaddress can be found in the ComputeConfiguration for the node under networkCards: slot-1: macaddress.
This guidance is for internal servers; consult your network administrator for external servers.
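As a lookup sketch for an internal node, assuming the ComputeConfiguration resource is retrievable with oc and uses the field names described above:

# Show the bond0 entry in networkInterfaces, including its macaddress and interface legs
oc get computeconfiguration <node-name> -o yaml | grep -A 6 'bond0'

# Show the networkCards slot entries to map the second macaddress
oc get computeconfiguration <node-name> -o yaml | grep -A 3 'slot-'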
The NMState information comes from the switch; for unmanaged servers, you must provide the values yourself.
After the NMStateConfig.yaml is created for a server, add it to the IBM Fusion HCI System cluster with the following commands; a verification check follows the steps:
- Go to the infraenvnamespace namespace.
oc project infraenvnamespace
- Apply the YAML.
oc apply -f nmstate.yaml
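To confirm that the resource was created, you can list the NMStateConfig objects in that namespace; this check is a sketch, not part of the formal procedure:

oc get nmstateconfig -n infraenvnamespace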
- Step 3.2: Download the discovery.iso
-
Use the wget command to download the discovery.iso to the service node or another jumpbox. Note: Keep this ISO secure because it can be used to add a server to the cluster and contains sensitive information.
To get the retrieval command, do the following steps; an example wget invocation follows the steps:
- Log in to the hub IBM Fusion HCI System cluster.
- In the ACM section, go to the infrastructure environments page.
- From the list of infrastructure environments, select the one into which you intend to import the host.
- Select the wget command. The console provides either a URL or a wget command; use it to transfer the ISO to your jumpbox.
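A sketch of the transfer, where <discovery-iso-url> stands for the URL that you copied from the console (hypothetical placeholder):

# Download the discovery ISO to the jumpbox; keep the file secure
wget -O discovery.iso '<discovery-iso-url>'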
- Step 3.3: Boot the host with the discovery.iso
-
This task requires you to log in to the IMM, mount the ISO image, and reboot the server. Note: For internal servers, go through the steps in this section. For external servers, follow the Red Hat documentation.
Mount the ISO on the server through the IMM:
- Get the IPv6 address from the ComputeConfiguration CR for that server.
- Log in to the IMM. Use the user name and password from the secret that is named by secretName in the ComputeConfiguration CR (the defaultUserName and defaultUserPasswrd fields).
- Select the Remote Console tab.
- In the Remote Console tab, select Launch remote console and Media.
- In the Media page, go to the Mount media from Client Browser section.
- Select the discovery.iso image and mount it.
- Select one virtual media to boot on the next restart.
- Select the mounted discovery.iso and restart immediately.
- Close the windows and monitor the remote console to confirm that the server reboots.
- Step 4: Accept the host into the ACM Inventory
- After the server restarts with the discovery.iso, accept the host into the host inventory. A CLI alternative is sketched after these steps.
- In the ACM user interface, go to the infrastructure environments page.
- From the table, select the infrastructure environment where the host was imported. Note: In the hosts table, the option to approve the host appears after the server boots completely, which can take 10 - 15 minutes. Verify the server before you accept it. If the server name is not recognizable, look at the details to confirm. You can change the host name, but ensure that it is unique.
- Accept the host into the inventory.
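If you prefer the CLI, a hedged alternative is to watch for the host's Agent resource and approve it by patching spec.approved; the namespace is the infrastructure environment's namespace, and the agent name is a generated identifier:

# List discovered agents and their approval state
oc get agents -n infraenvnamespace

# Approve a specific agent
oc -n infraenvnamespace patch agent <agent-name> --type merge -p '{"spec":{"approved":true}}'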
- Step 5: Labeling hosts
- After you accept the hosts, add labels to them through the Agent CR. Labels can be useful to identify hosts for a specific cluster or node pool. To add a label, do the following steps; a CLI sketch follows these steps:
- Log into the OpenShift console.
- Go to the Search page.
- Search for the Agent resource of the agent-install.openshift.io type.
- In the instances, find the agent whose label you want to modify.
- Edit the labels of the agent to add the new label.
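As an alternative sketch using the CLI, where the label key and value (nodepool=workers-a) are hypothetical examples:

# Add a label to the Agent resource for later selection by a cluster or node pool
oc -n infraenvnamespace label agent <agent-name> nodepool=workers-a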