Use this procedure to install IBM Fusion HCI stage 2.
Before you begin
- IBM Fusion HCI requires Red Hat® OpenShift® Container Platform. If you purchased OpenShift subscriptions from IBM, see Activating Red Hat OpenShift Container Platform subscriptions purchased from IBM. For the supported versions of Red Hat OpenShift Container Platform, see Support matrix. For other compatible OpenShift distributions, see Red Hat OpenShift distributions.
- Before you install IBM Fusion HCI, activate the IBM Fusion HCI software for download. For the steps, see Activating IBM Fusion HCI Software to be downloaded.
- Review the installation prerequisites and ensure that they are all met before you proceed with your installation. See Installation prerequisites.
- For the installation sequence in a disaster recovery setup, see Installing IBM Fusion HCI in a disaster recovery setup.
- If you want to enable remote support so that the IBM Support team can assist with the installation, follow these steps:
- In the title bar, click .
- Enter the contact information and optional proxy server details, and enable the remote support option. After you enable remote support, you can start or stop remote sessions. For the procedure, see Enabling remote support.
If you have a proxy connection, you can also use this remote support feature for offline installations.
- Rack discovery fails for a high-availability multi-rack (HA MR) configuration with static IP addresses. For the workaround steps, see Workaround for HA MR static IP.
About this task
IBM Fusion HCI comes with bootstrapping software that is installed at the factory. The IBM support representative completes the initial verification and physically connects the system to network and power. Then, they conduct the network setup of the appliance, which connects the appliance to the data center network. This procedure configures all the default nodes (three controller and three compute nodes). If you ordered more nodes, they are installed as well.
For a high-availability multi-rack configuration, the IBM support representative completes the initial verification and physically connects the three racks to network and power. Then, they conduct the network setup of the first two racks, which are called auxiliary racks. Network setup of the third rack, also known as the last rack, is done after the first two racks. The network setup validates the hardware and wiring of each rack and connects the rack to the data center network. This procedure configures all the default nodes. During network setup of the last rack, the software automatically collects network configuration details from the first two racks and shows a consolidated appliance view. After the network setup is completed for all three racks, the IBM SSR provides the stage 2 URL so that you can continue with the next phase of the installation, which is described in this procedure.
Procedure
- Go to the URL provided by your IBM Systems Support Representative (SSR).
The format of the URL is https://<host IP address>:3000/isfsetup.
Note: If you close your browser at any time before stage 2 completes, open the URL again to return to its last state.
Important: Do not add, change, or remove any files at any time.
- In Login, enter the Username and Password and click Log in.
The default credentials are Username service-node-client and Password passw0rd.
- In the License agreement window, select I accept the license agreement.
- In Create login, enter a new Username and Password for the service node.
In the Create login page, you are prompted to create a different username and password. You must create new credentials because the default credentials are deleted as soon as the stage 2 installation begins. The password must be at least 8 characters long and include uppercase letters, lowercase letters, numbers, and special characters. After you change the password, use the new username for subsequent logins. If you enter an incorrect password for the same username more than six times within one minute, access to that username is locked for the next 10 minutes.
- Click Next.
Configure network:
The Network precheck page runs an automatic network check against all nodes in the appliance. It checks whether each node has an assigned IP address and hostname. In a high-availability multi-rack configuration, you can see the nodes of all three racks.
- Check whether all your nodes are in the Connected state.
In a high-availability cluster, the page displays all the nodes in all three racks. If all your nodes are in the Connected state, go to step 7. Any node that does not pass the network check shows a Disconnected status.
- If one or more nodes are in the Disconnected state, complete the following steps:
- Work with your network team and confirm that DHCP or static IP addressing and DNS are configured for all nodes in the appliance. A quick DNS check is shown after this list.
- After you make all the changes, click Retry pre-check connection to initiate a new network check.
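For example, assuming you know a node's hostname and your DNS server address (both are placeholders here), you can confirm forward and reverse DNS resolution from a workstation:
# Forward lookup of the node hostname against your DNS server
nslookup <node hostname> <DNS server IP>
# Reverse lookup of the node IP address
nslookup <node IP address> <DNS server IP>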
- Click Next.
- In the OpenShift version page, select the OpenShift version and click Next.
Important: The 4.16.x and 4.18.x versions are available in the 2.10.0 release. If you configured your rack with Data Foundation storage, then only 4.18.x is available. With Global Data Platform storage, you can see both options.
- Set up your image registry:
If you want to use your private image registry, you can install both Red Hat OpenShift and IBM Fusion HCI software from images that are maintained in a container registry that you manage. If you do not want to use your enterprise registry, skip this step to install Red Hat OpenShift Container Platform and IBM Fusion software by using the images that are hosted in the Red Hat and IBM entitled registries.
Important: Support is available for self-signed certificates, custom certificates, and certificates signed by a well-known certificate authority (CA). Also, the self-signed and ingress certificates must be RSA certificates.
- Choose whether to use the Public image registry or Private image registry option. A sketch for verifying your keys follows this step.
Public image registry: To use the public image registry, you need a pull secret and an entitlement key.
- Enter the OpenShift Pull secret. It is an authorization token that stores Docker credentials that you can use to access a registry. Your cluster needs this secret to access and pull OpenShift images from the quay.io container registry. If you do not have a pull secret, click Get Pull secret. It takes you to https://cloud.redhat.com/openshift/install/pull-secret.
- Enter the IBM entitlement key. It is a product code that is used to pull images from the IBM Entitled Registry. Your cluster needs this key to gain access to IBM Fusion images in the IBM Entitled Registry. If you do not have a key, click Get Entitlement key. It takes you to IBM Container Library. For steps to obtain the key, see Activating IBM Fusion HCI Software to be downloaded.
Private image registry: If you select the Private image registry option, first mirror the Red Hat and IBM Fusion images to your private registry. For more information about mirroring, see End-to-end mirroring images of IBM Fusion HCI and its services to the enterprise registry. You can choose to host the Red Hat and IBM Fusion images in separate repositories, or use the same repository.
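If you use the public image registry and want to confirm your credentials before you continue, you can test them from any workstation that has podman installed. This is a sketch, not part of the wizard: the file name pull-secret.json is an example, while the username cp with your entitlement key is the standard login for the IBM Entitled Registry at cp.icr.io.
# Check that the pull secret is valid JSON (requires jq)
jq . pull-secret.json
# Verify the entitlement key against the IBM Entitled Registry
podman login cp.icr.io --username cp --password <entitlement key>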
- Select the Connect through a proxy option in Proxy Settings to provide proxy server details for access to your network and image registries.
Enter the IP address and port of the proxy server in the Host address and Port fields. If your proxy requires authentication, enter a Username and Password.
- Click Next.
If any images are missing, an error message is displayed. Click the View details link to see the details of the missing images. Correct the missing image error and click Next. This error can also occur when the IBM Fusion or the OpenShift server is temporarily down.
- Configure network - OpenShift network and storage network.
The Network settings page shows the OpenShift network settings that are used to set up OpenShift, as well as the storage configuration for the internal storage network of IBM Fusion.
- Optional: Ingress certificate:
The Ingress certificate page allows you to optionally configure a custom certificate for OpenShift.
- Click the Configure now toggle to off to configure OpenShift with a self-signed certificate.
By default, the Configure now toggle is set to on. However, it is recommended that you upload a certificate that is provided by a Certificate Authority (CA). Applying a custom certificate during the installation ensures that the certificate is used immediately by OpenShift. If you do not apply a custom certificate during installation, you can do it later from OpenShift. For more information about how to apply a custom certificate from OpenShift, see Ingress Operator in OpenShift.
Important: Support is available for self-signed certificates, custom certificates, and certificates signed by a well-known certificate authority (CA). Also, the self-signed and ingress certificates must be RSA certificates. A quick way to check a certificate is shown after this note.
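For example, to confirm that a certificate file uses an RSA key before you upload it, you can inspect it with openssl. This is a sketch; mycert.crt is a placeholder file name:
# Print the certificate details and filter for the key algorithm
openssl x509 -in mycert.crt -noout -text | grep "Public Key Algorithm"
The output reports rsaEncryption for an RSA certificate.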
- Drag and drop to upload a .crt file that does not exceed 1 MB, or enter the details as text input.
- Enter the Private key and click Next.
The OpenShift initialization page is displayed.
- Click Finish.
- Cluster initialization and next steps.
The final step of this phase of the installation is to create a three-node Red Hat OpenShift cluster and install the IBM Fusion software. This minimal cluster is used in the next phase of the installation to orchestrate building out the cluster and configuring storage for IBM Fusion HCI.
- Monitor the progress of the OpenShift cluster.
If there are failures, collect logs to analyze the errors. Click the Learn more link to see details about the error and steps to troubleshoot the issue. After you fix the issue, click Retry. If you need to change any information that you entered in previous install steps, click Change install settings.
After the OpenShift cluster is successfully created, you can view the credentials for the OpenShift cluster.
- In the section, click Download Password and CoreOS key to download and save the kubeadmin password and CoreOS key.
Note down the credentials before the installation proceeds to the next phase because you cannot access the OpenShift cluster without these credentials. They are also essential for debugging, system recovery, and reimaging or resetting.
In the Downloading Password and CoreOS key window, confirm that you saved the Password and CoreOS key, and click Continue.
Note: For security reasons, after you download the password and CoreOS key, all the files that contain credentials, such as kickstart and inventory, are deleted. If there is a failure, an error message is displayed with details. Fix the error and click the Retry link. If retry does not solve the problem, see Delete files from provisioned node.
A window displays the progress of the deletion of all the sensitive service node files. After completion, the window closes automatically and returns to the final page of the installation.
- To retrieve the password from the downloaded file in the future, run the following commands:
- Go to the Downloads folder:
cd ~/Downloads
- List the files in the folder:
ls -ltr
- Extract the contents of the ocpkeys compressed file.
- Go to the auth folder:
cd clusterconfigs/auth
- Open kubeadmin-password in edit mode and copy the password:
vi kubeadmin-password
- Go to the extracted /install folder and save the CoreOS key file:
id_rsa
- In the installation folder, id_rsa is a CoreOS key that can be used to connect to CoreOS nodes. A connection sketch follows this list. The folder also includes an isfconfig subfolder where you can get the configuration data of the rack. Do not update these files, and keep them in a safe location for future reference.
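For example, assuming the downloaded archive is named ocpkey.zip (the name used later in this topic), that it extracts an install folder containing id_rsa, and that the nodes run Red Hat Enterprise Linux CoreOS with the default core user, a connection attempt looks like this sketch:
cd ~/Downloads
unzip ocpkey.zip
# SSH requires restrictive permissions on the private key
chmod 600 install/id_rsa
ssh -i install/id_rsa core@<node IP address>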
- In the Login section, copy the Username and Password and save them. Use them to log in to IBM Fusion HCI and Red Hat OpenShift. These credentials are configured as single sign-on between Red Hat OpenShift and IBM Fusion.
Note: After you save the password and download the ocpkey.zip file, the URL points you to the OCP address. If your URL does not automatically point to the OCP address, check the Network Preparation tab in your TDA installation worksheet to ensure that the DNS wildcard domain name is added to the DNS server. Test your connectivity with https://<DNS entry of the ingress endpoint IP address>, for example with the checks shown after this note.
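For example, assuming the standard OpenShift console route on the ingress wildcard domain (the host name shown is a common default, not specific to your environment), you can verify DNS resolution and reachability from a workstation:
# Confirm that the wildcard DNS entry resolves
nslookup console-openshift-console.apps.<cluster domain>
# Confirm that the endpoint responds over HTTPS (-k skips certificate validation)
curl -k https://console-openshift-console.apps.<cluster domain>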
- Click IBM Fusion to go to the IBM Fusion HCI user interface.
The login page of the IBM Fusion HCI console displays in a new browser tab.
- Enter the credentials that you noted down and click Log in to resume with the installation of other Fusion services.
If you encounter errors in the OpenShift installation wizard, see Installation issues. If you encounter errors in the Provisioning and software installation wizard, check the logs. For more information about accessing these logs, see Collecting log files of final installation.
Important: If a need arises to redeploy the IBM Fusion HCI stage 2 installation in the future, contact IBM Support.
What to do next
- Set up a non-kubeadmin cluster-admin user and disable the temporary kubeadmin user. Set up an identity provider for the OpenShift Container Platform cluster; a sketch of one common approach follows this item. For more information about user management, see User management.
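For example, one common approach on OpenShift is an htpasswd identity provider. The following sketch is based on standard OpenShift Container Platform steps; names such as users.htpasswd, htpass-secret, my_htpasswd_provider, and fusionadmin are placeholders:
# Create an htpasswd file with a new user (bcrypt hashing)
htpasswd -c -B -b users.htpasswd fusionadmin <password>
# Store the file as a secret in the openshift-config namespace
oc create secret generic htpass-secret --from-file=htpasswd=users.htpasswd -n openshift-config
# Register the identity provider in the cluster OAuth configuration
oc apply -f - <<EOF
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: my_htpasswd_provider
    mappingMethod: claim
    type: HTPasswd
    htpasswd:
      fileData:
        name: htpass-secret
EOF
# Grant the new user cluster-admin rights
oc adm policy add-cluster-role-to-user cluster-admin fusionadmin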
- To verify the installation, see Validating IBM Fusion HCI installation.
- To add more nodes and configure storage, see Scaling the cluster.
- For a high-availability cluster, go to the IBM Fusion HCI user interface and view the rack details. For the procedure to view rack details, see Adding expansion racks.
- Configure the Global Data Platform service or the Fusion Data Foundation service before you install additional services. IBM Fusion HCI does not support the coexistence of the Global Data Platform and Fusion Data Foundation services.
Important:
- The high-availability setup of IBM Fusion HCI does not support the Fusion Data Foundation storage option.
- If either Global Data Platform or Fusion Data Foundation is not configured, or is configured but not in a healthy state, then a few pods, such as logcollector, might remain in a pending state. The pods come into a running state automatically after the storage is configured and is in a healthy state.
For the procedure to install and work with services, see Fusion services.
- If your cluster is using Fusion Data Foundation for storage, ensure that you specify the appropriate storage class as shown in the following snippet:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: image-registry-storage
spec:
  storageClassName: ocs-storagecluster-cephfs
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: <desired-storage-size>
Replace desired-storage-size with the amount of storage that you want to allocate, for example, 500Gi or 1Ti. For Global Data Platform, you can proceed with the default storage class that is available on your cluster. A usage sketch for applying this claim follows.
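Assuming you save the snippet as image-registry-pvc.yaml, you can create the claim with oc. The openshift-image-registry namespace is where the image-registry-storage claim is typically created for the internal registry, but verify the target namespace for your environment:
# Create the persistent volume claim for the internal image registry
oc apply -f image-registry-pvc.yaml -n openshift-image-registry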
- Optionally, after the storage is available, configure the OpenShift Container Platform internal image registry. For the procedure to configure it, see the Changing the image registry’s management state and Configuring registry storage for bare metal and other manual installations sections of the OpenShift documentation.
Run the following command to make this registry accessible outside the cluster:
oc patch configs.imageregistry.operator.openshift.io/cluster --patch '{"spec":{"defaultRoute":true}}' --type=merge
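To confirm that the default route was created, you can query it with a standard OpenShift command; default-route in the openshift-image-registry namespace is the usual location for this route:
# Print the external host name of the registry route
oc get route default-route -n openshift-image-registry -o jsonpath='{.spec.host}'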
- If you want to install the Noobaa service, install Red Hat OpenShift Data Foundation and deploy it in Multicloud Object Gateway (MCG) only mode to provide object service. For the procedure, see Installing Data Foundation for MCG only.
- In a Metro-DR setup, run the following commands on both site1 and site2 when stage 2 on site 2 is complete. A verification sketch follows.
- Site 1
oc -n ibm-spectrum-scale-csi create configmap ibm-spectrum-scale-csi-config --from-literal=VAR_DRIVER_DISCOVER_CG_FILESET=DISABLED
- Site 2
Create the namespace first, then create the configmap:
oc new-project ibm-spectrum-scale-csi
oc -n ibm-spectrum-scale-csi create configmap ibm-spectrum-scale-csi-config --from-literal=VAR_DRIVER_DISCOVER_CG_FILESET=DISABLED
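To verify that the configmap was created on each site, you can read it back with a standard oc query; this is a check, not an extra installation step:
# Display the configmap and confirm the VAR_DRIVER_DISCOVER_CG_FILESET value
oc -n ibm-spectrum-scale-csi get configmap ibm-spectrum-scale-csi-config -o yaml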