Installing IBM Storage Fusion HCI System

IBM Storage Fusion HCI System ships from the factory with the bootstrapping software preinstalled. The IBM support representative completes the initial verification and physically connects the system to network and power. They then complete the network setup of the appliance, which connects the appliance to the data center network. Use this information to install IBM Storage Fusion HCI System. The installation configures all the default nodes (three controller nodes and three compute nodes); if you ordered more nodes, they are installed as well. Finally, the storage and backup software is installed on these nodes.

Before you begin

Ensure that DNS and DHCP are configured for all nodes in the appliance. For more information, see Setting up the DNS and DHCP for IBM Storage Fusion HCI System. Also, obtain the setup URL from your IBM Systems Support Representative (SSR).

Procedure

  1. Go to the URL provided by your IBM Systems Support Representative (SSR).
    The format of the URL is http://<host IP address>:3000/isfsetup.
    Note: If you close your browser at any time before the completion of stage 2, open the URL again to return to its last state.
    Important: Ensure that you do not add, change, or remove files during node provisioning.
  2. Accept the license.
    After you accept the license, a network precheck runs to validate whether the appliance is correctly configured for DHCP and DNS.
  3. Configure network:
    The Network precheck page runs an automatic network check against all nodes in the appliance. It checks whether each node has an assigned IP address and hostname.
    1. Check whether all your nodes are in the Connected state.
      In a high-availability cluster, the page displays all the nodes present in all three racks. If all your nodes are in the Connected state, go to step 4. Any node that does not pass the network check shows in the Disconnected state.
    2. If one or more nodes are in the Disconnected state, it means that either DHCP or DNS is not configured for that node. For more information about the prerequisite, see Setting up the DNS and DHCP for IBM Storage Fusion HCI System.
    3. Work with your network team and confirm that DNS and DHCP are configured for all nodes in the appliance. For a sample DNS verification that you can run from a workstation, see the sketch after this procedure.
    4. After you make all the changes to DHCP or DNS, click Restart precheck to initiate a new network check.
    5. Click Next.
  4. On the OpenShift version page, select the OpenShift version and click Next.
    Important: If you select version 4.15, then you cannot install the Data Cataloging service.
  5. Set up your image registry:
    If you want to use your private image registry, you can install both Red Hat OpenShift and IBM Storage Fusion HCI System software from images that are maintained in a container registry that you manage. If you do not want to use your enterprise registry, skip this step to install Red Hat OpenShift Container Platform and IBM Storage Fusion software by using the images that are hosted in the Red Hat and IBM entitled registries.
    Important: Support is available for self-signed certificates, custom certificates, and certificates signed by a well-known certificate authority (CA).
    1. Choose whether to use the Public image registry or Private image registry option.
      Public image registry
      To use the public image registry, you need a pull secret and an entitlement key.
      • Enter the Pull secret. It is an authorization token that stores Docker credentials that you can use to access a registry. Your cluster needs this secret to access and pull OpenShift images from the quay.io container registry. If you do not have a pull secret, click Get Pull secret. It takes you to https://cloud.redhat.com/openshift/install/pull-secret.
      • Enter the Entitlement key. It is a product code that is used to pull images from the IBM Entitlement Registry. Your cluster needs this key to gain access to IBM Storage Fusion images in the IBM Entitlement Registry. If you do not have a key, click Get Entitlement key. It takes you to IBM Container Library. For steps to obtain the key, see Activating IBM Storage Fusion HCI System Software to be downloaded.
      Private image registry
      If you select Private image registry, first mirror the Red Hat and IBM Storage Fusion images to your private registry. For more information about mirroring, see Mirroring your images to the enterprise registry. For a sample connectivity check against a private registry, see the sketch after this procedure.
      You can choose to host the Red Hat and IBM Storage Fusion images in separate repositories, or use the same repository.
      • Single repository
        Enter the following details for the enterprise registry.
        • Enter the URL of the private registry in the Repository path field.
          For example,
          https://<enterprise registry>:<custom port>/<mirrorpath>
          If you want to use a custom port, provide the custom port details.
        • Enter the Username for the private registry.
        • Enter the API key/ Password for the private registry.
        • In the OpenShift images repository path > Registry certificate (Optional) section, select whether you want to upload the certificate for your private OpenShift images registry. If you select the option, enter the following details:
          • Select whether the upload method is File upload or Text input.
          • If you select File upload, drag and drop your file in the specified box. The maximum file size is 1 MB and the supported file type is .crt. Then, enter the Private key (PEM format, unencrypted).
          • If you select Text input, enter the Custom certificate and Private key (PEM format, unencrypted).
      • Multiple repositories

        Enter the following details for both OpenShift images repository and IBM Storage Fusion images repository:

        • Enter the URL of the respective private image registry in the OpenShift images repository path or IBM Storage Fusion images repository path field.
          For example, URLs for OpenShift and IBM Storage Fusion images repository paths:
          https://<enterprise registry for IBM Storage Fusion>:<custom port>/<mirrorpath>
          or 
          https://<enterprise registry for Red Hat OpenShift>:<custom port>/<mirrorpath>
          See the following sample values:
          https://registryhost.com:443/fusion-mirror
          or
          https://registryhost.com:443/mirror-ocp

          If you use a port other than the default port (443), provide the custom port.

        • Enter the Username for the private registry. Make sure that this user has access to the private registry.
        • Enter the API key/ Password for the private registry.
        • In the OpenShift images repository path > Registry certificate (Optional) section, select whether you want to upload the certificate for your private OpenShift images registry. If you select the option, enter the following details:
          • Select whether the upload method is File upload or Text input.
          • If you select File upload, drag and drop your file in the specified box. The maximum file size is 1 MB and the supported file type is .crt. Then, enter the Private key (PEM format, unencrypted).
          • If you select Text input, enter the Custom certificate and Private key (PEM format, unencrypted).
          Note: The certificate and key are mandatory only for a registry that uses a self-signed certificate.
    2. If you need to use a proxy to connect to the external network, select the Connect through a proxy option and enter the URL for the proxy server in the Host address field. If your proxy requires authentication, enter the Username and Password.
    3. Click Next to go to the Network settings page.
  6. Configure network - OpenShift network and storage network.
    The Network settings page shows the OpenShift network settings to set up OpenShift, as well as the storage configuration for the internal storage network of IBM Storage Fusion.
    To enter values and customize network, see Network configuration.
  7. Optional: Ingress certificate:
    The Ingress certificate page allows you to optionally configure a custom certificate for OpenShift.
    1. Set the Configure now toggle to off to configure OpenShift with a self-signed certificate.
      By default, the Configure now toggle is set to on. It is recommended that you upload a certificate that is provided by a Certificate Authority (CA). Applying a custom certificate during the installation ensures that OpenShift uses the certificate immediately. If you do not apply a custom certificate during installation, you can do it later from OpenShift. For more information about how to apply a custom certificate from OpenShift, see Ingress Operator in OpenShift.
    2. Drag and drop to upload a .crt file that does not exceed 1 MB in size, or enter the details as text input.
    3. Enter the Private key and click Next. The OpenShift initialization page is displayed.
    4. Click Finish.
  8. Initialize the OpenShift cluster and complete the next steps.
    The final step of this phase of the installation is to create a three-node Red Hat OpenShift cluster. This minimal cluster is used in the next phase of the installation to orchestrate building out the cluster and configuring the Global Data Platform for IBM Storage Fusion HCI System.
    1. Monitor the progress of the OpenShift cluster.
      In case of failures, collect logs to analyze the errors. Click the Learn more link to see details about the error and steps to troubleshoot the issue. After you fix the issue, click Retry. If you need to change any information that you entered in previous installation steps, click Change install settings.
      After the OpenShift cluster is successfully created, you can view the credentials for the OpenShift cluster.
    2. Click Download Password and CoreOS key to download and save the kubeadmin password and CoreOS key.
      Note down the credentials before the installation proceeds with the next phase because you cannot access the OpenShift cluster without them. The credentials are also required for debugging and system recovery.
    3. To retrieve the password from the downloaded file in the future, run the following commands (see also the consolidated command sketch after this procedure):
      1. Go to the Downloads folder:
        cd ~/Downloads
      2. List the files in the folder:
        ls -ltr
      3. Extract the contents of the ocpkeys compressed file.
      4. Go to the auth folder:
        cd clusterconfigs/auth
      5. Open kubeadmin-password in edit mode and copy the password:
        vi kubeadmin-password
      6. Go to the extracted /install folder and save the CoreOS key:
        id_rsa
      7. In the installation folder, id_rsa is the CoreOS key that you can use to connect to the CoreOS nodes.
    4. In the Login section, copy the Username and Password and save them. Use them to log in to IBM Storage Fusion HCI System and Red Hat OpenShift. These credentials are configured as single sign-on between Red Hat OpenShift and IBM Storage Fusion.
      Note: After you save the password and download the ocpkey.zip file, the URL points you to the OCP address. If your URL does not automatically point to the OCP address, check the Network Preparation tab in your TDA installation worksheet to ensure that the DNS wildcard domain name is added to the DNS server. Test your connectivity with https://<DNS entry for the ingress endpoint IP address>.
    5. Click IBM Storage Fusion to go to the IBM Storage Fusion HCI System user interface.
      The login page of the IBM Storage Fusion HCI System console is displayed in a new browser tab.
    6. Enter the credentials that you noted down and click Log in to resume with the installation.
    Note: If you have any issues accessing the console, verify that your DNS server has a wildcard DNS A/AAAA or CNAME record for <cluster_name>.<base_domain> that refers to the OpenShift ingress. Test your connectivity with https://console-ibm-storage-fusion-ns.apps.<cluster_name>.<base_domain>.

    If you encounter errors in the OpenShift installation wizard, see Installation and upgrade issues. If you encounter errors in the Provisioning and software installation wizard, check the logs. For more information about accessing these logs, see Collecting log files of final installation.
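
    Example: verifying DNS records for the network precheck in step 3. The following commands are a minimal sketch that you can run from a workstation on the data center network, not a definitive check; the node hostname, node IP address, cluster name, and base domain are placeholders for the values in your network planning worksheet.
    # Forward lookup: the node hostname must resolve to its assigned IP address
    nslookup <node hostname>
    # Reverse lookup: the node IP address must resolve back to the same hostname
    nslookup <node IP address>
    # Wildcard ingress record: any name under the apps subdomain should resolve to the ingress IP address
    nslookup test.apps.<cluster_name>.<base_domain>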
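
    Example: checking access to a private image registry before step 5. This is a minimal sketch, assuming that podman is installed on your workstation and that your enterprise registry implements the Docker Registry v2 API; the registry host, port, credentials, and CA certificate file are placeholders.
    # Log in with the same credentials that you enter in the wizard
    podman login --username <username> <enterprise registry>:<custom port>
    # List the repositories in the registry to confirm that the mirrored images are present
    curl -u <username>:<password> https://<enterprise registry>:<custom port>/v2/_catalog
    # For a registry with a self-signed certificate, pass the CA certificate explicitly
    curl --cacert <registry CA certificate>.crt -u <username>:<password> https://<enterprise registry>:<custom port>/v2/_catalog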
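
    Example: retrieving and using the downloaded credentials from step 8. This is a minimal sketch, assuming that the downloaded archive is named ocpkey.zip, that the oc and ssh clients are installed, and that the API URL follows the standard OpenShift convention; the cluster name, base domain, node hostname, and password are placeholders.
    # Extract the downloaded archive
    cd ~/Downloads
    unzip ocpkey.zip
    # Display the kubeadmin password
    cat clusterconfigs/auth/kubeadmin-password
    # Log in to the OpenShift API with the kubeadmin credentials
    oc login https://api.<cluster_name>.<base_domain>:6443 -u kubeadmin -p <kubeadmin password>
    # Connect to a CoreOS node with the extracted CoreOS key
    ssh -i install/id_rsa core@<node hostname>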

What to do next

  • Set up a non-kubeadmin cluster-admin user and disable the temporary kubeadmin user. Set up an identity provider for the OpenShift Container Platform cluster. For more information about user management, see User management. A sample command sequence is shown at the end of this section.
  • To verify the installation, see Validating IBM Storage Fusion HCI System installation.
  • To add more nodes and configure storage, see Scaling the cluster.
  • For a high-availability cluster, go to the IBM Storage Fusion HCI System user interface and view the rack details. For the procedure to view rack details, see Adding expansion racks.
  • In 2.8.0, configure the Global Data Platform storage service. In 2.8.1, configure the Global Data Platform service or the Fusion Data Foundation service before you install additional services. IBM Storage Fusion HCI System does not support the coexistence of the Global Data Platform and Fusion Data Foundation services.
    Important:
    • The high-availability setup of IBM Storage Fusion HCI System does not support the Fusion Data Foundation storage option.
    • If Global Data Platform or Fusion Data Foundation is not configured, or is configured but not in a healthy state, a few pods, such as logcollector, might remain in a Pending state. The pods move to a Running state automatically after the storage is configured and healthy.

    For the procedure to install and work with services, see IBM Storage Fusion services.

  • Optionally, after the storage is available, configure the OpenShift Container Platform image registry. For the procedure, see the Changing the image registry’s management state and Configuring registry storage for bare metal and other manual installations sections of the OpenShift documentation.
    Run the following command to make the registry accessible outside the cluster. A sample verification command is shown at the end of this section.
    oc patch configs.imageregistry.operator.openshift.io/cluster --patch '{"spec":{"defaultRoute":true}}' --type=merge
  • In 2.8.1, if you want to install the NooBaa service, install Red Hat OpenShift Data Foundation and deploy it in Multicloud Object Gateway (MCG) only mode to provide the object service. For the procedure, see Data Foundation for MCG only.
  • In a Metro-DR setup, run the following commands on both Site 1 and Site 2 when stage 2 on Site 2 is complete:
    Site 1
    oc -n ibm-spectrum-scale-csi create configmap ibm-spectrum-scale-csi-config --from-literal=VAR_DRIVER_DISCOVER_CG_FILESET=DISABLED
    Site 2
    Create the namespace:
    oc new-project ibm-spectrum-scale-csi
    
    oc -n ibm-spectrum-scale-csi create configmap ibm-spectrum-scale-csi-config --from-literal=VAR_DRIVER_DISCOVER_CG_FILESET=DISABLED
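
The following commands are a minimal sketch of replacing the temporary kubeadmin user after you configure an identity provider, as described in the first item of this section; the username is a placeholder for a user from your identity provider.
# Grant the cluster-admin role to a user from your identity provider
oc adm policy add-cluster-role-to-user cluster-admin <username>
# Remove the temporary kubeadmin user by deleting its secret
oc delete secret kubeadmin -n kube-system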
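
After you run the oc patch command to expose the image registry, you can verify the route with the following sketch; the route name and namespace shown here follow the OpenShift defaults for the integrated image registry.
# Confirm that the default route is enabled in the image registry configuration
oc get configs.imageregistry.operator.openshift.io/cluster -o jsonpath='{.spec.defaultRoute}{"\n"}'
# Display the external hostname of the exposed registry route
oc get route default-route -n openshift-image-registry -o jsonpath='{.spec.host}{"\n"}'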