Connecting to remote IBM Storage Scale file systems

In IBM Storage Fusion, containerized workloads can access and store data on existing IBM Storage Scale clusters.

Before you begin

Firewall requirements
Important: For details of the firewall requirements for IBM Storage Scale container native storage access, see Network and firewall requirements.
Network planning and prerequisites
For network planning and prerequisites for remote mount support, see the Network planning topic.
  • The IBM Storage Fusion administrator must exchange information with the IBM Storage Scale administrator and provide instructions on how to configure file system sharing on the IBM Storage Scale side.
    • As an OpenShift® Container Platform administrator, confirm whether you configured IBM Storage Scale remote file system mounts during the installation of IBM Storage Fusion. To check, verify that the IBM Storage Fusion user interface displays a Remote file systems option in the left navigation.

      To determine whether the appropriate remote mount service is installed, check the Services page.
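      Alternatively, as a rough command-line check, you can verify that the IBM Storage Scale container native and CSI pods are running. The namespace names in the following commands are common defaults and are assumptions; they might differ in your installation:

        oc get pods -n ibm-spectrum-scale
        oc get pods -n ibm-spectrum-scale-csi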

    • Important: The IBM Storage Scale administrator must configure the remote file system on the IBM Storage Scale Container Storage Interface Driver storage cluster by following steps 1 to 6 of Storage cluster configuration for Container Storage Interface (CSI) in a non-AWS environment (such as on-premises).
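      The authoritative steps are in that topic. As an illustrative sketch only, the storage cluster preparation typically includes enabling quota and fileset-related settings on the file system that is to be remotely mounted; fs1 is a placeholder file system name:

        /usr/lpp/mmfs/bin/mmchfs fs1 -Q yes
        /usr/lpp/mmfs/bin/mmchfs fs1 --filesetdf
        /usr/lpp/mmfs/bin/mmchconfig enforceFilesetQuotaOnRoot=yes -i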
    • The IBM Storage Scale administrator in turn provides you with the necessary information about the cluster, file system, and encryption algorithms. For instructions that you can share, click the Instructions link in the Add IBM Storage Scale file system slide-out pane. For the Instructions link, see Storage cluster.
      Note: The details include host name, port, Cluster ID, IBM Storage Scale certificate, user name, and password.
    • You can enter the cluster access credentials that the IBM Storage Scale administrator provides either in the IBM Storage Fusion user interface or in a custom resource (CR) by using YAML.
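      For the CR route, one minimal sketch is to store the GUI credentials in a Kubernetes Secret that the CR can reference. The secret name, namespace, and credential values in this example are placeholders, and the exact resource that consumes the secret depends on your IBM Storage Fusion version:

        oc create secret generic scale-gui-credentials \
          --from-literal=username=csi-storage-gui-user \
          --from-literal=password=passw0rd \
          -n ibm-spectrum-scale-csi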
    • Run the following precheck commands on the IBM Storage Scale GUI node:
      • Check whether the user that does the remote mount configuration has correct access rights:
        /usr/lpp/mmfs/gui/cli/lsuser | grep ContainerOperator | grep CsiAdmin

        By default, the user passwords expire after 90 days. If the security policy of your organization permits it, use the -e 1 option on the mkuser command to create a user with a password that does not expire. Also, create a unique GUI user for each OpenShift Container Platform cluster on the same remote Scale cluster.

        If the user is not present, create it with both the ContainerOperator and CsiAdmin roles.
        /usr/lpp/mmfs/gui/cli/mkuser csi-storage-gui-user -p passw0rd -g CsiAdmin,ContainerOperator
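
        If your security policy permits a non-expiring password, as noted earlier, the same command can be run with the -e 1 option; the user name and password are examples only:
        /usr/lpp/mmfs/gui/cli/mkuser csi-storage-gui-user -p passw0rd -g CsiAdmin,ContainerOperator -e 1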
      • Run the following command to ensure that all the nodes of your cluster are active:
        /usr/lpp/mmfs/bin/mmgetstate -a
        Sample output:
        Node number  Node name  GPFS state
                  1  host1      active
                  2  host2      active
                  3  host3      active

        If any of the nodes is down, analyze the issue and start the node.
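        For example, to start GPFS on a node that is down (host2 is a placeholder node name):
        /usr/lpp/mmfs/bin/mmstartup -N host2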

      • Run the following command to validate whether the gpfsgui service is running:
        systemctl status gpfsgui
        If the GUI service has issues, stop and start the service:
        
        systemctl stop gpfsgui.service
        systemctl start gpfsgui.service

        Log in to the GUI of the IBM Storage Scale cluster by using the validated user name and password.
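
        As an optional check that the credentials and REST endpoint work, you can also query the GUI REST API from a workstation that can reach the GUI node. The host name, user name, and password are placeholders, and the -k option skips certificate verification:
        curl -k -u csi-storage-gui-user:passw0rd https://scale-gui.example.com:443/scalemgmt/v2/cluster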

      • To get more information about the cluster, run the /usr/lpp/mmfs/bin/mmlscluster command.
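
        The output includes the cluster name, the cluster ID, and the node names and IP addresses that are needed later in this procedure. Illustrative, trimmed output only; your values differ:

        GPFS cluster information
        ========================
          GPFS cluster name:   scalecluster.example.com
          GPFS cluster id:     1234567890123456789

         Node  Daemon node name  IP address  Admin node name  Designation
        -------------------------------------------------------------------
           1   host1             10.0.0.1    host1            quorum-manager
           2   host2             10.0.0.2    host2            quorum-manager
           3   host3             10.0.0.3    host3            quorum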

About this task

All the IBM Storage Scale commands are run by the IBM Storage Scale administrator on the remote IBM Storage Scale cluster.
/usr/lpp/mmfs/bin/xxx

Procedure

  1. Click Storage > Remote file systems in the IBM Storage Fusion user interface.
  2. Click Add file system.
    The Add IBM Storage Scale file system slide-out pane is displayed.
  3. Enter the following details to add one or more file systems.
    You can add one or more file systems from an existing cluster or choose to connect to a new cluster.
    Note: The OpenShift Container Platform administrator must obtain all of this information from the IBM Storage Scale administrator. The CLI commands that are referenced in this topic are run by the IBM Storage Scale administrator on the remote IBM Storage Scale cluster.
    • Connect to a new cluster option:
      1. Enter the following cluster access credentials for IBM Storage Scale:
        Host name
        The host name is the GUI REST API endpoint. It must have a user with the ContainerOperator and CsiAdmin roles. You can enter a maximum of three hosts.
        Port
        The port is the GUI port, that is, the REST API endpoint port. By default, the port is 443. If the remote cluster uses a different port, provide it.
        Cluster ID
        Enter the ID of the cluster. Contact your IBM Storage Scale administrator to get the cluster ID.
        IBM Storage Scale certificate
        Enter the IBM Storage Scale root CA certificate. If you provide the root CA value, IBM Storage Fusion verifies the certificate chain and host name of the IBM Storage Scale cluster. If you do not provide the root CA value, IBM Storage Fusion skips the verification step.
        User name
        Enter the GUI user name of the cluster.
        Note: Ensure that this user has both container operator permissions and IBM Storage Scale Container Storage Interface Driver administrator permissions.
        Password
        Enter the password for the GUI user name.
      2. Define up to three nodes on the IBM Storage Scale cluster to establish the connection. These nodes are contact nodes, which are used to communicate with the remote storage cluster. The projects and data are distributed between these nodes.
        Run the following command to get node details:
        /usr/lpp/mmfs/bin/mmlscluster
      3. Enter the name and IP address for node 1, node 2, and node 3.
        Note: The name value must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character.
      4. Enter a Subnet value. It is mandatory for remote file systems whenever Global Data Platform is the local storage. This field takes an IP address in CIDR notation, for example, 100.8.9.10/32.
        Note: You cannot edit the Subnet value after you add it.
      5. Enter the following details to identify the file system or systems:
        • File system name - The name helps to identify the file system or systems that can be accessed through IBM Storage Fusion. Run the following command to get the file system name:
          /usr/lpp/mmfs/bin/mmlsconfig

          The file system name must be an existing file system on the IBM Storage Scale cluster that you want to remote mount.

        • Storage class name - Enter a unique storage class name to add to this file system. The storage class must not already exist in the Red Hat® OpenShift cluster.
        Note: Click Add to add more than one file system from this cluster. You can click the remove icon to delete a file system.
    • If you choose Select an existing connected cluster, select the host name from the Select a cluster drop-down list. The cluster details, such as the host name and the name of the connected file system, are displayed. Enter the following details to identify the file system:
      • File system name - The name helps to identify the file system or systems that can be accessed through IBM Storage Fusion. Run the following command to get the file system name:
        /usr/lpp/mmfs/bin/mmlsconfig

        The file system name must be an existing file system on the IBM Storage Scale cluster that you want to remote mount.

      • Storage class name - Enter a unique storage class name to add to this file system. The storage class must not already exist in the Red Hat OpenShift cluster.

      Click Add to include more than one file system from this cluster. You can click the remove icon to delete a file system.

  4. Click Add file system.
    An information message informs you that the file system is connecting. After the connection is complete, a success message confirms that the file system is connected. The new connection is added to the Remote file systems list.
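
    As a rough command-line verification, you can confirm that the storage class you entered in step 3 now exists in the OpenShift Container Platform cluster; remote-scale-sc is a placeholder storage class name:

    oc get storageclass
    oc get storageclass remote-scale-sc

    A quick functional test is to create a PersistentVolumeClaim that specifies the new storage class and confirm that it binds.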