Connecting to remote IBM Storage Scale file systems

In IBM Storage Fusion, containerized workloads can access and store data on existing IBM Storage Scale clusters.

Before you begin

Firewall requirements
  • For details of IBM Spectrum Scale container native storage access firewall requirements, see Network and firewall requirements.
  • To make the Amazon Web Services storage cluster file system accessible to IBM Storage Fusion in a ROSA environment, see AWS storage cluster.
Supported networking method
The supported networking method for IBM Spectrum Scale container native storage access is a CNI plug-in for the core pods. As an administrator, configure a suitable CNI plug-in. For more information about the configuration, see Container Network Interface (CNI) configuration.
  • The IBM Storage Fusion admin must exchange information with the IBM Storage Scale admin and provide instructions on how the IBM Storage Scale admin must configure file system sharing on the IBM Storage Scale side.
    • As an OpenShift® Container Platform administrator, confirm whether you configured IBM Storage Scale remote file system mounts during the installation of IBM Storage Fusion. Check whether the IBM Storage Fusion user interface displays a Remote file systems menu option in the left navigation. Alternatively, run the following command to check whether the Global Data Platform value is set to true:
      oc describe spectrumfusion/spectrumfusion -n <fusion_namespace>
      A sample output that confirms the Global Data Platform value:
      
      Spec:
        Data Protection:
          Enable:  false
        Global Data Platform:
          Enable:  true
        Open Shift Data Foundation:
          Auto Upgrade:  false
          Enable:        false
        License:
          Accept:  true

      If the value of Global Data Platform: Enable is false, enable it from the IBM Storage Fusion user interface. For the procedure, see Global Data Platform.
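You can also read the flag directly from the command output. The following is a sketch that pipes in the sample describe output shown above for illustration; on a live cluster, pipe in the output of the oc describe command instead:

```shell
# Extract the Global Data Platform "Enable" flag from the describe output.
# On a live cluster, replace the printf with:
#   oc describe spectrumfusion/spectrumfusion -n <fusion_namespace>
printf 'Spec:\n  Data Protection:\n    Enable:  false\n  Global Data Platform:\n    Enable:  true\n' |
awk '/Global Data Platform:/ {f=1; next} f {print $2; exit}'
# Prints "true" when Global Data Platform is enabled.
```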

    • The IBM Storage Scale administrator must configure the remote file system on the IBM Storage Scale Container Storage Interface Driver storage cluster by following steps 1 through 6 of Storage cluster configuration for Container Storage Interface (CSI) in a non-AWS environment (such as on-premises).
    • The IBM Storage Scale administrator in turn provides you with the necessary information about the cluster, file system, and encryption algorithms. For instructions on what to share, click the Instructions link in the Add IBM Spectrum Scale file system slide-out pane. For the Instructions link, see Storage cluster.
      Note: The details include the host name, port, cluster ID, IBM Storage Scale certificate, user name, and password.
    • You can enter the cluster access credentials that the IBM Storage Scale admin provides into the IBM Storage Fusion user interface, or place them into a custom resource (CR) by using YAML.
    • Run the following precheck commands:
      • Check whether the user that does the remote mount configuration has correct access rights:
        /usr/lpp/mmfs/gui/cli/lsuser | grep ContainerOperator | grep CsiAdmin

        If the user is not present, create it with both the ContainerOperator and CsiAdmin roles:
        /usr/lpp/mmfs/gui/cli/mkuser csi-storage-gui-user -p passw0rd -g CsiAdmin,ContainerOperator

        By default, user passwords expire after 90 days. If the security policy of your organization permits it, use the -e 1 option on the mkuser command to create a user with a password that does not expire. Also, create a unique GUI user for each OpenShift Container Platform cluster that connects to the same remote Scale cluster.
      • Run the following command to ensure that all the nodes of your cluster are active:
        /usr/lpp/mmfs/bin/mmgetstate -a
        Sample output:
        Node number  Node name  GPFS state
        -----------------------------------
                  1  host1      active
                  2  host2      active
                  3  host3      active

        If any node is down, analyze the issue and start the node (for example, by using the /usr/lpp/mmfs/bin/mmstartup -N <node_name> command).
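To spot such nodes without scanning the table by eye, you can filter the state column. The following is a sketch against sample output; the host names are illustrative, and host2 is shown as down purely for demonstration:

```shell
# Print the names of nodes whose GPFS state is not "active".
# On the storage cluster, replace the printf with:
#   /usr/lpp/mmfs/bin/mmgetstate -a
printf '%s\n' \
  ' Node number  Node name  GPFS state' \
  '           1  host1      active' \
  '           2  host2      down' \
  '           3  host3      active' |
awk 'NR>1 && NF>=3 && $3 != "active" {print $2}'
# Prints "host2", the node that needs attention.
```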

      • Run the following command to validate whether the gpfsgui service is running:
        systemctl status gpfsgui
        If the GUI service has issues, stop and start the service:
        
        systemctl stop gpfsgui.service
        systemctl start gpfsgui.service

        Log in to the GUI of the Scale cluster by using the validated user name and password.

      • To get more information about the cluster, run the /usr/lpp/mmfs/bin/mmlscluster command.

About this task

All the IBM Storage Scale commands are run by the IBM Storage Scale administrator on the remote IBM Storage Scale cluster. The commands are located in the /usr/lpp/mmfs/bin directory.

Procedure

  1. Click the Remote file systems menu in the IBM Storage Fusion user interface.
  2. Click Add file system.
    The Add IBM Spectrum Scale file system slide-out pane is displayed.
  3. Enter the following details to add one or more file systems.
    You can add one or more file systems of the existing cluster or choose to connect to a new cluster.
    Note: The OpenShift Container Platform admin needs the IBM Storage Scale admin to provide all of this information. The CLI commands that are referenced in this topic are run by the IBM Storage Scale admin on the remote IBM Storage Scale cluster.
    • Connect to a new cluster option:
      1. Enter the following cluster access credentials for IBM Storage Scale:
        • Host name
          The host name is the GUI REST API endpoint. The cluster must have a GUI user with the ContainerOperator and CsiAdmin roles.
        • Port
          The port is the GUI port, that is, the REST API endpoint port. By default, the port is 443. If the remote cluster uses a different port, provide that value.
        • Cluster ID
          Enter the ID of the cluster. Contact your IBM Storage Scale administrator to get the cluster ID.
        • IBM Spectrum Scale certificate
          Enter the IBM Storage Scale root CA. If you provide the root CA value, IBM Storage Fusion verifies the certificate chain and host name of the IBM Storage Scale cluster. If you do not provide the root CA value, IBM Storage Fusion skips the verification step.
        • User name
          Enter the GUI user name of the cluster.
          Note: Ensure that this user has both container operator permissions and IBM Storage Scale Container Storage Interface Driver administrator permissions.
        • Password
          Enter the password for the GUI user name.
      2. Select two nodes on the IBM Storage Scale cluster to establish the connection. These two contact nodes are used to communicate with the remote storage cluster. The projects and data are distributed between the two nodes.
        Run the following command to get node details:
        /usr/lpp/mmfs/bin/mmlscluster
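To pull just the daemon node names and IP addresses out of the mmlscluster listing, a small filter like the following can help. This is a sketch: the sample rows and addresses are illustrative, and the column layout can vary by release:

```shell
# Print each daemon node name with its IP address from `mmlscluster` output.
# On the storage cluster, replace the printf with:
#   /usr/lpp/mmfs/bin/mmlscluster
printf '%s\n' \
  ' Node  Daemon node name  IP address  Admin node name  Designation' \
  '-----------------------------------------------------------------' \
  '   1   host1             10.0.0.1    host1            quorum-manager' \
  '   2   host2             10.0.0.2    host2            quorum' |
awk '/Daemon node name/ {f=1; next} f && /^ *-+$/ {next} f {print $2, $3}'
# Prints "host1 10.0.0.1" and "host2 10.0.0.2" for this sample.
```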
      3. Enter the name and IP address for node 1 and node 2.
        Note: The name value must consist of lowercase alphanumeric characters, '-', or '.', and must start and end with an alphanumeric character.
      4. Enter the following details to identify the file system or systems:
        • File system name - The name helps to identify the file system or systems that can be accessed through IBM Storage Fusion. Run the following command to get the file system name:
          /usr/lpp/mmfs/bin/mmlsconfig

          The file system name must be an existing file system on the IBM Storage Scale cluster that you want to remote mount.
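In mmlsconfig output, the file systems are listed at the end as /dev/<name> entries, so the names can be extracted as in the following sketch. The fs1 and fs2 names and the cluster name are illustrative:

```shell
# Extract the file system names from the tail of `mmlsconfig` output.
# On the storage cluster, replace the printf with:
#   /usr/lpp/mmfs/bin/mmlsconfig
printf '%s\n' \
  'File systems in cluster cluster1.example.com:' \
  '---------------------------------------------' \
  '/dev/fs1' \
  '/dev/fs2' |
sed -n 's|^/dev/||p'
# Prints one file system name per line: fs1, fs2.
```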

        • Storage class name - Enter a unique storage class name to add to this file system. The storage class must not already exist in the Red Hat® OpenShift cluster.
        • Select Encryption algorithm and Encryption Tenant. First, configure encryption to get these values. For more information about how to configure encryption from IBM Storage Fusion user interface, see Configuring encryption for Global Data Platform storage.
        Note: Click Add to add more than one file system from this cluster. You can click the remove icon to delete a file system.
    • If you choose Select an existing connect cluster, then select the host name from the Select a cluster drop-down list. The cluster details, such as the host name, encryption tenant ID, and name of the connected file system, are displayed. Enter the following details to identify the file system:
      • File system name - The name helps to identify the file system or systems that can be accessed through IBM Storage Fusion. Run the following command to get the file system name:
        /usr/lpp/mmfs/bin/mmlsconfig

        The file system name must be an existing file system on the IBM Storage Scale cluster that you want to remote mount.

      • Storage class name - Enter a unique storage class name to add to this file system. The storage class must not already exist in the Red Hat OpenShift cluster.
      • Select Encryption algorithm and Encryption Tenant. First, configure encryption to get these values. For more information about how to configure encryption from IBM Storage Fusion user interface, see Configuring encryption for Global Data Platform storage.

      Click Add to include more than one file system from this cluster. You can click the remove icon to delete a file system.

  4. Click Add file system.
    A success message confirms that the file system is connected. The new connection is added to the Remote file system list.