Connecting to remote IBM Storage Scale file systems
In IBM Storage Fusion, you can access and store data on existing IBM Storage Scale clusters from a containerized workload.
Before you begin
- Firewall requirements
- Important: For details of the IBM Storage Scale container native storage access firewall requirements, see Network and firewall requirements.
- Network planning and prerequisites
- For network planning and prerequisites for remote mount support, see the Network planning topic.
- The IBM Storage Fusion admin must exchange information with the IBM Storage Scale admin, and provide instructions on how the IBM Storage Scale admin must configure file system sharing on the IBM Storage Scale side.
- As an OpenShift® Container Platform administrator, confirm whether you configured IBM Storage Scale remote file system mounts during the installation of IBM Storage Fusion. You can check whether the IBM Storage Fusion user interface displays a Remote file systems menu option in the left navigation. To determine whether the appropriate remote mount service is installed, check the Services page.
- Important: The IBM Storage Scale administrator must configure the remote file system on the IBM Storage Scale Container Storage Interface Driver storage cluster by following steps 1 to 6 of Storage cluster configuration for Container Storage Interface (CSI) on non-AWS environment (like on-premises).
- The IBM Storage Scale administrator in turn provides you with the necessary information about the cluster, file system, and encryption algorithms. For instructions to share, click the Instructions link in the Add IBM Storage Scale file system slide-out pane. For the Instructions link, see Storage cluster. Note: The details include the host name, port, Cluster ID, IBM Storage Scale certificate, user name, and password.
- You can enter the cluster access credentials that are provided by the IBM Storage Scale admin into the IBM Storage Fusion user interface, or place them into a custom resource (CR) by using YAML, as in the sketch that follows.
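The following is a minimal sketch of the YAML approach, based on the RemoteCluster custom resource from IBM Storage Scale container native. The namespace, resource names, host, and credentials are illustrative assumptions; verify the exact kind, API version, and field names against the documentation for your release.
# Create a secret that holds the GUI credentials shared by the IBM Storage Scale admin
# (the secret name, user name, password, and namespace are placeholders)
oc create secret generic remote-scale-gui-secret \
  --from-literal=username='csi-storage-gui-user' \
  --from-literal=password='passw0rd' \
  -n ibm-spectrum-scale

# Reference that secret from a remote cluster CR (field names are assumptions)
apiVersion: scale.spectrum.ibm.com/v1beta1
kind: RemoteCluster
metadata:
  name: remote-scale-cluster
  namespace: ibm-spectrum-scale
spec:
  gui:
    host: scale-gui.example.com   # GUI host name shared by the Scale admin
    secretName: remote-scale-gui-secret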
- Run the following precheck commands on the IBM Storage Scale GUI node:
- Check whether the user that does the remote mount configuration has the correct access rights:
/usr/lpp/mmfs/gui/cli/lsuser | grep ContainerOperator | grep CsiAdmin
By default, user passwords expire after 90 days. If the security policy of your organization permits it, use the -e 1 option on the mkuser command to create a user with a password that does not expire. Also, create a unique GUI user for each OpenShift Container Platform cluster on the same remote Scale cluster. If the user is not present, create one with both the ContainerOperator and CsiAdmin roles:
/usr/lpp/mmfs/gui/cli/mkuser csi-storage-gui-user -p passw0rd -g CsiAdmin,ContainerOperator
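For example, if your security policy permits a password that does not expire, the same user can be created with the -e 1 option, and the role assignment can be verified afterward (the user name and password are the examples from the previous command):
/usr/lpp/mmfs/gui/cli/mkuser csi-storage-gui-user -p passw0rd -g CsiAdmin,ContainerOperator -e 1
/usr/lpp/mmfs/gui/cli/lsuser | grep csi-storage-gui-user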
- Run the following command to ensure that all the nodes of your cluster are active:
/usr/lpp/mmfs/bin/mmgetstate -a
Sample output:
 Node number  Node name  GPFS state
-------------------------------------
      1       host1      active
      2       host2      active
      3       host3      active
If any of the nodes is down, analyze the issue and start the node, as in the example that follows.
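After the underlying issue is resolved, a down node is typically brought back with the mmstartup command scoped to that node, and its state rechecked; host2 is an illustrative node name:
/usr/lpp/mmfs/bin/mmstartup -N host2
/usr/lpp/mmfs/bin/mmgetstate -N host2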
- Run the following command to validate whether the gpfsgui service is running:
systemctl status gpfsgui
If the GUI service has issues, stop and start the service:
systemctl stop gpfsgui.service
systemctl start gpfsgui.service
Log in to the GUI of the Scale cluster by using the validated user name and password.
- To get more information about the cluster, run the /usr/lpp/mmfs/bin/mmlscluster command, as in the example that follows.
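For example, the administrator can print the cluster name and Cluster ID, and list each file system with its default mount point, to help collect the details that must be shared (mmlsfs all -T is a standard Scale administration command; the exact output columns vary by release):
/usr/lpp/mmfs/bin/mmlscluster
/usr/lpp/mmfs/bin/mmlsfs all -T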
About this task
All the IBM Storage Scale commands are run by the IBM Storage Scale administrator on the remote IBM Storage Scale cluster:
/usr/lpp/mmfs/bin/xxx