Running the ConnectToExternalGPFSServer script package
Run the script package to connect the IBM Storage Scale Client to the external IBM Storage Scale Server.
Before you begin
Complete the steps in Exchanging key information.
Procedure
- Click Patterns > Virtual System Instances.
- Select the virtual system instance from the list of available virtual system instances.
- In the details of the selected instance, expand the Virtual machine perspective section.
- Expand the node of the virtual machine on which you want to run the script package.
- In the Script Packages section, select Execute now for the ConnectToExternalGPFSServer script package. A dialog prompts you for the script package parameters and the root credentials for the virtual machine:
where:
SERVER_CLUSTER_NAME = <server cluster name>, such as Primary_Cluster.purescale.raleigh.ibm.com
NODE_LIST = <node1_ipaddr,node2_ipaddr, ...>
FILE_SYSTEM = <file system name that the client mounts>
FILE_SET = <the fileset to be used by the client>
LINK_DIR = <the link directory to create on the client>
User name = root
Password = <root password>
- To find the SERVER_CLUSTER_NAME value, on any of the server nodes, run the following command:
/usr/lpp/mmfs/bin/mmlscluster
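The mmlscluster command returns output similar to the following sketch; use the value shown in the GPFS cluster name field. The cluster ID and command paths shown here are illustrative:

GPFS cluster information
========================
  GPFS cluster name:         Primary_Cluster.purescale.raleigh.ibm.com
  GPFS cluster id:           1234567890123456789
  GPFS UID domain:           Primary_Cluster.purescale.raleigh.ibm.com
  Remote shell command:      /usr/bin/ssh
  Remote file copy command:  /usr/bin/scp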
- The FILE_SYSTEM and the FILE_SET values must be the same as the ones used in Exchanging key information.
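If you are not sure which filesets are defined on the server file system, one way to list them is to run the mmlsfileset command on a server node (the file system name here is an example):

/usr/lpp/mmfs/bin/mmlsfileset fileSystemName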
- Wait for the script package to complete. A green square with a check mark indicates success. The result is written to the remote_std_out.log file and is similar to the following example:
cluster name: Primary_Cluster.purescale.raleigh.ibm.com
node list: 172.17.69.91
file system: fileSystemName
mountpoint: /gpfs/fileSystemName
fileset: testExtFSet1b
link directory: /testExtFSet1bLink
/usr/lpp/mmfs/bin/mmremotecluster add Primary_Cluster.purescale.raleigh.ibm.com -n 172.17.69.91 -k /var/mmfs/ssl/Primary_Cluster.purescale.raleigh.ibm.com_id_rsa.pub
RC is: 0
Message(1) is:
Message(2) is: mmremotecluster: Command successfully completed
/usr/lpp/mmfs/bin/mmremotefs add fileSystemName -f fileSystemName -C Primary_Cluster.purescale.raleigh.ibm.com -A yes -T /gpfs/fileSystemName
RC is: 0
Message(1) is:
Message(2) is:
/usr/lpp/mmfs/bin/mmmount fileSystemName
RC is: 0
Message(1) is: Tue Jul 1 21:42:10 UTC 2014: mmmount: Mounting file systems ...
Message(2) is:
ln -s /gpfs/fileSystemName/testExtFSet1b /testExtFSet1bLink
RC is: 0
Message(1) is:
Message(2) is:
/usr/lpp/mmfs/bin/mmremotecluster show
RC is: 0
Message(1) is: Cluster name:  Primary_Cluster.purescale.raleigh.ibm.com
Contact nodes: 172.17.69.91
SHA digest:    9a366c1a33794b8035192bf832389a397cd9330a
File systems:  fileSystemName (fileSystemName)
Message(2) is:
/usr/lpp/mmfs/bin/mmremotefs show
RC is: 0
Message(1) is: Local Name      Remote Name     Cluster name                               Mount Point           Mount Options  Automount  Drive  Priority
fileSystemName  fileSystemName  Primary_Cluster.purescale.raleigh.ibm.com  /gpfs/fileSystemName  rw             yes        -      0
Message(2) is:
Verify that the output of the last two commands shows the remote server cluster and remote file system.
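If you need to confirm the connection again later, one way is to rerun the same display commands on the client and check that the file system is mounted; the file system name and link directory here match the example output above:

/usr/lpp/mmfs/bin/mmremotecluster show
/usr/lpp/mmfs/bin/mmremotefs show
/usr/lpp/mmfs/bin/mmlsmount fileSystemName
ls /testExtFSet1bLink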