spectrumscale command
Installs and configures GPFS; adds nodes to a cluster; deploys and configures protocols and performance monitoring tools; configures call home and file audit logging; and upgrades GPFS and protocols.
Synopsis
spectrumscale setup [ -i SSHIdentity ] [ -s ServerIP ]
[ -st { "ss","SS","ess","ESS","ece","ECE" } ]
[ --storesecret ]
or
spectrumscale node add [-g] [-q] [-m] [-a] [-n] [-e] [-c] [-p] [-so] Node
or
spectrumscale node load [-g] [-q] [-m] [-a] [-n] [-e] [-c] [-p] [-so] NodeFile
or
spectrumscale node delete [-f] Node
or
spectrumscale node clear [-f]
or
spectrumscale node list
or
spectrumscale config gpfs [-l] [ -c ClusterName ]
[ -p { default | randomio | UserDefinedProfilePath } ]
[ -r RemoteShell ] [ -rc RemoteFileCopy ]
[ -e EphemeralPortRange ] [ -pn PORT_NUMBER ]
[ -g GPLBIN_DIR ]
or
spectrumscale config protocols [-l] [ -f FileSystem ] [ -i Interfaces ]
[ -m MountPoint ] [ -e ExportIPPool ]
or
spectrumscale config hdfs new -n Name -nn NameNodes -dn DataNodes
-f FileSystem -d DataDir
or
spectrumscale config hdfs import -l LocalDir
or
spectrumscale config hdfs add -n Name [ -nn NameNodes ] [ -dn DataNodes ]
or
spectrumscale config hdfs list
or
spectrumscale config hdfs clear [ -f ] [ -n Name ]
or
spectrumscale config perfmon [ -r { on | off } ] [ -d { on | off } ] [-l]
or
spectrumscale config clear { gpfs | protocols }
or
spectrumscale config update
or
spectrumscale config populate --node Node
or
spectrumscale config setuptype find
or
spectrumscale nsd add -p Primary [ -s Secondary ] [ -fs FileSystem ] [ -po Pool ]
[ -u { dataOnly | dataAndMetadata | metadataOnly | descOnly | localCache } ]
[ -fg FailureGroup ] [ --no-check ]
PrimaryDevice [ PrimaryDevice ... ]
or
spectrumscale nsd delete NSD
or
spectrumscale nsd modify [ -n Name ]
[ -u { dataOnly | dataAndMetadata | metadataOnly | descOnly | localCache } ]
[ -po Pool ] [ -fs FileSystem ] [ -fg FailureGroup ]
NSD
or
spectrumscale nsd servers { add | delete | setprimary } -s NSDServer NSDs
or
spectrumscale nsd clear [-f]
or
spectrumscale nsd list
or
spectrumscale filesystem modify [ -B { 64K | 128K | 256K | 512K | 1M | 2M | 4M | 8M | 16M } ]
[ -m MountPoint]
[ -r { 1 | 2 | 3 } ] [ -mr { 1 | 2 | 3 } ]
[ -MR { 1 | 2 | 3} ] [ -R { 1 | 2 | 3 } ]
[ --metadata_block_size {64K | 128K | 256K | 512K | 1M | 2M | 4M | 8M | 16M}]
[ --fileauditloggingenable
[ --degradedperformance] [ --degradedperformancedisable ] ]
[ --fileauditloggingdisable] [--logfileset LogFileset]
[ --retention RetentionPeriod] FileSystem
or
spectrumscale filesystem define [-fs FileSystem] -vs VdiskSet [--mmcrfs MmcrfsParams]
or
spectrumscale filesystem list
or
spectrumscale fileauditlogging enable
or
spectrumscale fileauditlogging disable
or
spectrumscale fileauditlogging list
or
spectrumscale recoverygroup define [ -rg RGName ] [ -nc ScaleOutNodeClassName ] --node Node
or
spectrumscale recoverygroup undefine RGName
or
spectrumscale recoverygroup change [ -rg NewRGName ] ExistingRGName
or
spectrumscale recoverygroup list
or
spectrumscale recoverygroup clear [-f]
or

spectrumscale remote_mount [-h]
{ config -client-gui-username ClientGuiUsername
-client-gui-password ClientGuiPassword
-client-gui-hostname ClientGuiHostname
-storage-gui-username StorageGuiUsername
-storage-gui-password StorageGuiPassword
-storage-gui-hostname StorageGuiHostname
-remotemount-path RemoteMountFilesystemPath
-client-filesystem ClientFilesystemName
-storage-filesystem StorageFilesystemName }
or
spectrumscale remote_mount { precheck | grant | revoke | clear | list }
or
spectrumscale vdiskset define [ -vs VdiskSet ] [ -rg RGName ]
-code { 3WayReplication | 4WayReplication | 4+2P | 4+3P | 8+2P | 8+3P }
-bs { 256K | 512K | 1M | 2M | 4M | 8M | 16M }
-ss VdiskSetSize
or
spectrumscale vdiskset undefine VdiskSet
or
spectrumscale vdiskset clear [-f]
or
spectrumscale vdiskset list
or
spectrumscale callhome enable
or
spectrumscale callhome disable
or
spectrumscale callhome config -n CustName -i CustID -e CustEmail -cn CustCountry
[ -s ProxyServerIP ] [ -pt ProxyServerPort ]
[ -u ProxyServerUserName ] [ -pw ProxyServerPassword ] [-a]
or
spectrumscale callhome clear { --all | -n | -i | -e | -cn | -s | -pt | -u | -pw }
or
spectrumscale callhome schedule { -d | -w } [-c]
or
spectrumscale callhome list
or
spectrumscale enable { s3 | nfs | smb | hdfs }
or
spectrumscale disable { s3 | nfs | smb | hdfs }
or
spectrumscale install [-pr] [-po] [-s SecretKey] [-f] [--skip]
or
spectrumscale deploy [-pr] [-po] [-s SecretKey] [-f] [--skip]
or
spectrumscale upgrade precheck [ --skip ]
or
spectrumscale upgrade config offline [ -N Node | all ] [ --clear ]
or
spectrumscale upgrade config exclude [ -N Node ] [ --clear ]
or
spectrumscale upgrade config list
or
spectrumscale upgrade config clear
or
spectrumscale upgrade config workloadprompt [ -N { Node1,Node2,... | all } ]
[ --list ] [ --clear ]
or
spectrumscale upgrade run [ --skip ]
or
spectrumscale upgrade postcheck [ --skip ]
or
spectrumscale upgrade showversions
or
spectrumscale scaleadmd enable
or
spectrumscale nodeid define [ --cert CertificatePath ] [ --key PrivateKeyPath ] [ --chain CAChainPath ]
Availability
Available on all IBM Storage Scale editions.
Description
- Install and configure GPFS.
- Add GPFS nodes to an existing cluster.
- Deploy and configure SMB, NFS, S3, HDFS, and performance monitoring tools on top of GPFS.
- Enable and configure the file audit logging function.
- Enable and configure the call home function.
- Configure recovery groups and vdisk sets, and define file systems for an IBM Storage Scale Erasure Code Edition environment.
- Upgrade IBM Storage Scale components.
- Perform offline upgrade of nodes that have services that are down or stopped.
- Exclude one or more nodes from the current upgrade run.
- Resume an upgrade run after a failure.
- The installation toolkit requires the following packages:
- Python 3.6
- net-tools
- Ansible® 2.9.15
- TCP traffic from the nodes should be allowed through the firewall to communicate with the installation toolkit on port 10080 for package distribution.
- The nodes themselves have external Internet access or local repository replicas that can be reached by the nodes to install necessary packages (dependency installation). For more information, see the Repository setup section of the Installation prerequisites topic in IBM Storage Scale: Concepts, Planning, and Installation Guide.
- To install protocols, there must be a GPFS cluster running a minimum version of 4.1.1.0 with CCR enabled.
- The node that you plan to run the installation toolkit from must be able to execute remote shell commands on any other node in the cluster without the use of a password and without producing any extraneous messages.
- Any node that is set up to be a call home node must have network connectivity to IBM® Support to upload data.
- Check whether passwordless SSH is set up between all admin nodes and all the other nodes in the cluster. If this check fails, a fatal error occurs.
- Check whether passwordless SSH is set up between all protocol nodes and all the other nodes in the cluster. If this check fails, a warning is displayed.
- Check whether passwordless SSH is set up between all protocol nodes in the cluster. If this check fails, a fatal error occurs.
Parameters
- setup
- Configures the installer node in the cluster definition file. The IP address passed must be that of the node from which the installation toolkit will be run. The SSH key passed must be the key that the installation toolkit uses for passwordless SSH onto all other nodes. This is the first command you run to set up IBM Storage Scale. This option accepts the following arguments:
- -i SSHIdentity
- Adds the path to the SSH identity file into the configuration.
- -s ServerIP
- Adds the control node IP into the configuration.
- -st {"ss","SS","ess","ESS","ece","ECE"}
- Specifies the setup type. The allowed values are ess, ece, and ss. The default value is ss.
- If you are using the installation toolkit in a cluster containing ESS, specify the setup type as ess.
- If you are using the installation toolkit in an IBM Storage Scale Erasure Code Edition cluster, specify the setup type as ece.
- The setup type ss specifies an IBM Storage Scale cluster containing no ESS nodes.
Regardless of the mode, the installation toolkit contains safeguards to prevent changing of a tuned ESS configuration. While adding a node to the installation toolkit, it looks at whether the node is currently in an existing cluster and, if so, it checks the node class. ESS I/O server nodes are detected based upon existence within the gss_ppc64 node class. ESS EMS nodes are detected based upon existence within the ems node class. ESS I/O server nodes are not allowed to be added to the installation toolkit and must be managed by the ESS toolsets contained in the EMS node. A single ESS EMS node is allowed to be added to the installation toolkit. Doing so adds this node as an admin node of the installation toolkit functions. While the installation toolkit runs from a non-ESS node, it uses the designated admin node (an EMS node in this case) to run mm commands on the cluster as a whole. Once in the ESS mode, the following assumptions and restrictions apply:
- File audit logging is not configurable using the installation toolkit.
- Call home is not configurable using the installation toolkit.
- EMS node will be the only admin node designated in the installation toolkit. This designation will automatically occur when the EMS node is added.
- EMS node will be the only GUI node allowed in the installation toolkit. Additional existing GUI nodes can exist but they cannot be added.
- EMS node will be the only performance monitoring collector node allowed within the installation toolkit. Additional existing collectors can exist but they cannot be added.
- EMS node cannot be designated as an NSD or a protocol node.
- I/O server nodes cannot be added to the installation toolkit. These nodes must be managed outside the installation toolkit by ESS toolsets contained in the EMS node.
- NSDs and file systems managed by the I/O server nodes cannot be added to the installation toolkit.
- The cluster name is set upon addition of the EMS node to the installation toolkit. It is determined by mmlscluster being run from the EMS node.
- EMS node must have passwordless SSH set up to all nodes, including any protocol, NSD, and client nodes being managed by the installation toolkit.
- EMS node can be a different architecture or operating system than the protocol, NSD, and client nodes being managed by the installation toolkit.
- If the config populate function is used, an EMS node of a different architecture or operating system than the protocol, NSD, and client nodes can be used.
- If the config populate function is used, a mix of architectures within the non-ESS nodes being added or currently within the cluster cannot be used. To handle this case, use the installation toolkit separately for each architecture grouping. Run the installation toolkit from a node with similar architecture to add the required nodes. Add the EMS node and use the setup type ess.
- --storesecret
- Disables the prompts for the encryption secret. Generates a random encryption secret and stores it in the cluster definition file. CAUTION: If you use this option, passwords will not be securely stored. You can override the encryption secret stored in the cluster definition file by using the -s SecretKey option with the ./spectrumscale install or the ./spectrumscale deploy command.
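For example, assuming a hypothetical control node IP address and SSH identity file path, the setup phase might be started as follows:
./spectrumscale setup -s 192.0.2.10 -i /root/.ssh/id_rsa -st ss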
- node
- Used to add, remove, or list nodes in the cluster definition file. This command only interacts with this
configuration file and does not directly configure nodes in the cluster itself. The nodes that have
an entry in the cluster definition file will be used during
install, deploy, or upgrade. This option accepts the following arguments:
- add Node
- Adds the specified node and configures it according to the following arguments:
- -g
- Adds GPFS Graphical User Interface servers to the cluster definition file.
- -q
- Configures the node as a quorum node.
- -m
- Configures the node as a manager node.
- -a
- Configures the node as an admin node.
- -n
- Specifies the node as NSD.
- -e
- Specifies the node as the EMS node of an ESS system. This node is automatically specified as the admin node.
- -c
- Specifies the node as a call home node.
- -p
- Configures the node as a protocol node.
- -so
- Specifies the node as a scale-out node. The setup type must be ece to add this type of node in the cluster definition.
- Node
- Specifies the node name.
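For example, assuming a hypothetical host name, a node might be added as a quorum, manager, and admin node with the following command:
./spectrumscale node add node1.example.com -q -m -a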
- load NodeFile
- Loads the specified file containing a list of nodes, separated per line; adds the nodes
specified in the file and configures them according to the following:
- -g
- Sets the nodes as GPFS Graphical User Interface server.
- -q
- Sets the nodes as quorum nodes.
- -m
- Sets the nodes as manager nodes.
- -a
- Sets the nodes as admin nodes.
- -n
- Sets the nodes as NSD servers.
- -e
- Sets the node as the EMS node of an ESS system. This node is automatically specified as the admin node.
- -c
- Sets the nodes as call home nodes.
- -p
- Sets the nodes as protocol nodes.
- -so
- Sets the nodes as scale-out nodes. The setup type must be ece to add this type of node in the cluster definition.
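For example, assuming a hypothetical file /tmp/nsd_nodes.txt that lists one host name per line, the listed nodes might be added as NSD servers with the following command:
./spectrumscale node load -n /tmp/nsd_nodes.txt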
- delete Node
- Removes the specified node from the configuration. The following option is accepted:
- -f
- Forces the action without manual confirmation.
- clear
- Clears the current node configuration. The following option is accepted:
- -f
- Forces the action without manual confirmation.
- list
- Lists the nodes configured in your environment.
- config
- Used to set properties in the cluster definition file that
will be used during install, deploy, or upgrade. This command only interacts with this configuration
file and does not directly configure these properties on the GPFS cluster. This option accepts the following arguments:
- gpfs
- Sets any of the following GPFS-specific
properties to be used during GPFS
installation and configuration:
- -l
- Lists the current settings in the configuration.
- -c ClusterName
- Specifies the cluster name.
- -p
- Specifies the profile to be set on cluster creation. The following values are accepted:
- default
- Specifies that the GpfsProtocolDefaults profile is to be used.
- randomio
- Specifies that the GpfsProtocolRandomIO profile is to be used.
- UserDefinedProfilePath
- Specifies a path to the user-defined profile of attributes to be applied. The profile file
specifies GPFS configuration parameters with values different than the documented defaults. A user-defined profile must have the following properties:
- It must not begin with the string gpfs.
- It must have the .profile suffix.
- It must be located in the /var/mmfs/etc/ directory. Note: The installation toolkit places the user-defined profile in /var/mmfs/etc/ from the path that you specify with the ./spectrumscale config gpfs -p PathToUserDefinedProfile command. If the gpfs.base RPM is installed on the node, a sample user-defined profile is available at this path: /usr/lpp/mmfs/samples/sample.profile. For more information, see mmcrcluster command and mmchconfig command.
- -r RemoteShell
- Specifies the remote shell binary to be used by GPFS. If no remote shell is specified in the cluster definition file, /usr/bin/ssh will be used as the default.
- -rc RemoteFileCopy
- Specifies the remote file copy binary to be used by GPFS. If no remote file copy binary is specified in the cluster definition file, /usr/bin/scp will be used as the default.
- -e EphemeralPortRange
- Specifies an ephemeral port range to be set on all GPFS nodes. If no port range is specified in the cluster definition, 60000-61000 is used as the default. For information about the ephemeral port range, see the topic about GPFS port usage in Miscellaneous advanced administration topics.
- -pn PORT_NUMBER
- Specifies the port to use for both tscTcpPort (daemon) and mmsdrservPort (sdr/cc); this corresponds to the equivalent mmcrcluster option.
- -g GPLBIN_DIR
- Specifies the prebuilt generated gplbin package repository directory name.
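For example, assuming a hypothetical cluster name, the GPFS cluster name, remote shell and remote file copy binaries, and ephemeral port range might be set as follows:
./spectrumscale config gpfs -c democluster1 -r /usr/bin/ssh -rc /usr/bin/scp -e 60000-61000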
- protocols
- Provides details of the GPFS environment
that will be used during protocol deployment, according to the following options:
- -l
- Lists the current settings in the configuration.
- -f FileSystem
- Specifies the file system.
- -m MountPoint
- Specifies the shared file system mount point or path.
- -e ExportIPPool
- Specifies a comma-separated list of additional CES export IP addresses to configure on the cluster. In the CES interface mode, you must specify the CES IP addresses with the installation toolkit in the Classless Inter-Domain Routing (CIDR) notation. In CIDR notation, the IP address is followed by a forward slash and the prefix length, for example, IPAddress/PrefixLength:
- IPv6: 2001:0DB8::/32
- IPv4: 192.0.2.0/20
- When you are using IPv6 addresses, the prefix length must be in the range 1 - 124.
- When you are using IPv4 addresses, the prefix length must be in the range 1 - 30.
- -i Interfaces
- Specifies a comma-separated list of network interfaces.
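For example, assuming hypothetical file system, mount point, and CES IP values, the protocol configuration might be set as follows:
./spectrumscale config protocols -f cesSharedRoot -m /ibm/cesSharedRoot -e 192.0.2.20/24,192.0.2.21/24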
- hdfs
- Sets Hadoop Distributed File System (HDFS) related configuration in the cluster definition file:
- new
- Defines configuration for a new HDFS cluster.
- -n Name
- Specifies the name of the new HDFS cluster. Note: The name of the HDFS cluster must not contain any upper case letters.
- -nn NameNodes
- Specifies the name node host names in a comma-separated list.
- -dn DataNodes
- Specifies the data node host names in a comma-separated list.
- -f FileSystem
- Specifies the IBM Storage Scale file system name.
- -d DataDir
- Specifies the IBM
Storage Scale data directory name.
Note: You must specify only the directory name. This directory name is the directory under the root file system specified with the -f FileSystem option.
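For example, assuming hypothetical cluster, node, file system, and directory names, a new HDFS cluster might be defined as follows:
./spectrumscale config hdfs new -n hdfscluster1 -nn namenode1,namenode2 -dn datanode1,datanode2 -f fs1 -d hdfsdata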
- import
- Imports an existing HDFS configuration.
- -l LocalDir
- Specifies the local directory that contains the existing HDFS configuration.
- add
- Adds name nodes or data nodes in an existing HDFS configuration.
- -n Name
- Specifies the name of the existing HDFS cluster.
- -nn NameNodes
- Specifies the name node host names in a comma-separated list.
- -dn DataNodes
- Specifies the data node host names in a comma-separated list.
- list
- Lists the current HDFS configuration.
- clear
- Clears the current HDFS configuration from the cluster definition file.
- -f
- Forces action without manual confirmation.
- -n Name
- Specifies the name of the existing HDFS cluster.
- perfmon
- Sets performance monitoring specific properties to be used during installation and configuration:
- -r on | off
- Specifies whether the installation toolkit can reconfigure performance monitoring. Note: When set to on, reconfiguration might move the collector to different nodes and it might reset sensor data. Custom sensors and data might be erased.
- -d on | off
- Specifies whether performance monitoring should be disabled (not installed). Note: When set to on, the pmcollector and pmsensor packages are not installed or upgraded. Existing sensor or collector state remains as is.
- -l
- Lists the current settings in the configuration.
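For example, to allow the installation toolkit to reconfigure performance monitoring and to list the resulting settings, commands of the following form might be used:
./spectrumscale config perfmon -r on
./spectrumscale config perfmon -l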
- clear
- Removes specified properties from the cluster definition file:
- gpfs
- Removes GPFS related properties from the
cluster definition file:
- -c
- Clears the GPFS cluster name.
- -p
- Clears the GPFS profile set in the cluster definition file and sets it to default.
- -r RemoteShell
- Clears the absolute path name of the remote shell command GPFS uses for node communication. For example, /usr/bin/ssh.
- -rc RemoteFileCopy
- Clears the absolute path name of the remote copy command GPFS uses when transferring files between nodes. For example, /usr/bin/scp.
- -e EphemeralPortRange
- Clears the GPFS daemon communication port range.
- -pn PORT_NUMBER
- Clears the port setting used for both tscTcpPort (daemon) and mmsdrservPort (sdr/cc).
- -g GPLBIN_DIR
- Clears the prebuilt generated gplbin package repository directory.
- --all
- Clears all settings in the cluster definition file.
- protocols
- Removes protocols related properties from the cluster definition file:
- -f
- Clears the shared file system name.
- -m
- Clears the shared file system mount point or path.
- -e
- Clears a comma-separated list of additional CES export IPs to configure on the cluster.
- --all
- Clears all settings in the cluster definition file.
- update
- Updates operating system and CPU architecture fields in the cluster definition file. This update is automatically done if you run the upgrade precheck while upgrading to IBM Storage Scale release 4.2.2 or later.
- populate
- Populates the cluster definition file with the current
cluster state. In the following upgrade scenarios, you might need to update the cluster definition file with the current cluster state:
- A manually created cluster in which you want to use the installation toolkit to perform administration tasks on the cluster such as adding protocols, adding nodes, and upgrading.
- A cluster created using the installation toolkit in which manual changes were done without using the toolkit wherein you want to synchronize the installation toolkit with the updated cluster configuration that was performed manually.
- --node Node
- Specifies an existing node in the cluster that is used to query the cluster information. If you want to use the spectrumscale config populate command to retrieve data from a cluster containing ESS, you must specify the EMS node with the --node flag.
- setuptype
-
- find
- Finds the setup type from ansible/ibm-spectrum-scale-install-infra/vars/scale_clusterdefinition.json. The default setup type is ss.
- nsd
- Used to add, remove, or list NSDs, as well as add file systems in the
cluster definition file. This command only interacts with this
configuration file and does not directly configure NSDs on the cluster itself. The NSDs that have an
entry in the cluster definition file will be used during install. This option accepts the following arguments:
- add
- Adds an NSD to the configuration, according to the following specifications:
- -p Primary
- Specifies the primary NSD server name.
- -s Secondary
- Specifies the secondary NSD server names. You can use a comma-separated list to specify up to seven secondary NSD servers.
- -fs FileSystem
- Specifies the file system to which the NSD is assigned.
- -po Pool
- Specifies the file system pool.
- -u
- Specifies NSD usage. The following values are accepted:
- dataOnly
- dataAndMetadata
- metadataOnly
- descOnly
- localCache
- -fg FailureGroup
- Specifies the failure group to which the NSD belongs.
- --no-check
- Specifies not to check for the device on the server.
- PrimaryDevice
- Specifies the device name on the primary NSD server.
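For example, assuming hypothetical NSD server host names and device paths, two NSDs might be added to a file system as follows:
./spectrumscale nsd add -p nsd1.example.com -s nsd2.example.com -fs fs1 -u dataAndMetadata -fg 1 /dev/dm-1 /dev/dm-2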
- delete NSD
- Removes the specified NSD from the configuration.
- modify NSD
- Modifies the NSD parameters on the specified NSD, according to the following options:
- -n Name
- Specifies the name.
- -u
- The following values are accepted:
- dataOnly
- dataAndMetadata
- metadataOnly
- descOnly
- localCache
- -po Pool
- Specifies the file system pool.
- -fs FileSystem
- Specifies the file system.
- -fg FailureGroup
- Specifies the failure group.
- servers
- Adds and removes servers, and sets the primary server for NSDs.
- add -s NSDServer NSDs
- Adds an NSD server for the specified NSDs.
- delete -s NSDServer NSDs
- Removes an NSD server from the server list for the specified NSDs.
- setprimary -s NSDServer NSDs
- Sets a new primary server for the specified NSDs.
- clear
- Clears the current NSD configuration. The following option is accepted:
- -f
- Forces the action without manual confirmation.
- list
- Lists the NSDs configured in your environment.
- remote_mount
- Used to list, check, configure, grant, revoke, and clear remote mounts. The IBM Storage Scale GUI nodes need to be configured on both the owning cluster and the accessing cluster.
- -h
- Displays the help message.
- config
- Configures a remote mount.
- precheck
- Performs the pre-remote mount checks to validate the user input and environment before the grant option.
- grant
- Remote mounts a file system to another cluster.
- revoke
- Revokes a remote mounted file system from another cluster.
- clear
- Clears the remote mount configuration.
- list
- Lists the configuration of a remote mount.
- filesystem
- Used to list or modify file systems in the cluster definition file. This command only interacts with this
configuration file and does not directly modify file systems on the cluster itself. To modify the
properties of a file system in the cluster definition file, the
file system must first be added with spectrumscale nsd. This option
accepts the following arguments:
- modify
- Modifies the file system attributes. This option accepts the following arguments:
- -B
- Specifies the file system block size. This argument accepts the following values: 64K, 128K, 256K, 512K, 1M, 2M, 4M, 8M, 16M.
- -m MountPoint
- Specifies the mount point.
- -r
- Specifies the number of copies of each data block for a file. This argument accepts the following values: 1, 2, 3.
- -mr
- Specifies the number of copies of inodes and directories. This argument accepts the following values: 1, 2, 3.
- -MR
- Specifies the default maximum number of copies of inodes and directories. This argument accepts the following values: 1, 2, 3.
- -R
- Specifies the default maximum number of copies of each data block for a file. This argument accepts the following values: 1, 2, 3.
- --metadata_block_size
- Specifies the file system metadata block size. This argument accepts the following values: 64K, 128K, 256K, 512K, 1M, 2M, 4M, 8M, 16M.
- --fileauditloggingenable
- Enables file audit logging on the specified file system.
- --fileauditloggingdisable
- Disables file audit logging on the specified file system.
- --logfileset LogFileset
- Specifies the log fileset name for file audit logging. The default value is .audit_log.
- --retention RetentionPeriod
- Specifies the file audit logging retention period in number of days. The default value is 365 days.
- FileSystem
- Specifies the file system to be modified.
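For example, assuming a hypothetical file system named fs1, its block size, mount point, and data replica count might be modified as follows:
./spectrumscale filesystem modify -B 4M -m /ibm/fs1 -r 2 fs1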
- define
- Adds file system attributes in an IBM Storage Scale Erasure Code Edition environment. The setup type must be ece to use this option. Note: If you are planning to deploy protocols in the IBM Storage Scale Erasure Code Edition cluster, you must define a CES shared root file system before initiating the installation toolkit deployment phase by using the following command: ./spectrumscale config protocols -f FileSystem -m MountPoint
- -fs FileSystem
- Specifies the file system to which the vdisk set is to be assigned.
- -vs VdiskSet
- Specifies the vdisk sets to be affected by a file system operation.
- --mmcrfs MmcrfsParams
- Specifies that all command line parameters following the --mmcrfs flag must be passed to the IBM Storage Scale mmcrfs command and they must not be interpreted by the mmvdisk command.
- list
- Lists the file systems configured in your environment.
- fileauditlogging
- Enable, disable, or list the file audit logging configuration in the cluster definition file.
- enable
- Enables the file audit logging configuration in the cluster definition file.
- disable
- Disables the file audit logging configuration in the cluster definition file.
- list
- Lists the file audit logging configuration in the cluster definition file.
- recoverygroup
- Define, undefine, change, list, or clear recovery group related configuration in the cluster definition file in an IBM Storage Scale Erasure Code Edition environment. The setup type must be ece to use this option.
- define
- Defines recovery groups in the cluster definition file.
- -rg RgName
- Sets the name of the recovery group.
- -nc ScaleOutNodeClassName
- Sets the name of the scale-out node class.
- --node Node
- Specifies the scale-out node within an existing IBM Storage Scale Erasure Code Edition cluster for the server node class.
- undefine
- Undefines specified recovery group from the cluster definition file.
- RgName
- The name of the recovery group that is to be undefined.
- change
- Changes the recovery group name.
- ExistingRgName
- The name of the recovery group that is to be modified.
- -rg NewRgName
- The new name of the recovery group.
- clear
- Clears the current recovery group configuration from the cluster definition file.
- -f
- Forces operation without manual confirmation.
- list
- Lists the current recovery group configuration in the cluster definition file.
- vdiskset
- Define, undefine, list, or clear vdisk set related configuration in the cluster definition file in an IBM Storage Scale Erasure Code Edition environment. The setup type must be ece to use this option.
- define
- Defines vdisk sets in the cluster definition file.
- -vs VdiskSet
- Sets the name of the vdisk set.
- -rg RgName
- Specifies an existing recovery group with which the defined vdisk set is to be associated.
- -code
- Defines the erasure code. This argument accepts the following values: 3WayReplication, 4WayReplication, 4+2P, 4+3P, 8+2P, and 8+3P.
- -bs
- Specifies the block size for a vdisk set definition. This argument accepts the following values: 256K, 512K, 1M, 2M, 4M, 8M, and 16M.
- -ss VdiskSetSize
- Defines the vdisk set size in percentage of the available storage space.
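For example, assuming hypothetical vdisk set and recovery group names, a vdisk set using 80% of the available storage space might be defined as follows:
./spectrumscale vdiskset define -vs vs1 -rg rg1 -code 8+2P -bs 8M -ss 80%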
- undefine
- Undefines specified vdisk set from the cluster definition file.
- VdiskSet
- The name of the vdisk set that is to be undefined.
- clear
- Clears the current vdisk set configuration from the cluster definition file.
- -f
- Forces operation without manual confirmation.
- list
- Lists the current vdisk set configuration in the cluster definition file.
- callhome
- Used to enable, disable, configure, schedule, or list call home configuration in the cluster definition file.
- enable
- Enables call home in the cluster definition file.
- disable
- Disables call home in the cluster definition file. The call home function is enabled by default in the cluster definition file. If you disable it in the cluster definition file, the call home packages are installed on the nodes but no configuration is done by the installation toolkit.
- config
- Configures call home settings in the cluster definition file.
- -n CustName
- Specifies the customer name for the call home configuration.
- -i CustID
- Specifies the customer ID for the call home configuration.
- -e CustEmail
- Specifies the customer email address for the call home configuration.
- -cn CustCountry
- Specifies the customer country code for the call home configuration.
- -s ProxyServerIP
- Specifies the proxy server IP address for the call home configuration. This is an optional
parameter.
If you are specifying the proxy server IP address, the proxy server port must also be specified.
- -pt ProxyServerPort
- Specifies the proxy server port for the call home configuration. This is an optional
parameter.
If you are specifying the proxy server port, the proxy server IP address must also be specified.
- -u ProxyServerUserName
- Specifies the proxy server user name for the call home configuration. This is an optional parameter.
- -pw ProxyServerPassword
- Specifies the proxy server password for the call home configuration. This is an optional
parameter.
If you do not specify a password on the command line, you are prompted for a password.
- -a
- When you specify the call home configuration settings by using the ./spectrumscale callhome config command, you are prompted to accept or decline the support information collection message. Use the -a parameter to accept that message in advance. This is an optional parameter. If you do not specify the -a parameter on the command line, you are prompted to accept or decline the support information collection message.
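For example, assuming hypothetical customer details, call home might be configured and the support information collection message accepted in advance as follows:
./spectrumscale callhome config -n ExampleCustomer -i 123456 -e admin@example.com -cn US -a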
- clear
- Clears the specified call home settings from the cluster definition file.
- --all
- Clears all call home settings from the cluster definition file.
- -n
- Clears the customer name from the call home configuration in the cluster definition file.
- -i
- Clears the customer ID from the call home configuration in the cluster definition file.
- -e
- Clears the customer email address from the call home configuration in the cluster definition file.
- -cn
- Clears the customer country code from the call home configuration in the cluster definition file.
- -s
- Clears the proxy server IP address from the call home configuration in the cluster definition file.
- -pt
- Clears the proxy server port from the call home configuration in the cluster definition file.
- -u
- Clears the proxy server user name from the call home configuration in the cluster definition file.
- -pw
- Clears the proxy server password from the call home configuration in the cluster definition file.
- schedule
- Specifies the call home data collection schedule in the cluster definition file.
By default, the call home data collection is enabled in the cluster definition file and it is set for a daily and a weekly schedule. Daily data uploads are by default executed at 02:xx AM each day. Weekly data uploads are by default executed at 03:xx AM each Sunday. In both cases, xx is a random number from 00 to 59. You can use the spectrumscale callhome schedule command to set either a daily or a weekly call home data collection schedule.
- -d
- Specifies a daily call home data collection schedule.
If call home data collection is scheduled daily, data uploads are executed at 02:xx AM each day. xx is a random number from 00 to 59.
- -w
- Specifies a weekly call home data collection schedule.
If call home data collection is scheduled weekly, data uploads are executed at 03:xx AM each Sunday. xx is a random number from 00 to 59.
- -c
- Clears the call home data collection schedule in the cluster definition file.
The call home configuration can still be applied without a schedule being set. In that case, you must either manually run and upload data collections, or set the call home schedule to the desired interval at a later time by using one of the following commands:
- Daily: ./spectrumscale callhome schedule -d
- Weekly: ./spectrumscale callhome schedule -w
- Both daily and weekly: ./spectrumscale callhome schedule -d -w
- list
- Lists the call home configuration specified in the cluster definition file.
- enable
- Used to enable S3, SMB, HDFS, or NFS in the cluster definition file. This command only interacts with this
configuration file and does not directly enable any protocols on the GPFS cluster itself. The default configuration is that all
protocols are disabled. If a protocol is enabled in the cluster definition file, this protocol will be enabled on the GPFS cluster during deploy. This option accepts
the following arguments:
- s3
- Specifies the protocols to be enabled. Enable one protocol, or enable multiple protocols separated by a space.
- nfs
- NFS
- hdfs
- HDFS
- smb
- SMB
- disable
- Used to disable S3, SMB, HDFS, or NFS in the cluster definition file. This command only interacts with this
configuration file and does not directly disable any protocols on the GPFS cluster itself. The default configuration is that all
protocols are disabled, so this command is only necessary if a protocol has previously been enabled
in the cluster definition file, but is no longer required. Note: Disabling a protocol in the cluster definition will not disable this protocol on the GPFS cluster during a deploy; it merely means that this protocol will not be enabled during a deploy.
This option accepts the following arguments:
- s3
- Specifies the protocols to be disabled. Disable one protocol, or disable multiple protocols separated by a space.
- nfs
- NFS
- hdfs
- HDFS
- smb
- SMB
- install
- Installs GPFS, creates a GPFS cluster, creates NSDs, and adds nodes to an existing GPFS cluster. The installation toolkit uses the environment details in the cluster definition file to perform these tasks. If all configuration steps have been completed, this option can be run with no arguments (pre-install and post-install checks are performed automatically). For a dry run, the following arguments are accepted:
- -pr
- Performs a pre-install environment check.
- -po
- Performs a post-install environment check.
- -s SecretKey
- Specifies the secret key on the command line required to decrypt sensitive data in the cluster definition file and suppresses the prompt for the secret
key.
If you do not specify a secret key, the installation toolkit uses the encryption secret stored in the cluster definition file for decryption. By using this option, you can override the encryption secret stored in the cluster definition file that is generated by using the --storesecret option with the ./spectrumscale setup command.
- -f
- Forces action without manual confirmation.
- --skip
- Bypasses the specified precheck and suppresses prompts. For example, specifying --skip ssh bypasses the SSH connectivity check.
- deploy
- Deploys protocols on an existing GPFS cluster. The installation toolkit uses
the environment details in the cluster definition file to
perform these tasks. If all configuration steps are completed, this option can be run with no
arguments (and pre-deploy and post-deploy checks are performed automatically). However, the secret
key is prompted for unless it is passed as an argument by using the -s
flag.
For a dry run, the following arguments are accepted:
- -pr
- Performs a pre-deploy environment check.
- -po
- Performs a post-deploy environment check.
- -s SecretKey
- Specifies the secret key on the command line required to decrypt sensitive data in the cluster
definition file and suppresses the prompt for the secret key.
If you do not specify a secret key, the installation toolkit uses the encryption secret stored in the cluster definition file for decryption. By using this option, you can override the encryption secret stored in the cluster definition file that is generated by using the --storesecret option with the ./spectrumscale setup command.
- -f
- Forces action without manual confirmation.
- --skip
- Bypasses the specified precheck and suppresses prompts. For example, specifying --skip ssh bypasses the SSH connectivity check.
- upgrade
- Performs upgrade procedure, upgrade precheck, upgrade postcheck, and upgrade related
configuration to add nodes as offline, or exclude nodes from the upgrade run.
- precheck
- Performs health checks on the cluster prior to the upgrade. During the upgrade precheck, the installation toolkit displays messages in a number of scenarios, including:
- If there are AFM relationships in the cluster. All file systems that have associated AFM primary or cache filesets are listed, and a reference to the procedure for stopping and restarting replication is provided.
- config
- Manage upgrade related configuration in the cluster definition file.
- offline
- Designates specified nodes in the cluster as offline for the upgrade run.
For entities designated as offline, only the packages are upgraded during the upgrade; the services
are not restarted after the upgrade. You can use this option to designate those
nodes as offline that have services down or
stopped, or that have unhealthy components that are flagged in the upgrade precheck.
- -N { Node | all }
- You can specify one or more nodes that are part of the cluster that is being upgraded with -N in a comma-separated list. For example: node1,node2,node3
You can also specify all nodes with the -N all option.
If the nodes specified as offline are protocol nodes, then all components (GPFS, SMB, NFS, and S3) are added as offline in the cluster configuration. If the nodes specified as offline are not protocol nodes, then GPFS is added as offline in the cluster configuration.
- --clear
- Clears the offline nodes information from the cluster configuration.
- exclude
- Designates specified nodes in a cluster to be excluded from the upgrade run. For nodes
designated as excluded, the installation toolkit does not perform any action during the upgrade.
This option allows you to upgrade a subset of a cluster. Note: Nodes that are designated as excluded must be upgraded at a later time to complete the cluster upgrade.
- -N Node
- You can specify one or more nodes that are a part of the cluster that is being upgraded with -N
in a comma-separated list. For example:
node1,node2,node3
- --clear
- Clears the excluded nodes information from the cluster configuration.
- workloadprompt
- Enables prompt to users to shut down their workloads before an upgrade on the specified nodes.
- -N { node1,node2,node3,… | all }
- Specifies the nodes on which to enable the prompt to users to shut down their workloads before
an upgrade.
- --clear
- Clears the workload prompt configuration for the specified nodes from the upgrade configuration.
- --list
- Lists the nodes on which workload prompt is enabled in the upgrade configuration.
- --clear
- Clears the workload prompt configuration for all nodes from the upgrade configuration.
- list
- Lists the upgrade related configuration information in the cluster definition file.
- clear
- Clears the upgrade related configuration in the cluster definition file.
- run
- Upgrades components of an existing IBM Storage Scale cluster.
This command can still be used even if all protocols are not enabled. If a protocol is not enabled, then the respective packages are still upgraded, but the respective service is not started. The installation toolkit uses environment details in the cluster definition file to perform upgrade tasks.
The installation toolkit includes the ability to determine if an upgrade is being run for the first time or if it is a rerun of a failed upgrade.
To perform environment health checks prior to and after the upgrade, run the ./spectrumscale upgrade command using the precheck and postcheck arguments. This is not required, however, because specifying upgrade run with no arguments also runs these checks.
- --skip
- Bypasses the specified precheck and suppresses prompts. For example, specifying --skip ssh bypasses the SSH connectivity check.
- postcheck
- Performs health checks on the cluster after the upgrade has been completed.
- showversions
- Shows installed versions of GPFS and protocols and available versions of these components in the configured repository.
- scaleadmd
- Use this parameter to enable installation of the Native REST API by using the toolkit.
- enable
- Enables the toolkit to install the admin-daemon package of the Native REST API and to create the Native REST API cluster.
- nodeid
- Use this parameter to define the node TLS certificate when you use the toolkit to install the
Native REST API.
- define
- Configures the options that are needed to define the node TLS certificate.
- --cert
- Specifies the path to the TLS certificate.
- --key
- Specifies the path to the key that is associated with the TLS certificate.
- --chain
- Specifies the path to the certificate authority chain that is associated with the TLS certificate.
Exit status
- 0
- Successful completion.
- nonzero
- A failure has occurred.
Security
You must have root authority to run the spectrumscale command.
The node on which the command is issued must be able to execute remote shell commands on any other node in the cluster without the use of a password and without producing any extraneous messages. For more information, see Requirements for administering a GPFS file system.
Examples
Creating a new IBM Storage Scale cluster
- To designate your installer node, issue this
command:
./spectrumscale setup -s 192.168.0.1
- To designate NSD server nodes in your environment to use for the installation, issue this
command:
./spectrumscale node add FQDN -n
- To add four non-shared NSDs seen by a primary NSD server only, issue this
command:
./spectrumscale nsd add -p FQDN_of_Primary_NSD_Server /dev/dm-1 /dev/dm-2 /dev/dm-3 /dev/dm-4
- To add four non-shared NSDs seen by both a primary NSD server and a secondary NSD server, issue
this
command:
./spectrumscale nsd add -p FQDN_of_Primary_NSD_Server -s FQDN_of_Secondary_NSD_Server /dev/dm-1 /dev/dm-2 /dev/dm-3 /dev/dm-4
- To define a shared root file system using two NSDs and a file system fs1
using two NSDs, issue these
commands:
./spectrumscale nsd list
./spectrumscale filesystem list
./spectrumscale nsd modify nsd1 -fs cesSharedRoot
./spectrumscale nsd modify nsd2 -fs cesSharedRoot
./spectrumscale nsd modify nsd3 -fs fs1
./spectrumscale nsd modify nsd4 -fs fs1
- To designate GUI nodes in your environment to use for the installation, issue this
command:
./spectrumscale node add FQDN -g -a
- To designate additional client nodes in your environment to use for the installation, issue this
command:
./spectrumscale node add FQDN
- To allow the installation toolkit to reconfigure Performance Monitoring if it detects any
existing configurations, issue this
command:
./spectrumscale config perfmon -r on
- To name your cluster, issue this
command:
./spectrumscale config gpfs -c Cluster_Name
- To configure the call home function with the mandatory parameters,
issue this
command:
./spectrumscale callhome config -n username -i 456123 -e username@example.com -cn US
If you do not want to use call home, disable it by issuing the following command:
./spectrumscale callhome disable
For more information, see Enabling and configuring call home using the installation toolkit.
- To review the configuration prior to installation, issue these
commands:
./spectrumscale node list
./spectrumscale nsd list
./spectrumscale filesystem list
./spectrumscale config gpfs --list
- To start the installation on your defined environment, issue these
commands:
./spectrumscale install --precheck
./spectrumscale install
Deploying protocols on an existing cluster
- To designate your installer node, issue this
command:
./spectrumscale setup -s 192.168.0.1
- To designate protocol nodes in your environment to use for the deployment, issue this
command:
./spectrumscale node add FQDN -p
- To designate GUI nodes in your environment to use for the deployment, issue this
command:
./spectrumscale node add FQDN -g -a
- To configure protocols to point to a file system that will be used as a shared root, issue this
command:
./spectrumscale config protocols -f FS_Name -m Shared_FS_Mountpoint_Or_Path
- To configure a pool of export IPs, issue this
command:
./spectrumscale config protocols -e Comma_Separated_List_of_Exportpool_IPs
- If you are using the CES interface mode, to set the network
interfaces, issue this
command:
./spectrumscale config protocols -i INTERFACE1,INTERFACE2,...
- To enable NFS on all protocol nodes, issue this
command:
./spectrumscale enable nfs
- To enable SMB on all protocol nodes, issue this
command:
./spectrumscale enable smb
- To enable S3 on all protocol nodes, issue these
commands:
./spectrumscale enable s3
The output is as follows:
[ INFO ] Enabling S3 on all protocol nodes.
[ INFO ] Tip : If all node designations and any required protocol configurations are complete, proceed to check the installation configuration: ./spectrumscale deploy --precheck
- To enable file audit logging, issue the following
command:
./spectrumscale fileauditlogging enable
For more information, see Enabling and configuring file audit logging using the installation toolkit.
- To review the configuration prior to deployment, issue these
commands:
./spectrumscale config protocols
./spectrumscale node list
- To deploy protocols on your defined environment, issue these
commands:
./spectrumscale deploy --precheck
./spectrumscale deploy
Upgrading an IBM Storage Scale cluster
- Extract the IBM
Storage Scale package for the
required code level by issuing a command similar to the following depending on the package
name:
./Spectrum_Scale_Standard-5.2.x.x-xxxxx
- Populate the cluster definition file with the current
cluster state by issuing the following
command:
./spectrumscale config populate
- [Optional] Enable the prompt to users to shut down their workloads before starting the
upgrade by issuing the following
command.
./spectrumscale upgrade config workloadprompt -N all
- Run the upgrade precheck from the installer directory of the latest code level extraction by
issuing commands similar to the
following:
cd /usr/lpp/mmfs/Latest_Code_Level_Directory/ansible-toolkit
./spectrumscale upgrade precheck
- [Optional] Specify nodes as
offline by issuing the following command, if services running on these nodes are stopped or
down.
./spectrumscale upgrade config offline -N Node
- [Optional] Exclude nodes that you do not want to upgrade at
this point by issuing the following
command.
./spectrumscale upgrade config exclude -N Node
- Run the upgrade by issuing this
command:
cd /usr/lpp/mmfs/Latest_Code_Level_Directory/ansible-toolkit
./spectrumscale upgrade run
Adding to an installation process
- To add nodes to an installation, do the following:
- Add one or more node types using the following commands:
- Client nodes:
./spectrumscale node add FQDN
- NSD nodes:
./spectrumscale node add FQDN -n
- Protocol nodes:
./spectrumscale node add FQDN -p
- GUI nodes:
./spectrumscale node add FQDN -g -a
- Install GPFS on the new nodes using the following
commands:
./spectrumscale install --precheck
./spectrumscale install
- If protocol nodes are being added, deploy protocols using the following
commands:
./spectrumscale deploy --precheck
./spectrumscale deploy
- To add NSDs to an installation, do the following:
- Verify that the NSD server connecting this new disk runs an OS compatible with the installation toolkit and that the NSD server exists within the cluster.
- Add NSDs to the installation using the following
command:
./spectrumscale nsd add -p FQDN_of_Primary_NSD_Server Path_to_Disk_Device_File
- Run the installation using the following
commands:
./spectrumscale install --precheck
./spectrumscale install
- To add file systems to an installation, do the following:
- Verify that free NSDs exist and that they can be listed by the installation toolkit using the following
commands.
mmlsnsd
./spectrumscale nsd list
- Define the file system using the following
command:
./spectrumscale nsd modify NSD -fs File_System_Name
- Add the file system by using the following
commands:
./spectrumscale install --precheck
./spectrumscale install
- To enable another protocol on an existing cluster that has protocols enabled, do the following
steps depending on your configuration:
- Enable NFS on all protocol nodes using the following
command:
./spectrumscale enable nfs
- Enable SMB on all protocol nodes using the following
command:
./spectrumscale enable smb
- Enable the new protocol using the following
commands:
./spectrumscale deploy --precheck
./spectrumscale deploy
Using the installation toolkit in a cluster containing ESS
- Add protocol nodes in the ESS cluster by issuing the following
command.
./spectrumscale node add NodeName -p
You can add other types of nodes such as client nodes, NSD servers, and so on depending on your requirements. For more information, see Defining the cluster topology for the installation toolkit.
- Specify one of the newly added protocol nodes as the installer node and specify the setup type
as ess by issuing the following command.
./spectrumscale setup -s NodeIP -i SSHIdentity -st ess
The installer node is the node on which the installation toolkit is extracted and from where the installation toolkit command, spectrumscale, is initiated.
- Specify the EMS node of the ESS system to the installation toolkit by issuing the following
command.
./spectrumscale node add NodeName -e
This node is also automatically specified as the admin node. The admin node, which must be the EMS node in an ESS configuration, is the node that has access to all other nodes to perform configuration during the installation.
- Proceed with specifying other configuration options, installing, and deploying by using the installation toolkit. For more information, see Defining the cluster topology for the installation toolkit, Installing IBM Storage Scale and creating a cluster, and Deploying protocols.
Manually adding protocols to a cluster containing ESS
For information on preparing a cluster that contains ESS for deploying protocols, see Preparing a cluster that contains ESS for adding protocols.
After you have prepared your cluster that contains ESS for adding protocols, you can use commands similar to the ones listed in the Deploying protocols on an existing cluster section.
Installing IBM Storage Scale Erasure Code Edition and deploying HDFS
- Specify the installer node and the setup type as ece in the cluster definition file for IBM Storage Scale Erasure Code Edition.
./spectrumscale setup -s InstallerNodeIP -st ece
- Add scale-out nodes for
IBM Storage Scale Erasure Code Edition in the cluster definition file.
./spectrumscale node add NodeName -so
- Define the recovery group for
IBM Storage Scale Erasure Code Edition in the
cluster definition file.
./spectrumscale recoverygroup define -N Node1,Node2,...,NodeN
- Define vdisk sets for
IBM Storage Scale Erasure Code Edition in the cluster definition file.
./spectrumscale vdiskset define -rg RgName -code RaidCode -bs BlockSize -ss SetSize
- Define the file system for
IBM Storage Scale Erasure Code Edition in the
cluster definition file.
./spectrumscale filesystem define -fs FileSystem -vs VdiskSet
- Enable HDFS in the cluster definition file.
./spectrumscale enable hdfs
- Set the CES IP
addresses.
./spectrumscale config protocols -e CES_IP1,CES_IP2,...,CES_IPN
- Set up the CES shared root file
system.
./spectrumscale config protocols -f cesSharedRoot -m /ibm/cesSharedRoot
- Define the properties for the new HDFS
cluster.
./spectrumscale config hdfs new -n hdfscluster1 -nn namenode1,namenode2 -dn datanode1,datanode2 -f fs1 -d DataDir
- Run the deployment precheck
procedure.
./spectrumscale deploy --precheck
- Run the deployment procedure.
./spectrumscale deploy
- Add name nodes and data nodes to an existing HDFS cluster configuration.
./spectrumscale config hdfs add -n hdfscluster2 -nn namenode3 -dn datanode3
- Run the deployment precheck
procedure.
./spectrumscale deploy --precheck
- Run the deployment procedure.
./spectrumscale deploy
- Import an existing HDFS configuration.
./spectrumscale config hdfs import -l hdfs_config_dir
- Run the deployment precheck
procedure.
./spectrumscale deploy --precheck
- Run the deployment procedure.
./spectrumscale deploy
Configuring remote mount by using the installation toolkit

- Go to the IBM Storage Scale installation toolkit
directory.
cd /usr/lpp/mmfs/5.1.9.0/ansible-toolkit/
- Issue the spectrumscale setup command to set up the new installer
node.
./spectrumscale setup -s InstallerNodeIP
- Issue the spectrumscale config populate command to populate the
cluster definition file.
./spectrumscale config populate -N node
- Define the spectrumscale remote mount configuration. You can use the
--help option for command
details:
./spectrumscale remote_mount config --help
-client-gui-username CLIENTGUIUSERNAME, --clientGuiUsername CLIENTGUIUSERNAME
    Add the client gui username.
-client-gui-password, --clientGuiPassword
    Add the client gui password.
-client-gui-hostname CLIENTGUIHOSTNAME, --clientGuiHostname CLIENTGUIHOSTNAME
    Add the client gui hostname.
-storage-gui-username STORAGEGUIUSERNAME, --storageGuiUsername STORAGEGUIUSERNAME
    Add the storage gui username.
-storage-gui-password, --storageGuiPassword
    Add the storage gui password.
-storage-gui-hostname STORAGEGUIHOSTNAME, --storageGuiHostname STORAGEGUIHOSTNAME
    Add the storage gui hostname.
-remotemount-path REMOTEMOUNTPATH, --remotemountPath REMOTEMOUNTPATH
    Add the remote mount filesystem path.
-client-filesystem CLIENTFILESYSTEM, --clientFilesystem CLIENTFILESYSTEM
    Add the remote mount client filesystem name.
-storage-filesystem STORAGEFILESYSTEM, --storageFilesystem STORAGEFILESYSTEM
    Add the remote mount storage filesystem name.
- Run the spectrumscale remote mount precheck command to validate the environment.
./spectrumscale remote_mount precheck
- Run the spectrumscale remote mount grant command to establish remote mount filesystem
configuration.
./spectrumscale remote_mount grant
- Run the remote mount revoke process if required by using the following command:
./spectrumscale remote_mount revoke

Diagnosing an error during install, deploy, or upgrade
- Note the screen output indicating the error. This helps in narrowing down the general
failure.
When a failure occurs, the screen output also shows the log file containing the failure.
- Open the log file in an editor such as vi.
- Go to the end of the log file and search upwards for the text FATAL.
- Find the topmost occurrence of FATAL (or the first FATAL error that occurred) and look above and below this error for further indications of the failure.
For more information, see Finding deployment related error messages more easily and using them for failure analysis.
Location
/usr/lpp/mmfs/5.1.4.x/ansible-toolkit