spectrumscale command

Installs and configures GPFS™; adds nodes to a cluster; deploys and configures protocols, performance monitoring tools, and authentication services; configures call home and file audit logging; and upgrades GPFS and protocols.

Synopsis

spectrumscale setup [-i SSHIdentity] [-s ServerIP] [-st {ss | ess}] [--storesecret]

or

spectrumscale node add [-g] [-q] [-m] [-a] [-n] [-e] [-c] [-p [ExportIP]] Node

or

spectrumscale node load [-g] [-q] [-m] [-a] [-n] [-e] [-c] [-p] NodeFile

or

spectrumscale node delete [-f] Node

or

spectrumscale node clear [-f]

or

spectrumscale node list

or

spectrumscale config gpfs [-l] [-c ClusterName] [-p {default | randomio}]
                          [-r RemoteShell] [-rc RemoteFileCopy]
                          [-e EphemeralPortRange]

or

spectrumscale config protocols [-l] [-f FileSystem] [-m MountPoint] [-e ExportIPPool]

or

spectrumscale config object [-f FileSystem] [-m MountPoint] [-e EndPoint] [-o ObjectBase]
                            [-i InodeAllocation] [-t AdminToken]
                            [-au AdminUser] [-ap AdminPassword]
                            [-su SwiftUser] [-sp SwiftPassword]
                            [-dp DatabasePassword]
                            [-mr MultiRegion] [-rn RegionNumber]
                            [-s3 {on | off}] 

or

spectrumscale config perfmon [-r {on | off}] [-d {on | off}] [-l]

or

spectrumscale config ntp [-e {on | off}] [-l] [-s Upstream_Servers]

or

spectrumscale config clear {gpfs | protocols | object}

or

spectrumscale config update

or

spectrumscale config populate --node Node

or

spectrumscale nsd add -p Primary [-s Secondary] [-fs FileSystem]
                      [-po Pool]
                      [-u {dataOnly | dataAndMetadata | metadataOnly | descOnly | localCache}]
                      [-fg FailureGroup] [--no-check]
                      PrimaryDevice [PrimaryDevice ...]

or

spectrumscale nsd balance [--node Node | --all]

or

spectrumscale nsd delete NSD

or

spectrumscale nsd modify [-n Name]
                         [-u {dataOnly | dataAndMetadata | metadataOnly | descOnly | localCache}]
                         [-po Pool] [-fs FileSystem] [-fg FailureGroup]
                         NSD

or

spectrumscale nsd servers 

or

spectrumscale nsd clear [-f]

or

spectrumscale nsd list 

or

spectrumscale filesystem modify [-B {64K | 128K | 256K | 512K | 1M | 2M | 4M | 8M | 16M}] [-m MountPoint]
                                [-r {1 | 2 | 3}] [-mr {1 | 2 | 3}] [-MR {1 | 2 | 3}] [-R {1 | 2 | 3}]
                                [--metadata_block_size {64K | 128K | 256K | 512K | 1M | 2M | 4M | 8M | 16M}]
                                [--fileauditloggingenable] [--fileauditloggingdisable]
                                [--logfileset LogFileset] [--retention RetentionPeriod]
                                FileSystem

or

spectrumscale filesystem list

or

spectrumscale fileauditlogging enable

or

spectrumscale fileauditlogging disable

or

spectrumscale fileauditlogging list

or

spectrumscale callhome enable

or

spectrumscale callhome disable

or

spectrumscale callhome config -n CustomerName -i CustomerID -e CustomerEmail -cn CustomerCountry
                              [-s ProxyServerIP] [-pt ProxyServerPort]
                              [-u ProxyServerUserName] [-pw ProxyServerPassword] [-a]

or

spectrumscale callhome clear {--all | -n | -i | -e | -cn | -s | -pt | -u | -pw}

or

spectrumscale callhome schedule {-d | -w} [-c]

or

spectrumscale callhome list

or

spectrumscale auth file {ldap | ad | nis | none}  

or

spectrumscale auth object [--https] {local | external | ldap | ad}

or

spectrumscale auth commitsettings

or

spectrumscale auth clear

or

spectrumscale enable {object | nfs | smb} 

or

spectrumscale disable {object | nfs | smb} 
CAUTION:
Disabling object service discards the OpenStack Swift configuration and ring files from the CES cluster. If the OpenStack Keystone configuration is local, disabling object storage also discards the Keystone configuration and database files from the CES cluster. However, the data is not removed. To enable object service later with a clean configuration and new data, remove the object store fileset and set up the object environment. See the mmobj swift base command. For more information, contact the IBM® Support Center.

or

spectrumscale install [-pr] [-po] [-s] [-f]

or

spectrumscale deploy [-pr] [-po] [-s] [-f]

or

spectrumscale upgrade [-pr | -po | -ve] [-f]

or

spectrumscale installgui {start | stop | status}

Availability

Available with IBM Spectrum Scale™ Standard Edition or higher. The spectrumscale command (also called the installation toolkit) is available only in the protocols packages.

Description

Use the spectrumscale command (the installation toolkit) to do the following:
  • Install and configure GPFS.
  • Add GPFS nodes to an existing cluster.
  • Deploy and configure SMB, NFS, OpenStack Swift, and performance monitoring tools on top of GPFS.
  • Configure authentication services for protocols.
  • Enable and configure the file audit logging function.
  • Enable and configure the call home function.
  • Upgrade GPFS and protocols.
Note: The following prerequisites and assumptions apply:
  • The installation toolkit requires the following package:
    • python-2.7
  • TCP traffic from the nodes must be allowed through the firewall on port 8889 (communication with the Chef zero server) and port 10080 (package distribution); a minimal firewall sketch follows this list.
  • The nodes must have external Internet access, or access to local repository replicas, so that necessary packages can be installed (dependency installation). For more information, see the Repository setup section of the Installation prerequisites topic in IBM Spectrum Scale: Concepts, Planning, and Installation Guide.
  • To install protocols, there must be a GPFS cluster running at minimum version 4.1.1.0 with CCR enabled.
  • The node that you plan to run the installation toolkit from must be able to execute remote shell commands on any other node in the cluster without the use of a password and without producing any extraneous messages.
  • Any node that is set up to be a call home node must have network connectivity to IBM Support to upload data.
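A minimal sketch of opening the required ports, assuming the nodes use firewalld (adapt to your firewall of choice):
  firewall-cmd --permanent --add-port=8889/tcp
  firewall-cmd --permanent --add-port=10080/tcp
  firewall-cmd --reload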
The installation toolkit performs verification during the precheck phase to ensure that passwordless SSH is set up correctly. This verification includes:
  • Check whether passwordless SSH is set up between all admin nodes and all the other nodes in the cluster. If this check fails, a fatal error occurs.
  • Check whether passwordless SSH is set up between all protocol nodes and all the other nodes in the cluster. If this check fails, a warning is displayed.
  • Check whether passwordless SSH is set up between all protocol nodes in the cluster. If this check fails, a fatal error occurs.
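As a quick manual spot check of the same requirement (the host names are illustrative):
  ssh node1 ssh node2 date
  ssh node2 ssh node1 date
If either command prompts for a password or prints extraneous output, correct the passwordless SSH setup before running the installation toolkit.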

Parameters

setup
Installs Chef and its components and configures the install node in the cluster definition file. The IP address passed in must belong to the node from which the installation toolkit will be run. The SSH key passed in must be the key that the installer uses for passwordless SSH to all other nodes. This is the first command that you run to set up IBM Spectrum Scale. This option accepts the following arguments:
-i SSHIdentity
Adds the path to the SSH identity file into the configuration.
-s ServerIP
Adds the control node IP into the configuration.
-st {ss | ess}
Specifies the setup type.

If using the installation toolkit in a cluster containing ESS, specify the setup type as ess. The allowed values are ess and ss. The setup type ss specifies an IBM Spectrum Scale cluster containing no ESS nodes. The default value is ss.

Regardless of the mode, the installation toolkit contains safeguards to prevent changes to a tuned ESS configuration. When a node is added to the installation toolkit, the toolkit checks whether the node is currently in an existing cluster and, if so, checks its node class. ESS I/O server nodes are detected by their membership in the gss_ppc64 node class; ESS EMS nodes are detected by their membership in the ems node class. ESS I/O server nodes cannot be added to the installation toolkit and must be managed by the ESS toolsets contained on the EMS node. A single ESS EMS node can be added to the installation toolkit; doing so designates that node as an admin node for installation toolkit functions. While the installation toolkit runs from a non-ESS node, it uses the designated admin node (an EMS node in this case) to run mm commands on the cluster as a whole. Once in ESS mode, the following assumptions and restrictions apply:
  • File audit logging is not configurable using the installation toolkit.
  • Call home is not configurable using the installation toolkit.
  • EMS node will be the only admin node designated in the installation toolkit. This designation will automatically occur when the EMS node is added.
  • EMS node will be the only GUI node allowed in the installation toolkit. Additional existing GUI nodes can exist but they cannot be added.
  • EMS node will be the only performance monitoring collector node allowed within the installation toolkit. Additional existing collectors can exist but they cannot be added.
  • EMS node cannot be designated as an NSD or a protocol node.
  • I/O server nodes cannot be added to the installation toolkit. These nodes must be managed outside the installation toolkit by ESS toolsets contained in the EMS node.
  • NSDs and file systems managed by the I/O server nodes cannot be added to the installation toolkit.
  • File systems managed by the I/O server nodes can be used for placement of the Object fileset as well as the CES shared root file system. Simply point the installation toolkit to the path.
  • The cluster name is set upon addition of the EMS node to the installation toolkit. It is determined by mmlscluster being run from the EMS node.
  • EMS node must have passwordless SSH set up to all nodes, including any protocol, NSD, and client nodes being managed by the installation toolkit.
  • EMS node can be a different architecture or operating system than the protocol, NSD, and client nodes being managed by the installation toolkit.
  • If the config populate function is used, an EMS node of a different architecture or operating system than the protocol, NSD, and client nodes can be used.
  • If the config populate function is used, a mix of architectures within the non-ESS nodes being added or currently within the cluster cannot be used. To handle this case, use the installation toolkit separately for each architecture grouping. Run the installation toolkit from a node with similar architecture to add the required nodes. Add the EMS node and use the setup type ess.
--storesecret
Disables the prompts for the encryption secret.
CAUTION:
If you use this option, passwords will not be securely stored.
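
For example, a typical first invocation (the IP address and key path are illustrative):
  ./spectrumscale setup -s 192.168.0.1 -i /root/.ssh/id_rsa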

node
Used to add, remove, or list nodes in the cluster definition file. This command only interacts with this configuration file and does not directly configure nodes in the cluster itself. The nodes that have an entry in the cluster definition file will be used during install, deploy, or upgrade. This option accepts the following arguments:
add Node
Adds the specified node and configures it according to the following arguments:
-g
Adds GPFS Graphical User Interface servers to the cluster definition file.
-q
Configures the node as a quorum node.
-m
Configures the node as a manager node.
-a
Configures the node as an admin node.
-n
Configures the node as an NSD server.
-e
Specifies the node as the EMS node of an ESS system. This node is automatically specified as the admin node.
-c
Specifies the node as a call home node.
-p [ExportIP]
Configures the node as a protocol node and optionally assigns it an IP.
Node
Specifies the node name.
load NodeFile
Loads the specified file containing a list of nodes, one per line; adds the nodes specified in the file and configures them according to the following:
-g
Sets the node as a GPFS Graphical User Interface server.
-q
Sets the node as a quorum node.
-m
Sets the node as a manager node.
-a
Sets the node as an admin node.
-n
Sets the node as an NSD server.
-e
Sets the node as the EMS node of an ESS system. This node is automatically specified as the admin node.
-c
Sets the node as a call home node.
-p
Sets the node as a protocol node.
delete Node
Removes the specified node from the configuration. The following option is accepted.
-f
Forces the action without manual confirmation.
clear
Clears the current node configuration. The following option is accepted:
-f
Forces the action without manual confirmation.
list
Lists the nodes configured in your environment.
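For example, to add a node as a combined quorum, manager, and NSD server node and review the result (the host name is illustrative):
  ./spectrumscale node add node1.example.com -q -m -n
  ./spectrumscale node list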
config
Used to set properties in the cluster definition file that will be used during install, deploy, or upgrade. This command only interacts with this configuration file and does not directly configure these properties on the GPFS cluster. This option accepts the following arguments:
gpfs
Sets any of the following GPFS-specific properties to be used during GPFS installation and configuration:
-l
Lists the current settings in the configuration.
-c ClusterName
Specifies the cluster name to be set in the cluster definition file.
-p
Specifies the profile to be set on cluster creation. The following values are accepted:
default
Specifies that the GpfsProtocolDefaults profile is to be used.
randomio
Specifies that the GpfsProtocolRandomIO profile is to be used.
-r RemoteShell
Specifies the remote shell binary to be used by GPFS. If no remote shell is specified in the cluster definition file, /usr/bin/ssh will be used as the default.
-rc RemoteFileCopy
Specifies the remote file copy binary to be used by GPFS. If no remote file copy binary is specified in the cluster definition file, /usr/bin/scp will be used as the default.
-e EphemeralPortRange

Specifies an ephemeral port range to be set on all GPFS nodes. If no port range is specified in the cluster definition, 60000-61000 will be used as default.

For information about ephemeral port range, see the topic about GPFS port usage in Miscellaneous advanced administration topics.
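For example (the cluster name is illustrative):
  ./spectrumscale config gpfs -c cluster1.example.com -p default -e 60000-61000
  ./spectrumscale config gpfs -l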

protocols
Provides details of the GPFS environment that will be used during protocol deployment, according to the following options:
-l
Lists the current settings in the configuration.
-f FileSystem
Specifies the file system.
-m MountPoint
Specifies the mount point.
-e ExportIPPool
Specifies a comma-separated list of additional CES export IPs to configure on the cluster.
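For example (the file system name, mount point, and IP addresses are illustrative):
  ./spectrumscale config protocols -f cesSharedRoot -m /ibm/cesSharedRoot
  ./spectrumscale config protocols -e 10.0.0.101,10.0.0.102,10.0.0.103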
object
Sets any of the following Object-specific properties to be used during Object deployment and configuration:
-l
Lists the current settings in the configuration.
-f FileSystem
Specifies the file system.
-m MountPoint
Specifies the mount point.
-e EndPoint
Specifies the host name that will be used for access to the object store. This should be a round-robin DNS entry that maps to all CES IP addresses, or the address of a load balancer front end, so that the Keystone and object traffic routed to this host name is distributed across the protocol nodes. In other words, the endpoint is a DNS or load balancer address that maps to a group of export IPs (that is, CES IPs that were assigned on the protocol nodes).
-o ObjectBase
Specifies the object base.
-i InodeAllocation
Specifies the inode allocation.
-t AdminToken
Specifies the admin token.
-au AdminUser
Specifies the user name for the admin.
-ap AdminPassword
Specifies the admin user password. This credential is for the Keystone administrator. This user can be local or on a remote authentication server, depending on the authentication type used.
Note: You will be prompted to enter a Secret Encryption Key, which is used to securely store the password. Choose a memorable pass phrase; you will be prompted for it each time you enter the password.
-su SwiftUser
Specifies the Swift user name. This credential is for the Swift services administrator. All Swift services are run in this user's context. This user can be local or on a remote authentication server, depending on the authentication type used.
-sp SwiftPassword
Specifies the Swift user password.
Note: You will be prompted to enter a Secret Encryption Key, which is used to securely store the password. Choose a memorable pass phrase; you will be prompted for it each time you enter the password.
-dp DatabasePassword
Specifies the object database user password.
Note: You will be prompted to enter a Secret Encryption Key, which is used to securely store the password. Choose a memorable pass phrase; you will be prompted for it each time you enter the password.
-mr MultiRegion
Enables the multi-region option.
-rn RegionNumber
Specifies the region number.
-s3 on | off
Specifies whether s3 is to be turned on or off.
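For example (all values are illustrative):
  ./spectrumscale config object -f fs1 -m /ibm/fs1 -o object_fileset -e objects.example.com
  ./spectrumscale config object -au admin -ap AdminPassword -dp DatabasePassword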
perfmon
Sets Performance Monitoring specific properties to be used during installation and configuration:
-r on | off
Specifies whether the installation toolkit can reconfigure Performance Monitoring.
Note: When set to on, reconfiguration might move the collector to different nodes and it might reset sensor data. Custom sensors and data might be erased.
-d on | off
Specifies whether Performance Monitoring should be disabled (not installed).
Note: When set to on, pmcollector and pmsensor packages are not installed or upgraded. Existing sensor or collector state remains as is.
-l
Lists the current settings in the configuration.
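For example, to allow the installation toolkit to reconfigure Performance Monitoring and then review the setting:
  ./spectrumscale config perfmon -r on
  ./spectrumscale config perfmon -l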
ntp
Used to add, list, or remove NTP nodes to the configuration. NTP nodes will be configured on the cluster as follows: the admin node will point to the upstream NTP servers that you provide to determine the correct time. The rest of the nodes in the cluster will point to the admin node to obtain the time.
-s Upstream_Servers
Specifies the upstream NTP servers that will be used. You can use upstream servers that you have already configured, but they cannot be part of your IBM Spectrum Scale cluster.
Note: NTP works best with at least four upstream servers. If you provide fewer than four, you will receive a warning during installation advising that you add more.
-l
Lists the current settings of your NTP setup.
-e on | off
Specifies whether NTP is enabled. If this option is set to off, you will receive a warning during installation.
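For example, assuming a comma-separated list of upstream servers (the names are illustrative):
  ./spectrumscale config ntp -e on -s ntp1.example.com,ntp2.example.com,ntp3.example.com,ntp4.example.com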
clear
Removes specified properties from the cluster definition file:
gpfs
Removes GPFS related properties from the cluster definition file:
-c
Clears the GPFS cluster name.
-p
Clears the GPFS profile to be applied on cluster creation. The following values are accepted:
default
Specifies that the GpfsProtocolDefaults profile is to be cleared.
randomio
Specifies that the GpfsProtocolRandomIO profile is to be cleared.
-r RemoteShell
Clears the absolute path name of the remote shell command GPFS uses for node communication. For example, /usr/bin/ssh.
-rc RemoteFileCopy
Clears the absolute path name of the remote copy command GPFS uses when transferring files between nodes. For example, /usr/bin/scp.
-e EphemeralPortRange
Clears the GPFS daemon communication port range.
--all
Clears all settings in the cluster definition file.
protocols
Removes protocols related properties from the cluster definition file:
-f
Clears the shared file system name.
-m
Clears the shared file system mount point.
-e
Clears a comma-separated list of additional CES export IPs to configure on the cluster.
--all
Clears all settings in the cluster definition file.
object
Removes object related properties from the cluster definition file:
-f
Clears the object file system name.
-m
Clears the absolute path to your file system on which the objects reside.
-e
Clears the host name which maps to all CES IP addresses in a round-robin manner.
-o
Clears the GPFS fileset to be created or used as the object base.
-i
Clears the GPFS fileset inode allocation to be used by the object base.
-t
Clears the admin token to be used by Keystone.
-au
Clears the user name for the admin user.
-ap
Clears the password for the admin user.
-su
Clears the user name for the Swift user.
-sp
Clears the password for the Swift user.
-dp
Clears the password for the object database.
-s3
Clears the S3 API setting, if it is enabled.
-mr
Clears the multi-region data file path.
-rn
Clears the region number for the multi-region configuration.
--all

Clears all settings in the cluster definition file.

update
Updates operating system and CPU architecture fields in the cluster definition file. This update is automatically done if you run the upgrade precheck with the spectrumscale upgrade --precheck command while upgrading to IBM Spectrum Scale release 4.2.2 or later.
populate
Populates the cluster definition file with the current cluster state. In the following upgrade scenarios, you might need to update the cluster definition file with the current cluster state:
  • A manually created cluster in which you want to use the installation toolkit to perform administration tasks on the cluster such as adding protocols, adding nodes, and upgrading.
  • A cluster created using the installation toolkit in which manual changes were made without using the toolkit, and you want to synchronize the installation toolkit with the cluster configuration that was changed manually.
--node Node
Specifies an existing node in the cluster that is used to query the cluster information. If you want to use the spectrumscale config populate command to retrieve data from a cluster containing ESS, you must specify the EMS node with the --node flag.
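For example, in a cluster containing ESS (the EMS host name is illustrative):
  ./spectrumscale config populate --node ems1.example.com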
nsd
Used to add, remove, list, or balance NSDs, as well as to add file systems in the cluster definition file. This command only interacts with this configuration file and does not directly configure NSDs on the cluster itself. The NSDs that have an entry in the cluster definition file will be used during install. This option accepts the following arguments:
add
Adds an NSD to the configuration, according to the following specifications:
-p Primary
Specifies the primary NSD server name.
-s Secondary
Specifies the secondary NSD server names. You can use a comma-separated list to specify up to seven secondary NSD servers.
-fs FileSystem
Specifies the file system to which the NSD is assigned.
-po Pool
Specifies the file system pool.
-u
Specifies NSD usage. The following values are accepted:
dataOnly
dataAndMetadata
metadataOnly
descOnly
localCache
-fg FailureGroup
Specifies the failure group to which the NSD belongs.
--no-check
Specifies not to check for the device on the server.
PrimaryDevice
Specifies the device name on the primary NSD server.
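For example, to add two devices seen by a primary and a secondary NSD server and assign them to a file system (the host names, devices, and values are illustrative):
  ./spectrumscale nsd add -p nsd1.example.com -s nsd2.example.com -fs fs1 -u dataAndMetadata -fg 1 /dev/dm-1 /dev/dm-2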
balance
Balances the NSD preferred node between the primary and secondary nodes. The following options are accepted:
--node Node
Specifies the node to move NSDs from when balancing.
--all
Specifies that all NSDs are to be balanced.
delete NSD
Removes the specified NSD from the configuration.
modify NSD
Modifies the NSD parameters on the specified NSD, according to the following options:
-n Name
Specifies the name.
-u
Specifies NSD usage. The following values are accepted:
dataOnly
dataAndMetadata
metadataOnly
descOnly
localCache
-po Pool
Specifies the pool.
-fs FileSystem
Specifies the file system.
-fg FailureGroup
Specifies the failure group.
servers
Adds and removes servers, and sets the primary server for NSDs.
clear
Clears the current NSD configuration. The following option is accepted:
-f
Forces the action without manual confirmation.
list
Lists the NSDs configured in your environment.
filesystem
Used to list or modify file systems in the cluster definition file. This command only interacts with this configuration file and does not directly modify file systems on the cluster itself. To modify the properties of a file system in the cluster definition file, the file system must first be added with spectrumscale nsd. This option accepts the following arguments:
modify
Modifies the file system attributes. This option accepts the following arguments:
-B
Specifies the file system block size. This argument accepts the following values: 64K, 128K, 256K, 512K, 1M, 2M, 4M, 8M, 16M.
-m MountPoint
Specifies the mount point.
-r
Specifies the number of copies of each data block for a file. This argument accepts the following values: 1, 2, 3.
-mr
Specifies the number of copies of inodes and directories. This argument accepts the following values: 1, 2, 3.
-MR
Specifies the default maximum number of copies of inodes and directories. This argument accepts the following values: 1, 2, 3.
-R
Specifies the default maximum number of copies of each data block for a file. This argument accepts the following values: 1, 2, 3.
--metadata_block_size
Specifies the file system metadata block size. This argument accepts the following values: 64K, 128K, 256K, 512K, 1M, 2M, 4M, 8M, 16M.
--fileauditloggingenable
Enables file audit logging on the specified file system.
--fileauditloggingdisable
Disables file audit logging on the specified file system.
--logfileset LogFileset
Specifies the log fileset name for file audit logging. The default value is .audit_log.
--retention RetentionPeriod
Specifies the file audit logging retention period in number of days. The default value is 365 days.
FileSystem
Specifies the file system to be modified.
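For example, to set a 1M block size and two data replicas and to enable file audit logging (the file system name is illustrative):
  ./spectrumscale filesystem modify -B 1M -r 2 --fileauditloggingenable fs1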
list
Lists the file systems configured in your environment.
fileauditlogging
Used to enable, disable, or list file audit logging configuration in the cluster definition file.
enable
Enables the file audit logging configuration in the cluster definition file.
disable
Disables the file audit logging configuration in the cluster definition file.
list
Lists the file audit logging configuration in the cluster definition file.
callhome
Used to enable, disable, configure, schedule, or list call home configuration in the cluster definition file.
enable
Enables call home in the cluster definition file.
disable
Disables call home in the cluster definition file. The call home function is enabled by default in the cluster definition file. If you disable it in the cluster definition file, the call home packages are installed on the nodes but no configuration is done by the installation toolkit.
config
Configures call home settings in the cluster definition file.
-n CustomerName
Specifies the customer name for the call home configuration.
-i CustomerID
Specifies the customer ID for the call home configuration.
-e CustomerEmail
Specifies the customer email address for the call home configuration.
-cn CustomerCountry
Specifies the customer country code for the call home configuration.
-s ProxyServerIP
Specifies the proxy server IP address for the call home configuration. This is an optional parameter.

If you are specifying the proxy server IP address, the proxy server port must also be specified.

-pt ProxyServerPort
Specifies the proxy server port for the call home configuration. This is an optional parameter.

If you are specifying the proxy server port, the proxy server IP address must also be specified.

-u ProxyServerUserName
Specifies the proxy server user name for the call home configuration. This is an optional parameter.
-pw ProxyServerPassword
Specifies the proxy server password for the call home configuration. This is an optional parameter.

If you do not specify a password on the command line, you are prompted for a password.

-a
When you specify the call home configuration settings by using the ./spectrumscale callhome config command, you are prompted to accept or decline the support information collection message. Use the -a parameter to accept that message in advance. This is an optional parameter.

If you do not specify the -a parameter on the command line, you are prompted to accept or decline the support information collection message.

clear
Clears the specified call home settings from the cluster definition file.
--all
Clears all call home settings from the cluster definition file.
-n
Clears the customer name from the call home configuration in the cluster definition file.
-i
Clears the customer ID from the call home configuration in the cluster definition file.
-e
Clears the customer email address from the call home configuration in the cluster definition file.
-cn
Clears the customer country code from the call home configuration in the cluster definition file.
-s
Clears the proxy server IP address from the call home configuration in the cluster definition file.
-pt
Clears the proxy server port from the call home configuration in the cluster definition file.
-u
Clears the proxy server user name from the call home configuration in the cluster definition file.
-pw
Clears the proxy server password from the call home configuration in the cluster definition file.
schedule
Specifies the call home data collection schedule in the cluster definition file.

By default, the call home data collection is enabled in the cluster definition file and it is set for a daily and a weekly schedule. Daily data uploads are by default executed at 02:xx AM each day. Weekly data uploads are by default executed at 03:xx AM each Sunday. In both cases, xx is a random number from 00 to 59. You can use the spectrumscale callhome schedule command to set either a daily or a weekly call home data collection schedule.

-d
Specifies a daily call home data collection schedule.

If call home data collection is scheduled daily, data uploads are executed at 02:xx AM each day. xx is a random number from 00 to 59.

-w
Specifies a weekly call home data collection schedule.

If call home data collection is scheduled weekly, data uploads are executed at 03:xx AM each Sunday. xx is a random number from 00 to 59.

-c
Clears the call home data collection schedule in the cluster definition file.

The call home configuration can still be applied without a schedule being set. In that case, you must either manually run and upload data collections, or set the call home schedule to the desired interval later: daily with ./spectrumscale callhome schedule -d, weekly with ./spectrumscale callhome schedule -w, or both daily and weekly with ./spectrumscale callhome schedule -d -w.

list
Lists the call home configuration specified in the cluster definition file.
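For example (the customer details are illustrative):
  ./spectrumscale callhome enable
  ./spectrumscale callhome config -n ExampleCorp -i 456123 -e admin@example.com -cn US -a
  ./spectrumscale callhome schedule -d -w
  ./spectrumscale callhome list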
auth
Used to configure either Object or File authentication on protocols in the cluster definition file. This command only interacts with this configuration file and does not directly configure authentication on the protocols. To configure authentication on the GPFS cluster during a deploy, authentication settings must be provided through the use of a template file. This option accepts the following arguments:
file
Specifies file authentication.
One of the following must be specified:
ldap
ad
nis
none
object
Specifies object authentication.
Either of the following options are accepted:
--https
One of the following must be specified:
local
external
ldap
ad

Both file and object authentication can be set up with the authentication backend server specified. Running this command will open a template settings file to be filled out before installation.

commitsettings
Merges authentication settings into the main cluster definition file.
clear
Clears your current authentication configuration.
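For example, a possible sequence for AD-based file authentication (the first command opens the template settings file for editing):
  ./spectrumscale auth file ad
  ./spectrumscale auth commitsettings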
enable
Used to enable Object, SMB, or NFS in the cluster definition file. This command only interacts with this configuration file and does not directly enable any protocols on the GPFS cluster itself. The default configuration is that all protocols are disabled. If a protocol is enabled in the cluster definition file, this protocol will be enabled on the GPFS cluster during deploy. This option accepts the following arguments:
object
Object
nfs
NFS
smb
SMB
disable
Used to disable Object, SMB, or NFS in the cluster definition file. This command only interacts with this configuration file and does not directly disable any protocols on the GPFS cluster itself. The default configuration is that all protocols are disabled, so this command is only necessary if a protocol has previously been enabled in the cluster definition file but is no longer required.
Note: Disabling a protocol in the cluster definition file does not disable that protocol on the GPFS cluster during a deploy; it merely means that the protocol will not be enabled during a deploy.

This option accepts the following arguments:

object
Object
CAUTION:
Disabling object service discards the OpenStack Swift configuration and ring files from the CES cluster. If the OpenStack Keystone configuration is local, disabling object storage also discards the Keystone configuration and database files from the CES cluster. However, the data is not removed. To enable object service later with a clean configuration and new data, remove the object store fileset and set up the object environment. See the mmobj swift base command. For more information, contact the IBM Support Center.
nfs
NFS
smb
SMB
install
Installs GPFS, creates a GPFS cluster, creates NSDs, and adds nodes to an existing GPFS cluster. The installation toolkit will use the environment details in the cluster definition file to perform these tasks. If all configuration steps have been completed, this option can be run with no arguments (and pre-install and post-install checks will be performed automatically).
For a "dry-run," the following arguments are accepted:
-pr
Performs a pre-install environment check.
-po
Performs a post-install environment check.
-s SecretKey
Specifies the secret key on the command line required to decrypt sensitive data in the cluster definition file and suppresses the prompt for the secret key.
-f
Forces action without manual confirmation.
deploy
Creates file systems, deploys protocols, and configures protocol authentication on an existing GPFS cluster. The installation toolkit will use the environment details in the cluster definition file to perform these tasks. If all configuration steps have been completed, this option can be run with no arguments (and pre-deploy and post-deploy checks will be performed automatically). However, the secret key will be prompted for unless it is passed in as an argument using the -s flag.

For a "dry-run," the following arguments are accepted:

-pr
Performs a pre-deploy environment check.
-po
Performs a post-deploy environment check.
-s SecretKey
Specifies the secret key on the command line required to decrypt sensitive data in the cluster definition file and suppresses the prompt for the secret key.
-f
Forces action without manual confirmation.
upgrade
Upgrades all components of an existing GPFS cluster. This command can be used even if not all protocols are enabled. If a protocol is not enabled, its packages are still upgraded, but the service is not started.
The installation toolkit will use environment details in the cluster definition file to perform these tasks. To perform environment health checks prior to and after the upgrade, run the spectrumscale upgrade command with the -pr and -po arguments. This is not required, however, because upgrade with no arguments also runs these checks. The following arguments are accepted:
-ve
Shows the current versions of installed packages and the available version to upgrade to.
-pr
Performs health checks on the cluster prior to the upgrade.
-po
Performs health checks on the cluster after the upgrade has been completed.
-f
Forces action without manual confirmation.
installgui
Invokes the installation GUI that can be used to install the IBM Spectrum Scale software on cluster nodes, create an IBM Spectrum Scale cluster, and configure NTP. The installation GUI is used only for installing the system; a separate management GUI must be used for configuring and managing the system. The installation GUI cannot be used to upgrade the software in an existing IBM Spectrum Scale system. For more information, see Installing IBM Spectrum Scale by using the graphical user interface (GUI).
start
Starts the installation GUI.
status
Displays the status of the processes that are running on the installation GUI.
stop
Stops the installation GUI through the CLI. The installation process through the GUI automatically stops when you exit the installation GUI.
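For example:
  ./spectrumscale installgui start
  ./spectrumscale installgui status
  ./spectrumscale installgui stop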

Exit status

0
Successful completion.
nonzero
A failure has occurred.

Security

You must have root authority to run the spectrumscale command.

The node on which the command is issued must be able to execute remote shell commands on any other node in the cluster without the use of a password and without producing any extraneous messages. For more information, see Requirements for administering a GPFS file system.

Examples

Creating a new IBM Spectrum Scale cluster

  1. To instantiate your Chef zero server, issue a command similar to the following:
    ./spectrumscale setup -s 192.168.0.1
  2. To designate NSD server nodes in your environment to use for the installation, issue this command:
    ./spectrumscale node add FQDN -n
  3. To add four non-shared NSDs seen by a primary NSD server only, issue this command:
    ./spectrumscale nsd add -p FQDN_of_Primary_NSD_Server /dev/dm-1 /dev/dm-2 /dev/dm-3 /dev/dm-4
  4. To add four non-shared NSDs seen by both a primary NSD server and a secondary NSD server, issue this command:
    ./spectrumscale nsd add -p FQDN_of_Primary_NSD_Server -s FQDN_of_Secondary_NSD_Server\
     /dev/dm-1 /dev/dm-2 /dev/dm-3 /dev/dm-4
  5. To define a shared root file system using two NSDs and a file system fs1 using two NSDs, issue these commands:
    ./spectrumscale nsd list
    ./spectrumscale filesystem list
    ./spectrumscale nsd modify nsd1 -fs cesSharedRoot
    ./spectrumscale nsd modify nsd2 -fs cesSharedRoot
    ./spectrumscale nsd modify nsd3 -fs fs1
    ./spectrumscale nsd modify nsd4 -fs fs1
  6. To designate GUI nodes in your environment to use for the installation, issue this command:
    ./spectrumscale node add FQDN -g -a
  7. To designate additional client nodes in your environment to use for the installation, issue this command:
    ./spectrumscale node add FQDN
  8. To allow the installation toolkit to reconfigure Performance Monitoring if it detects any existing configurations, issue this command:
    ./spectrumscale config perfmon -r on
  9. To name your cluster, issue this command:
    ./spectrumscale config gpfs -c Cluster_Name
  10. To configure the call home function with the mandatory parameters, issue this command:
    ./spectrumscale callhome config -n username -i 456123 -e username@example.com -cn US
    If you do not want to use call home, disable it by issuing the following command:
    ./spectrumscale callhome disable
    For more information, see Enabling and configuring call home using the installation toolkit.
  11. To review the configuration prior to installation, issue these commands:
    ./spectrumscale node list
    ./spectrumscale nsd list
    ./spectrumscale filesystem list
    ./spectrumscale config gpfs --list
  12. To start the installation on your defined environment, issue these commands:
    ./spectrumscale install --precheck
    ./spectrumscale install
  13. To deploy file systems after a successful installation, do one of the following depending on your requirement:
    • If you want to use only the file systems, issue these commands:
      ./spectrumscale deploy --precheck
      ./spectrumscale deploy
    • If you want to deploy protocols also, see the examples in the Deploying protocols on an existing cluster section.

Deploying protocols on an existing cluster

Note: If your cluster contains ESS, see the Adding protocols to a cluster containing ESS section.
  1. To instantiate your Chef zero server, issue a command similar to the following:
    ./spectrumscale setup -s 192.168.0.1
  2. To designate protocol nodes in your environment to use for the deployment, issue this command:
    ./spectrumscale node add FQDN -p
  3. To designate GUI nodes in your environment to use for the deployment, issue this command:
    ./spectrumscale node add FQDN -g -a
  4. To configure protocols to point to a file system that will be used as a shared root, issue this command:
    ./spectrumscale config protocols -f FS_Name -m FS_Mountpoint
  5. To configure a pool of export IPs, issue this command:
    ./spectrumscale config protocols -e Comma_Separated_List_of_Exportpool_IPs
  6. To enable NFS on all protocol nodes, issue this command:
    ./spectrumscale enable nfs
  7. To enable SMB on all protocol nodes, issue this command:
    ./spectrumscale enable smb
  8. To enable Object on all protocol nodes, issue these commands:
    ./spectrumscale enable object
    ./spectrumscale config object -au Admin_User -ap Admin_Password -dp Database_Password
    ./spectrumscale config object -e FQDN
    ./spectrumscale config object -f FS_Name -m FS_Mountpoint
    ./spectrumscale config object -o Object_Fileset 
  9. To enable file audit logging, issue the following command:
    ./spectrumscale fileauditlogging enable

    For more information, see Enabling and configuring file audit logging using the installation toolkit.

  10. To review the configuration prior to deployment, issue these commands:
    ./spectrumscale config protocols
    ./spectrumscale config object
    ./spectrumscale node list
  11. To deploy protocols on your defined environment, issue these commands:
    ./spectrumscale deploy --precheck
    ./spectrumscale deploy

Deploying protocol authentication

Note: For the following example commands, it is assumed that the protocols cluster was deployed successfully using the spectrumscale command options.
  1. To enable file authentication with AD server on all protocol nodes, issue this command:
    ./spectrumscale auth file ad

    Fill out the template and save the information, and then issue the following commands:

    ./spectrumscale deploy --precheck
    ./spectrumscale deploy
  2. To enable Object authentication with AD server on all protocol nodes, issue this command:
    ./spectrumscale auth object ad

    Fill out the template and save the information, and then issue the following commands:

    ./spectrumscale deploy --precheck
    ./spectrumscale deploy

Upgrading an IBM Spectrum Scale cluster

  1. Extract the IBM Spectrum Scale package for the required code level by issuing a command similar to the following depending on the package name:
    ./Spectrum_Scale_Protocols_Standard-5.0.x.x-xxxxx
  2. Copy the cluster definition file from the prior installation to the latest installer location by issuing this command:
    cp -p /usr/lpp/mmfs/4.2.3.0/installer/configuration/clusterdefinition.txt\
     /usr/lpp/mmfs/5.0.1.0/installer/configuration/
    Note: This is a command example of a scenario where you are upgrading the system from IBM Spectrum Scale 4.2.3.0 to 5.0.1.0.

    You can populate the cluster definition file with the current cluster state by issuing the spectrumscale config populate command.

  3. Run the upgrade precheck from the installer directory of the latest code level extraction by issuing commands similar to the following:
    cd /usr/lpp/mmfs/Latest_Code_Level_Directory/installer
    ./spectrumscale upgrade --precheck
    Note: If you are upgrading to IBM Spectrum Scale version 4.2.2 or later, the upgrade precheck updates the operating system and CPU architecture fields in the cluster definition file. You can also update the operating system and CPU architecture fields in the cluster definition file by issuing the spectrumscale config update command.
  4. Run the upgrade by issuing this command:
    cd /usr/lpp/mmfs/Latest_Code_Level_Directory/installer
    ./spectrumscale upgrade

Adding to an installation process

  1. To add nodes to an installation, do the following:
    1. Add one or more node types using the following commands:
      • Client nodes:
        ./spectrumscale node add FQDN
      • NSD nodes:
        ./spectrumscale node add FQDN -n 
      • Protocol nodes:
        ./spectrumscale node add FQDN -p
      • GUI nodes:
        ./spectrumscale node add FQDN -g -a
    2. Install GPFS on the new nodes using the following commands:
      ./spectrumscale install --precheck
      ./spectrumscale install
    3. If protocol nodes are being added, deploy protocols using the following commands:
      ./spectrumscale deploy --precheck
      ./spectrumscale deploy
  2. To add NSDs to an installation, do the following:
    1. Verify that the NSD server connecting this new disk runs an OS compatible with the installation toolkit and that the NSD server exists within the cluster.
    2. Add NSDs to the installation using the following command:
      ./spectrumscale nsd add -p FQDN_of_Primary_NSD_Server Path_to_Disk_Device_File
    3. Run the installation using the following commands:
      ./spectrumscale install --precheck
      ./spectrumscale install
  3. To add file systems to an installation, do the following:
    1. Verify that free NSDs exist and that they can be listed by the installation toolkit using the following commands.
      mmlsnsd
      ./spectrumscale nsd list
    2. Define the file system using the following command:
      ./spectrumscale nsd modify NSD -fs File_System_Name
    3. Deploy the file system using the following commands:
      ./spectrumscale deploy --precheck
      ./spectrumscale deploy
  4. To enable another protocol on an existing cluster that has protocols enabled, do the following steps depending on your configuration:
    1. Enable NFS on all protocol nodes using the following command:
      ./spectrumscale enable nfs
    2. Enable SMB on all protocol nodes using the following command:
      ./spectrumscale enable smb
    3. Enable Object on all protocol nodes using the following commands:
      ./spectrumscale enable object
      ./spectrumscale config object -au Admin_User -ap Admin_Password -dp Database_Password
      ./spectrumscale config object -e FQDN
      ./spectrumscale config object -f FS_Name -m FS_Mountpoint
      ./spectrumscale config object -o Object_Fileset 
    4. Enable the new protocol using the following commands:
      ./spectrumscale deploy --precheck
      ./spectrumscale deploy

Using the installation toolkit in a cluster containing ESS

For a list of assumptions and restrictions for using the installation toolkit in a cluster containing ESS, see the -st SetupType option. When using the installation toolkit in a cluster containing ESS, use the following high-level steps.
  1. Add protocol nodes in the ESS cluster by issuing the following command.
    ./spectrumscale node add NodeName -p
    You can add other types of nodes such as client nodes, NSD servers, and so on depending on your requirements. For more information, see Defining the cluster topology for the installation toolkit.
  2. Specify one of the newly added protocol nodes as the installer node and specify the setup type as ess by issuing the following command.
    ./spectrumscale setup -s NodeIP -i SSHIdentity -st ess

    The installer node is the node on which the installation toolkit is extracted and from where the installation toolkit command, spectrumscale, is initiated.

  3. Specify the EMS node of the ESS system to the installation toolkit by issuing the following command.
    ./spectrumscale node add NodeName -e

    This node is also automatically specified as the admin node. The admin node, which must be the EMS node in an ESS configuration, is the node that has access to all other nodes to perform configuration during the installation.

  4. Proceed with specifying other configuration options, installing, and deploying by using the installation toolkit. For more information, see Defining the cluster topology for the installation toolkit, Installing GPFS and creating a GPFS cluster, and Deploying protocols.
For more information, see ESS awareness with the installation toolkit.

Manually adding protocols to a cluster containing ESS

For information on preparing a cluster that contains ESS for deploying protocols, see Preparing a cluster that contains ESS for adding protocols.

After you have prepared your cluster that contains ESS for adding protocols, you can use commands similar to the ones listed in the Deploying protocols on an existing cluster section.

Diagnosing an error during install, deploy, or upgrade

  1. Note the screen output indicating the error. This helps in narrowing down the general failure.

    When a failure occurs, the screen output also shows the log file containing the failure.

  2. Open the log file in an editor such as vi.
  3. Go to the end of the log file and search upwards for the text FATAL.
  4. Find the topmost occurrence of FATAL (or first FATAL error that occurred) and look above and below this error for further indications of the failure.

For more information, see Finding deployment related error messages more easily and using them for failure analysis.

Location

/usr/lpp/mmfs/5.0.1.0/installer