spectrumscale command

Installs and configures GPFS™; adds nodes to a cluster; deploys and configures protocols, performance monitoring tools, and authentication services; and upgrades GPFS and protocols.

Synopsis

spectrumscale setup [-i SSHIdentity] [-s ServerIP] [--storesecret]

or

spectrumscale node add [-g] [-q] [-m] [-a] [-n] [-p [ExportIP]] Node

or

spectrumscale node load [-q] [-m] [-a] [-n] NodeFile

or

spectrumscale node delete [-f] Node

or

spectrumscale node clear [-f]

or

spectrumscale node list

or

spectrumscale config gpfs [-l] [-c ClusterName] [-p {default | randomio}]
                          [-r RemoteShell] [-rc RemoteFileCopy]
                          [-e EphemeralPortRange]

or

spectrumscale config protocols [-l] [-f FileSystem] [-m MountPoint] [-e ExportIPPool]

or

spectrumscale config object [-f FileSystem] [-m MountPoint][-e EndPoint] [-o ObjectBase]
                            [-i InodeAllocation] [-t AdminToken]
                            [-au AdminUser] [-ap AdminPassword]
                            [-su SwiftUser] [-sp SwiftPassword]
                            [-dp DatabasePassword]
                            [-mr MultiRegion] [-rn RegionNumber]
                            [-s3 {on | off}] 

or

spectrumscale config perfmon [-r {on | off}] [-d {on | off}] [-l]

or

spectrumscale config ntp [-e {on | off} [-l] [-s Upstream_Servers]]

or

spectrumscale config clear {gpfs | protocols | object}

or

spectrumscale config update

or

spectrumscale nsd add -p Primary [-s Secondary] [-fs FileSystem]
                      [-po Pool]
                      [-u {dataOnly | dataAndMetadata | metaDataOnly | descOnly | localCache}]
                      [-fg FailureGroup] [--no-check]
                      PrimaryDevice [PrimaryDevice ...]

or

spectrumscale nsd balance [--node Node | --all]

or

spectrumscale nsd delete NSD

or

spectrumscale nsd modify [-n Name]
                         [-u {dataOnly | dataAndMetadata | metadataOnly | descOnly}]
                         [-po Pool] [-fs FileSystem] [-fg FailureGroup]
                         NSD

or

spectrumscale nsd servers 

or

spectrumscale nsd clear [-f]

or

spectrumscale nsd list 

or

spectrumscale filesystem modify [-b {64K | 128K | 256K | 512K | 1M | 2M | 4M | 8M | 16M}] 
                                [-m MountPoint]
                                FileSystem

or

spectrumscale filesystem list

or

spectrumscale auth file {ldap | ad | nis | none}  

or

spectrumscale auth  object [--https] [--pki] {local | external | ldap | ad}  

or

spectrumscale auth  commitsettings  

or

spectrumscale auth  clear  

or

spectrumscale enable {object | nfs | smb} 

or

spectrumscale disable {object | nfs | smb} 
CAUTION:
Disabling object service discards the OpenStack Swift configuration and ring files from the CES cluster. If the OpenStack Keystone configuration is configured locally, disabling object storage also discards the Keystone configuration and database files from the CES cluster. However, the data is not removed. To subsequently enable the object service with a clean configuration and new data, remove the object store fileset and set up the object environment again. See the mmobj swift base command. For more information, contact the IBM® Support Center.

or

spectrumscale install [-pr] [-po] [-s] [-f]

or

spectrumscale deploy [-pr] [-po] [-s] [-f]

or

spectrumscale upgrade [-pr  | -po | -ve] [-f]                

or

spectrumscale installgui {start | stop | status}

Availability

The spectrumscale command is available as follows:
  • Available with IBM Spectrum Scale™ Standard Edition or higher.

Description

Use the spectrumscale command (also called the spectrumscale installation toolkit) to do the following:
  • Install and configure GPFS.
  • Add GPFS nodes to an existing cluster.
  • Deploy and configure SMB, NFS, OpenStack Swift, and performance monitoring tools on top of GPFS.
  • Configure authentication services for protocols.
  • Upgrade GPFS and protocols.
Note: The following prerequisites and assumptions apply:
  • The spectrumscale installation toolkit requires the following package:
    • python-2.7
  • TCP traffic from the nodes must be allowed through the firewall on port 8889 (for communication with the Chef Zero server) and port 10080 (for package distribution) so that the nodes can communicate with the install toolkit; an example is shown after this list.
  • The nodes must have external Internet access, or access to local repository replicas that the nodes can reach, to install the necessary packages (dependency installation). For more information, see the Repository setup section of the Installation prerequisites topic in IBM Spectrum Scale: Concepts, Planning, and Installation Guide.
  • To install protocols, there must be a GPFS cluster running a minimum version of 4.1.1.0 with CCR enabled.
  • The node that you plan to run the install toolkit from must be able to execute remote shell commands on any other node in the cluster without the use of a password and without producing any extraneous messages.
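For example, on nodes that use firewalld, commands similar to the following open the required ports (this is an illustrative sketch; adapt it to the firewall solution and zones used in your environment):

  firewall-cmd --permanent --add-port=8889/tcp
  firewall-cmd --permanent --add-port=10080/tcp
  firewall-cmd --reload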

Parameters

setup
Installs Chef and its components, and configures the install node in the cluster definition file. The IP address passed in should be that of the node from which the spectrumscale installation toolkit will be run. The SSH key passed in should be the key that the installer uses for passwordless SSH to all other nodes. This option accepts the following arguments:
-i SSHIdentity
Adds the path to the SSH identity file into the configuration.
-s ServerIP
Adds the control node IP into the configuration.
--storesecret
Disables the prompts for the encryption secret.
CAUTION:
If you use this option, passwords will not be securely stored.

This is the first command to run to set up IBM Spectrum Scale.
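For example, to set up the install node with a specific SSH identity, you can issue a command similar to the following (the IP address and key path shown are placeholders):

  spectrumscale setup -s 192.168.0.1 -i /root/.ssh/id_rsa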

node
Used to add, remove, or list nodes in the cluster definition file. This command only interacts with this configuration file and does not directly configure nodes in the cluster itself. The nodes that have an entry in the cluster definition file will be used during install, deploy, or upgrade. This option accepts the following arguments:
add Node
Adds the specified node and configures it according to the following arguments:
-g
Adds GPFS Graphical User Interface servers to the cluster definition file.
-q
Configures the node as a quorum node.
-m
Configures the node as a manager node.
-a
Configures the node as an admin node.
-n
Configures the node as an NSD server.
-p [ExportIP]
Configures the node as a protocol node and optionally assigns it an IP.
Node
Specifies the node name.
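For example, to add a node as a quorum, manager, and admin node, or to add a protocol node with an export IP assigned, you can issue commands similar to the following (the host names and IP address shown are placeholders):

  ./spectrumscale node add node1.example.com -q -m -a
  ./spectrumscale node add prot1.example.com -p 10.0.0.101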
load NodeFile
Loads the specified file containing a list of nodes (one node per line), adds the nodes in the file, and configures them according to the following:
-q
Configures the node as a quorum node.
-m
Configures the node as a manager node.
-a
Configures the node as an admin node.
-n
Configures the node as an NSD server.
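For example, assuming a file named /tmp/nsdnodes.txt that lists one host name per line, you can add all of those nodes as NSD server nodes with a command similar to the following:

  ./spectrumscale node load -n /tmp/nsdnodes.txt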
delete Node
Removes the specified node from the configuration. The following option is accepted.
-f
Forces the action without manual confirmation.
clear
Clears the current node configuration. The following option is accepted:
-f
Forces the action without manual confirmation.
list
Lists the nodes configured in your environment.
config
Used to set properties in the cluster definition file that will be used during install, deploy, or upgrade. This command only interacts with this configuration file and does not directly configure these properties on the GPFS cluster. This option accepts the following arguments:
gpfs
Sets any of the following GPFS-specific properties to be used during GPFS installation and configuration:
-l
Lists the current settings in the configuration.
-c ClusterName
Specifies the cluster name to be set on cluster creation.
-p
Specifies the profile to be set on cluster creation. The following values are accepted:
default
Specifies that the GpfsProtocolDefaults profile is to be used.
randomio
Specifies that the GpfsProtocolRandomIO profile is to be used.
-r RemoteShell
Specifies the remote shell binary to be used by GPFS. If no remote shell is specified in the cluster definition file, /usr/bin/ssh will be used as the default.
-rc RemoteFileCopy
Specifies the remote file copy binary to be used by GPFS. If no remote file copy binary is specified in the cluster definition file, /usr/bin/scp will be used as the default.
-e EphemeralPortRange

Specifies an ephemeral port range to be set on all GPFS nodes. If no port range is specified in the cluster definition, 60000-61000 will be used as default.

For information about ephemeral port range, see the topic about GPFS port usage in Miscellaneous advanced administration topics.
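For example, to set the cluster name, the default profile, and an ephemeral port range, you can issue a command similar to the following (the cluster name shown is a placeholder):

  ./spectrumscale config gpfs -c mycluster.example.com -p default -e 60000-61000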

protocols
Provides details of the GPFS environment that will be used during protocol deployment, according to the following options:
-l
Lists the current settings in the configuration.
-f FileSystem
Specifies the file system.
-m MountPoint
Specifies the mount point.
-e ExportIPPool
Specifies a comma-separated list of additional CES export IPs to configure on the cluster.
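For example, to point protocols at a shared file system and define a pool of CES export IPs, you can issue a command similar to the following (the file system name, mount point, and IP addresses shown are placeholders):

  ./spectrumscale config protocols -f cesSharedRoot -m /ibm/cesSharedRoot -e 10.0.0.101,10.0.0.102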
object
Sets any of the following Object-specific properties to be used during Object deployment and configuration:
-l
Lists the current settings in the configuration.
-f FileSystem
Specifies the file system.
-m MountPoint
Specifies the mount point.
-e EndPoint
Specifies the host name that will be used for access to the object store. This should be a round-robin DNS entry that maps to all CES IP addresses, or the address of a load balancer front end, so that the Keystone and object traffic routed to this host name is distributed across the protocol nodes. In other words, the endpoint is a DNS name or load balancer address that maps to the group of export IPs (that is, the CES IPs that were assigned to the protocol nodes).
-o ObjectBase
Specifies the object base.
-i InodeAllocation
Specifies the inode allocation.
-t AdminToken
Specifies the admin token.
-au AdminUser
Specifies the user name for the admin.
-ap AdminPassword
Specifies the admin user password. This credential is for the Keystone administrator. This user can be local or on a remote authentication server, depending on the authentication type used.
Note: You will be prompted to enter a Secret Encryption Key which will be used to securely store the password. Choose a memorable pass phrase which you will be prompted for each time you enter the password.
-su SwiftUser
Specifies the Swift user name. This credential is for the Swift services administrator. All Swift services are run in this user's context. This user can be local or on a remote authentication server, depending on the authentication type used.
-sp SwiftPassword
Specifies the Swift user password.
Note: You will be prompted to enter a Secret Encryption Key which will be used to securely store the password. Choose a memorable pass phrase which you will be prompted for each time you enter the password.
-dp DatabasePassword
Specifies the object database user password.
Note: You will be prompted to enter a Secret Encryption Key which will be used to securely store the password. Choose a memorable pass phrase which you will be prompted for each time you enter the password.
-mr MultiRegion
Enables the multi-region option.
-rn RegionNumber
Specifies the region number.
-s3 on | off
Specifies whether s3 is to be turned on or off.
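For example, to set the object file system, mount point, endpoint, and object base, you can issue a command similar to the following (all names shown are placeholders):

  ./spectrumscale config object -f ObjectFS -m /ibm/ObjectFS -e objstore.example.com -o object_fileset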
perfmon
Sets Performance Monitoring-specific properties to be used during installation and configuration:
-r on | off
Specifies if the install toolkit can reconfigure Performance Monitoring.
Note: When set to on, reconfiguration might move the collector to different nodes and it might reset sensor data. Custom sensors and data might be erased.
-d on | off
Specifies if Performance Monitoring should be disabled (not installed).
Note: When set to on, pmcollector and pmsensor packages are not installed or upgraded. Existing sensor or collector state remains as is.
-l
Lists the current settings in the configuration.
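For example, to allow the install toolkit to reconfigure Performance Monitoring, or to list the current Performance Monitoring settings, you can issue commands similar to the following:

  ./spectrumscale config perfmon -r on
  ./spectrumscale config perfmon -l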
ntp
Used to add, list, or remove NTP settings in the configuration. NTP will be configured on the cluster as follows: the admin node points to the upstream NTP servers that you provide to determine the correct time, and the rest of the nodes in the cluster point to the admin node to obtain the time.
-s Upstream_Servers
Specifies the upstream NTP servers to be used. You can use an upstream server that you have already configured, but it cannot be part of your IBM Spectrum Scale cluster.
Note: NTP works best with at least four upstream servers. If you provide fewer than four, you will receive a warning during installation advising that you add more.
-l
Lists the current settings of your NTP setup.
-e on | off
Specifies whether NTP is enabled. If this option is set to off, you will receive a warning during installation.
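For example, to enable NTP with four upstream servers, you can issue a command similar to the following (the server names shown are placeholders, given here as a comma-separated list):

  ./spectrumscale config ntp -e on -s ntp1.example.com,ntp2.example.com,ntp3.example.com,ntp4.example.com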
clear
Removes specified properties from the cluster definition file:
gpfs
Removes GPFS related properties from the cluster definition file:
-c
Clears the GPFS cluster name.
-p
Clears the GPFS profile to be applied on cluster creation. The following values are accepted:
default
Specifies that the GpfsProtocolDefaults profile is to be cleared.
randomio
Specifies that the GpfsProtocolRandomIO profile is to be cleared.
-r RemoteShell
Clears the absolute path name of the remote shell command GPFS uses for node communication. For example, /usr/bin/ssh.
-rc RemoteFileCopy
Clears the absolute path name of the remote copy command GPFS uses when transferring files between nodes. For example, /usr/bin/scp.
-e EphemeralPortRange
Clears the GPFS daemon communication port range.
--all
Clears all settings in the cluster definition file.
protocols
Removes protocols related properties from the cluster definition file:
-f
Clears the shared file system name.
-m
Clears the shared file system mount point.
-e
Clears a comma-separated list of additional CES export IPs to configure on the cluster.
--all
Clears all settings in the cluster definition file.
object
Removes object related properties from the cluster definition file:
-f
Clears the object file system name.
-m
Clears the absolute path to your file system on which the objects reside.
-e
Clears the host name which maps to all CES IP addresses in a round-robin manner.
-o
Clears the GPFS fileset to be created or used as the object base.
-i
Clears the GPFS fileset inode allocation to be used by the object base.
-t
Clears the admin token to be used by Keystone.
-au
Clears the user name for the admin user.
-ap
Clears the password for the admin user.
-su
Clears the user name for the Swift user.
-sp
Clears the password for the Swift user.
-dp
Clears the password for the object database.
-s3
Clears the S3 API setting, if it is enabled.
-mr
Clears the multi-region data file path.
-rn
Clears the region number for the multi-region configuration.
--all

Clears all settings in the cluster definition file.
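For example, to clear all object-related settings, or only the S3 setting, you can issue commands similar to the following:

  ./spectrumscale config clear object --all
  ./spectrumscale config clear object -s3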

update
Updates the operating system and CPU architecture fields in the cluster definition file. This update is automatically done if you run the upgrade precheck with the spectrumscale upgrade --precheck command while upgrading to IBM Spectrum Scale release 4.2.2 or later.
nsd
Used to add, remove, list or balance NSDs, as well as add file systems in the cluster definition file. This command only interacts with this configuration file and does not directly configure NSDs on the cluster itself. The NSDs that have an entry in the cluster definition file will be used during install. This option accepts the following arguments:
add
Adds an NSD to the configuration, according to the following specifications:
-p Primary
Specifies the primary NSD server name.
-s Secondary
Specifies the secondary NSD server name. This option can be repeated to specify multiple secondary NSD servers.
-fs FileSystem
Specifies the file system to which the NSD is assigned.
-po Pool
Specifies the file system pool.
-u
Specifies NSD usage. The following values are accepted:
dataOnly
dataAndMetadata
metaDataOnly
descOnly
localCache
-fg FailureGroup
Specifies the failure group to which the NSD belongs.
--no-check
Specifies not to check for the device on the server.
PrimaryDevice
Specifies the device name on the primary NSD server.
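For example, to add an NSD with a primary and a secondary server and assign it to a file system, pool, usage type, and failure group, you can issue a command similar to the following (the server names, file system name, and device shown are placeholders):

  ./spectrumscale nsd add -p nsd1.example.com -s nsd2.example.com -fs fs1 -po system -u dataAndMetadata -fg 1 /dev/dm-1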
balance
Balances the NSD preferred node between the primary and secondary nodes. The following options are accepted:
--node Node
Specifies the node to move NSDs from when balancing.
--all
Specifies that all NSDs are to be balanced.
delete NSD
Removes the specified NSD from the configuration.
modify NSD
Modifies the NSD parameters on the specified NSD, according to the following options:
-n Name
Specifies the name.
-u
The following values are accepted:
dataOnly
dataAndMetadata
metadataOnly
descOnly
-po Pool
Specifies the pool.
-fs FileSystem
Specifies the file system.
-fg FailureGroup
Specifies the failure group.
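For example, to change the usage type and failure group of an NSD, you can issue a command similar to the following (the NSD name shown is a placeholder):

  ./spectrumscale nsd modify nsd1 -u metadataOnly -fg 2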
servers
Adds and removes servers, and sets the primary server for NSDs.
clear
Clears the current NSD configuration. The following option is accepted:
-f
Forces the action without manual confirmation.
list
Lists the NSDs configured in your environment.
filesystem
Used to list or modify file systems in the cluster definition file. This command only interacts with this configuration file and does not directly modify file systems on the cluster itself. To modify the properties of a file system in the cluster definition file, the file system must first be added with spectrumscale nsd. This option accepts the following arguments:
modify
Modifies the file system attributes. This option accepts the following arguments:
-b
Specifies the file system block size. This argument accepts the following values: 64K, 128K, 256K, 512K, 1M, 2M, 4M, 8M, 16M.
-m MountPoint
Specifies the mount point.
FileSystem
Specifies the file system to be modified.
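For example, to change the block size and mount point of a file system that is already defined in the cluster definition file, you can issue a command similar to the following (the file system name and mount point shown are placeholders):

  ./spectrumscale filesystem modify -b 1M -m /ibm/fs1 fs1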
list
Lists the file systems configured in your environment.
auth
Used to configure either Object or File authentication on protocols in the cluster definition file. This command only interacts with this configuration file and does not directly configure authentication on the protocols. To configure authentication on the GPFS cluster during a deploy, authentication settings must be provided through the use of a template file. This option accepts the following arguments:
file
Specifies file authentication.
One of the following must be specified:
ldap
ad
nis
none
object
Specifies object authentication.
Either of the following options are accepted:
--https
--pki
One of the following must be specified:
local
external
ldap
ad

Both file and object authentication can be set up with the authentication backend server specified. Running this command will open a template settings file to be filled out before installation.
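For example, to configure object authentication with the external option, or with LDAP over HTTPS, you can issue commands similar to the following (each command opens the corresponding template settings file):

  ./spectrumscale auth object external
  ./spectrumscale auth object --https ldap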

commitsettings
Merges authentication settings into the main cluster definition file.
clear
Clears your current authentication configuration.
enable
Used to enable Object, SMB or NFS in the cluster definition file. This command only interacts with this configuration file and does not directly enable any protocols on the GPFS cluster itself. The default configuration is that all protocols are disabled. If a protocol is enabled in the cluster definition file, this protocol will be enabled on the GPFS cluster during deploy. This option accepts the following arguments:
object
Object
nfs
NFS
smb
SMB
disable
Used to disable Object, SMB, or NFS in the cluster definition file. This command only interacts with this configuration file and does not directly disable any protocols on the GPFS cluster itself. The default configuration is that all protocols are disabled, so this command is only necessary if a protocol has previously been enabled in the cluster definition file but is no longer required.
Note: Disabling a protocol in the cluster definition file does not disable that protocol on the GPFS cluster during a deploy; it merely means that the protocol will not be enabled during a deploy.

This option accepts the following arguments:

object
Object
CAUTION:
Disabling object service discards the OpenStack Swift configuration and ring files from the CES cluster. If the OpenStack Keystone configuration is configured locally, disabling object storage also discards the Keystone configuration and database files from the CES cluster. However, the data is not removed. To subsequently enable the object service with a clean configuration and new data, remove the object store fileset and set up the object environment again. See the mmobj swift base command. For more information, contact the IBM Support Center.
nfs
NFS
smb
SMB
install
Installs GPFS, creates a GPFS cluster, creates NSDs, and adds nodes to an existing GPFS cluster. The spectrumscale installation toolkit uses the environment details in the cluster definition file to perform these tasks. If all configuration steps have been completed, this option can be run with no arguments (pre-install and post-install checks are performed automatically).
For a "dry-run," the following arguments are accepted:
-pr
Performs a pre-install environment check.
-po
Performs a post-install environment check.
-s SecretKey
Specifies the secret key on the command line required to decrypt sensitive data in the cluster definition file and suppresses the prompt for the secret key.
-f
Forces action without manual confirmation.
deploy
Creates file systems, deploys protocols, and configures protocol authentication on an existing GPFS cluster. The spectrumscale installation toolkit will use the environment details in the cluster definition file to perform these tasks. If all configuration steps have been completed, this option can be run with no arguments (and pre-deploy and post-deploy checks will be performed automatically). However, the secret key will be prompted for unless it is passed in as an argument using the -s flag.

For a "dry-run," the following arguments are accepted:

-pr
Performs a pre-deploy environment check.
-po
Performs a post-deploy environment check.
-s SecretKey
Specifies the secret key on the command line required to decrypt sensitive data in the cluster definition file and suppresses the prompt for the secret key.
-f
Forces action without manual confirmation.
upgrade
Upgrades all components of an existing GPFS cluster. This command can still be used even if all protocols are not enabled. If a protocol is not enabled, then the packages will still be upgraded, but the service won't be started.
The spectrumscale installation toolkit uses the environment details in the cluster definition file to perform these tasks. To perform environment health checks before and after the upgrade, run the spectrumscale upgrade command with the -pr and -po arguments. This is not required, however, because running upgrade with no arguments also performs these checks. The following arguments are accepted:
-ve
Shows the current versions of installed packages and the available version to upgrade to.
-pr
Performs health checks on the cluster prior to the upgrade.
-po
Performs health checks on the cluster after the upgrade has been completed.
-f
Forces action without manual confirmation.
installgui
Invokes the installation GUI that can be used to install the IBM Spectrum Scale software on cluster nodes, create an IBM Spectrum Scale cluster, and configure NTP. The installation GUI is used only for installing the system; a separate management GUI must be used for configuring and managing the system. The installation GUI cannot be used to upgrade the software in an existing IBM Spectrum Scale system. For more information, see Installing IBM Spectrum Scale by using the graphical user interface (GUI).
start
Starts the installation GUI.
status
Displays the status of the processes that are running on the installation GUI.
stop
Stops the installation GUI through the CLI. The installation process through the GUI automatically stops when you exit the installation GUI.
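For example, to start the installation GUI and then check its status, you can issue commands similar to the following:

  ./spectrumscale installgui start
  ./spectrumscale installgui status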

Exit status

0
Successful completion.
nonzero
A failure has occurred.

Security

You must have root authority to run the spectrumscale command.

The node on which the command is issued must be able to execute remote shell commands on any other node in the cluster without the use of a password and without producing any extraneous messages. For more information, see Requirements for administering a GPFS file system.

Examples

Creating a new IBM Spectrum Scale cluster

  1. To instantiate your Chef Zero server, issue a command similar to the following:
    spectrumscale setup -s 192.168.0.1
  2. To designate NSD server nodes in your environment to use for the installation, issue this command:
    ./spectrumscale node add FQDN -n
  3. To add four non-shared NSDs seen by a primary NSD server only, issue this command:
    ./spectrumscale nsd add -p FQDN_of_Primary_NSD_Server /dev/dm-1 /dev/dm-2 /dev/dm-3 /dev/dm-4
  4. To add four non-shared NSDs seen by both a primary NSD server and a secondary NSD server, issue this command:
    ./spectrumscale nsd add -p FQDN_of_Primary_NSD_Server -s FQDN_of_Secondary_NSD_Server\
     /dev/dm-1 /dev/dm-2 /dev/dm-3 /dev/dm-4
  5. To define a shared root file system using two NSDs and a file system fs1 using two NSDs, issue these commands:
    ./spectrumscale nsd list
    ./spectrumscale filesystem list
    ./spectrumscale nsd modify nsd1 -fs cesSharedRoot
    ./spectrumscale nsd modify nsd2 -fs cesSharedRoot
    ./spectrumscale nsd modify nsd3 -fs fs1
    ./spectrumscale nsd modify nsd4 -fs fs1
  6. To designate GUI nodes in your environment to use for the installation, issue this command:
    ./spectrumscale node add FQDN -g -a
  7. To designate additional client nodes in your environment to use for the installation, issue this command:
    ./spectrumscale node add FQDN
  8. To allow the installation toolkit to reconfigure Performance Monitoring if it detects any existing configurations, issue this command:
    ./spectrumscale config perfmon -r on
  9. To name your cluster, issue this command:
    ./spectrumscale config gpfs -c Cluster_Name
  10. To review the configuration prior to installation, issue these commands:
    ./spectrumscale node list
    ./spectrumscale nsd list
    ./spectrumscale filesystem list
    ./spectrumscale config gpfs --list
  11. To start the installation on your defined environment, issue these commands:
    ./spectrumscale install --precheck
    ./spectrumscale install
  12. To deploy file systems after a successful installation, do one of the following depending on your requirement:
    • If you want to use only the file systems, issue these commands:
      ./spectrumscale deploy --precheck
      ./spectrumscale deploy
    • If you want to deploy protocols also, see the examples in the Deploying protocols on an existing cluster section.

Deploying protocols on an existing cluster

Note: If your cluster contains ESS, see the Adding protocols to a cluster containing ESS section.
  1. To instantiate your Chef Zero server, issue a command similar to the following:
    spectrumscale setup -s 192.168.0.1
  2. To designate protocol nodes in your environment to use for the deployment, issue this command:
    ./spectrumscale node add FQDN -p
  3. To designate GUI nodes in your environment to use for the deployment, issue this command:
    ./spectrumscale node add FQDN -g -a
  4. To configure protocols to point to a file system that will be used as a shared root, issue this command:
    ./spectrumscale config protocols -f FS_Name -m FS_Mountpoint
  5. To configure a pool of export IPs, issue this command:
    ./spectrumscale config protocols -e Comma_Separated_List_of_Exportpool_IPs
  6. To enable NFS on all protocol nodes, issue this command:
    ./spectrumscale enable nfs
  7. To enable SMB on all protocol nodes, issue this command:
    ./spectrumscale enable smb
  8. To enable Object on all protocol nodes, issue these commands:
    ./spectrumscale enable object
    ./spectrumscale config object -au Admin_User -ap Admin_Password -dp Database_Password
    ./spectrumscale config object -e FQDN
    ./spectrumscale config object -f FS_Name -m FS_Mountpoint
    ./spectrumscale config object -o Object_Fileset 
  9. To review the configuration prior to deployment, issue these commands:
    ./spectrumscale config protocols
    ./spectrumscale config object
    ./spectrumscale node list
  10. To deploy protocols on your defined environment, issue these commands:
    ./spectrumscale deploy --precheck
    ./spectrumscale deploy

Deploying protocol authentication

Note: For the following example commands, it is assumed that the protocols cluster was deployed successfully using the spectrumscale command options.
  1. To enable file authentication with AD server on all protocol nodes, issue this command:
    ./spectrumscale auth file ad

    Fill out the template and save the information, and then issue the following commands:

    ./spectrumscale deploy --precheck
    ./spectrumscale deploy
  2. To enable Object authentication with AD server on all protocol nodes, issue this command:
    ./spectrumscale auth object ad

    Fill out the template and save the information, and then issue the following commands:

    ./spectrumscale deploy --precheck
    ./spectrumscale deploy

Upgrading an IBM Spectrum Scale cluster

  1. Extract the IBM Spectrum Scale package for the required code level by issuing a command similar to the following depending on the package name:
    ./Spectrum_Scale_Protocols_Standard-4.2.x.x-xxxxx
  2. Copy the cluster definition file from the prior installation to the latest installer location by issuing this command:
    cp -p /usr/lpp/mmfs/4.2.1.0/installer/configuration/clusterdefinition.txt\
     /usr/lpp/mmfs/4.2.2.0/installer/configuration/
    Note: This is a command example of when you are upgrading from 4.2.1.0 to 4.2.2.0.
  3. Run the upgrade precheck from the installer directory of the latest code level extraction by issuing commands similar to the following:
    cd /usr/lpp/mmfs/Latest_Code_Level_Directory/installer
    ./spectrumscale upgrade --precheck
    Note: If you are upgrading to IBM Spectrum Scale version 4.2.2, the upgrade precheck updates the operating system and CPU architecture fields in the cluster definition file. You can also update the operating system and CPU architecture fields in the cluster definition file by issuing the spectrumscale config update command.
  4. Run the upgrade by issuing this command:
    cd /usr/lpp/mmfs/Latest_Code_Level_Directory/installer
    ./spectrumscale upgrade

Adding to an installation process

  1. To add nodes to an installation, do the following:
    1. Add one or more node types using the following commands:
      • Client nodes:
        ./spectrumscale node add FQDN
      • NSD nodes:
        ./spectrumscale node add FQDN -n 
      • Protocol nodes:
        ./spectrumscale node add FQDN -p
      • GUI nodes:
        ./spectrumscale node add FQDN -g -a
    2. Install GPFS on the new nodes using the following commands:
      ./spectrumscale install --precheck
      ./spectrumscale install
    3. If protocol nodes are being added, deploy protocols using the following commands:
      ./spectrumscale deploy --precheck
      ./spectrumscale deploy
  2. To add NSDs to an installation, do the following:
    1. Verify that the NSD server connecting this new disk runs an OS compatible with the spectrumscale installation toolkit and that the NSD server exists within the cluster.
    2. Add NSDs to the installation using the following command:
      ./spectrumscale nsd add -p FQDN_of_Primary_NSD_Server Path_to_Disk_Device_File
    3. Run the installation using the following commands:
      ./spectrumscale install --precheck
      ./spectrumscale install
  3. To add file systems to an installation, do the following:
    1. Verify that free NSDs exist and that they can be listed by the spectrumscale installation toolkit using the following commands.
      mmlsnsd
      ./spectrumscale nsd list
    2. Define the file system using the following command:
      ./spectrumscale nsd modify NSD -fs File_System_Name
    3. Deploy the file system using the following commands:
      ./spectrumscale deploy --precheck
      ./spectrumscale deploy
  4. To enable another protocol on an existing cluster that has protocols enabled, do the following steps depending on your configuration:
    1. Enable NFS on all protocol nodes using the following command:
      ./spectrumscale enable nfs
    2. Enable SMB on all protocol nodes using the following command:
      ./spectrumscale enable smb
    3. Enable Object on all protocol nodes using the following commands:
      ./spectrumscale enable object
      ./spectrumscale config object -au Admin_User -ap Admin_Password -dp Database_Password
      ./spectrumscale config object -e FQDN
      ./spectrumscale config object -f FS_Name -m FS_Mountpoint
      ./spectrumscale config object -o Object_Fileset 
    4. Enable the new protocol using the following commands:
      ./spectrumscale deploy --precheck
      ./spectrumscale deploy

Adding protocols to a cluster containing ESS

For information on preparing a cluster that contains ESS for deploying protocols, see Preparing a cluster that contains ESS for adding protocols.

After you have prepared your cluster that contains ESS for adding protocols, you can use commands similar to the ones listed in the Deploying protocols on an existing cluster section.

Diagnosing an error during install, deploy, or upgrade

  1. Note the screen output indicating the error. This helps in narrowing down the general failure.

    When a failure occurs, the screen output also shows the log file containing the failure.

  2. Open the log file in an editor such as vi.
  3. Go to the end of the log file and search upwards for the text FATAL.
  4. Find the topmost occurrence of FATAL (or first FATAL error that occurred) and look above and below this error for further indications of the failure.
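Alternatively, instead of searching within an editor as in steps 2 through 4, you can list all FATAL entries with their line numbers by using a command similar to the following (the log file path is a placeholder for the path reported in the screen output):

  grep -n FATAL Log_File_Path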

For more information, see Finding deployment related error messages more easily and using them for failure analysis.

Location

/usr/lpp/mmfs/4.2.2.0/installer