Adding nodes to the Elastic Storage Server (ESS) cluster using the installation toolkit
In the second phase of incorporating IBM Storage Scale Erasure Code Edition in an ESS cluster, use the installation toolkit to create a cluster definition file that installs the Erasure Code Edition candidate nodes in the ESS cluster as generic IBM Storage Scale nodes.
Note: The steps in this phase must be performed on the Erasure Code Edition candidate nodes, not on the ESS nodes.
- From IBM® Fix Central, download the IBM Storage Scale Advanced Edition 5.x.y.z installation package. You must download this package to the node that you plan to use as your installer node for the IBM Storage Scale Advanced Edition installation and the subsequent IBM Storage Scale Erasure Code Edition installation. The installer node must also be a node that you plan to add to the existing ESS cluster.
- Extract the IBM Storage Scale Advanced Edition 5.x.y.z installation package to the default directory or a directory of your choice on the node that you plan to use as the installer node. A sketch of running the self-extracting package follows the package path below.
/DirectoryPathToDownloadedCode/Spectrum_Scale_Advanced-5.x.y.z-x86_64-Linux-install
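The downloaded package is a self-extracting binary. The following is a minimal sketch of extracting it, assuming the package was saved under /DirectoryPathToDownloadedCode and that the --silent option is used to accept the license and extract to the default /usr/lpp/mmfs/5.x.y.z directory:
# cd /DirectoryPathToDownloadedCode
# chmod +x Spectrum_Scale_Advanced-5.x.y.z-x86_64-Linux-install
# ./Spectrum_Scale_Advanced-5.x.y.z-x86_64-Linux-install --silent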
- Change to the default directory of the installation toolkit.
# cd /usr/lpp/mmfs/5.x.y.z/ansible-toolkit
- Set up the installer node and set the setup type to ess. In this command example, 198.51.100.1 is the IP address of the scale-out node that you plan to designate as the installer node. A note about the port used for package distribution follows the command output.
# ./spectrumscale setup -s 198.51.100.1 -st ess
[ INFO ] Installing prerequisites for install node
[ INFO ] Found existing Ansible installation on system.
[ INFO ] Install Toolkit setup type is set to ESS. This mode will allow the EMS node to execute Install Toolkit commands.
[ INFO ] Your Ansible control node has been configured to use the IP 198.51.100.1 to communicate with other nodes.
[ INFO ] Port 10080 will be used for package distribution.
[ INFO ] SUCCESS
[ INFO ] Tip : Designate an EMS node as admin node: ./spectrumscale node add <node> -a
[ INFO ] Tip : After designating an EMS node, add nodes for the toolkit to act upon: ./spectrumscale node add <node> -p -n
[ INFO ] Tip : After designating the EMS node, if you want to populate the cluster definition file with the current configuration, you can run: ./spectrumscale config populate -N <ems_node>
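The setup output indicates that port 10080 is used for package distribution. If a firewall is active on the installer node, this port must be reachable from the other nodes. A minimal sketch, assuming firewalld is in use (adjust to your firewall configuration):
# firewall-cmd --permanent --add-port=10080/tcp
# firewall-cmd --reload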
- Add the existing EMS node to the cluster definition as an admin, quorum, and EMS node. An optional alternative that uses config populate is noted after the output.
# ./spectrumscale node add ess.example.com -a -q -e
[ INFO ] Adding node ess.example.com as a GPFS node. [ INFO ] Adding node ess.example.com as a quorum node. [ INFO ] Setting ess.example.com as an admin node. [ INFO ] Configuration updated. [ INFO ] Setting ess.example.com as an ESS node. [ INFO ] Configuration updated. # ./spectrumscale node list [ INFO ] List of nodes in current configuration: [ INFO ] [Installer Node] [ INFO ] 198.51.100.1 [ INFO ] [ INFO ] [Cluster Details] [ INFO ] Name: scalecluster.example.com [ INFO ] Setup Type: ESS [ INFO ] [ INFO ] [Extended Features] [ INFO ] File Audit logging : Disabled [ INFO ] Watch folder : Disabled [ INFO ] Management GUI : Enabled [ INFO ] Performance Monitoring : Disabled [ INFO ] Callhome : Disabled [ INFO ] [ INFO ] GPFS Admin Quorum Manager NSD Protocol GUI Perf Mon EMS OS Arch [ INFO ] Node Node Node Node Server Node Server Collector [ INFO ] ess.example.com X X X rhel7 ppc64le [ INFO ] [ INFO ] [Export IP address] [ INFO ] No export IP addresses configured
- Add the IBM Storage Scale Erasure Code Edition candidate nodes as generic nodes. A sketch for adding the nodes in a loop follows this step.
# ./spectrumscale node add 198.51.100.1
[ INFO ] Adding node node1.example.com as a GPFS node.
# ./spectrumscale node add 198.51.100.2
[ INFO ] Adding node node2.example.com as a GPFS node.
# ./spectrumscale node add 198.51.100.3
[ INFO ] Adding node node3.example.com as a GPFS node.
# ./spectrumscale node add 198.51.100.4
[ INFO ] Adding node node4.example.com as a GPFS node.
# ./spectrumscale node add 198.51.100.5
[ INFO ] Adding node node5.example.com as a GPFS node.
# ./spectrumscale node add 198.51.100.6
[ INFO ] Adding node node6.example.com as a GPFS node.
Verify the node details.
# ./spectrumscale node list
[ INFO ] List of nodes in current configuration:
[ INFO ] [Installer Node]
[ INFO ] 198.51.100.1
[ INFO ]
[ INFO ] [Cluster Details]
[ INFO ] Name: scalecluster.example.com
[ INFO ] Setup Type: ESS
[ INFO ]
[ INFO ] [Extended Features]
[ INFO ] File Audit logging     : Disabled
[ INFO ] Watch folder           : Disabled
[ INFO ] Management GUI         : Enabled
[ INFO ] Performance Monitoring : Enabled
[ INFO ] Callhome               : Disabled
[ INFO ]
[ INFO ] GPFS                Admin  Quorum  Manager  NSD     Protocol  GUI     Perf Mon    EMS  OS     Arch
[ INFO ] Node                Node   Node    Node     Server  Node      Server  Collector
[ INFO ] ess.example.com     X      X                                                      X    rhel7  ppc64le
[ INFO ] node1.example.com                                                                      rhel7  x86_64
[ INFO ] node2.example.com                                                                      rhel7  x86_64
[ INFO ] node3.example.com                                                                      rhel7  x86_64
[ INFO ] node4.example.com                                                                      rhel7  x86_64
[ INFO ] node5.example.com                                                                      rhel7  x86_64
[ INFO ] node6.example.com                                                                      rhel7  x86_64
[ INFO ]
[ INFO ] [Export IP address]
[ INFO ] No export IP addresses configured
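If you have many candidate nodes, you can add them with a shell loop instead of issuing the command once per node. A minimal sketch, assuming the candidate node IP addresses used in this example:
# for ip in 198.51.100.1 198.51.100.2 198.51.100.3 198.51.100.4 198.51.100.5 198.51.100.6; do ./spectrumscale node add $ip; done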
- Perform an installation precheck by using the installation toolkit.
# ./spectrumscale install -pr
[ INFO ] Logging to file: /usr/lpp/mmfs/5.x.y.z/ansible-toolkit/logs/INSTALL-PRECHECK-02-02-2021_13:17:42.log
[ INFO ] Validating configuration
[ WARN ] No NSD servers specified. The install toolkit will continue without creating any NSDs. If you still want to continue, please ignore this warning. Otherwise, for information on adding a node as an NSD server, see: 'http://www.ibm.com/support/knowledgecenter/STXKQY_5.0.3/com.ibm.spectrum.scale.v5r03.doc/bl1ins_configuringgpfs.htm'
[ INFO ] Performing GPFS checks.
[ INFO ] Running environment checks
[ WARN ] No manager nodes specified. Assuming managers already configured on ESS.gpfs.net
…
[ INFO ] Checking pre-requisites for portability layer.
[ INFO ] GPFS precheck OK
[ INFO ] Performing Performance Monitoring checks.
[ INFO ] Running environment checks for Performance Monitoring
[ INFO ] Performing FILE AUDIT LOGGING checks.
[ INFO ] Running environment checks for file Audit logging
[ INFO ] Network check from admin node node1.example.com to all other nodes in the cluster passed
[ WARN ] Ephemeral port range is not set. Please set valid ephemeral port range using the command ./spectrumscale config gpfs --ephemeral_port_range . You may set the default values as 60000-61000
[ INFO ] The install toolkit will not configure call home as it is disabled. To enable call home, use the following CLI command: ./spectrumscale callhome enable
[ INFO ] Pre-check successful for install.
[ INFO ] Tip : ./spectrumscale install
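The precheck warns that the ephemeral port range is not set. You can set it with the command that the warning suggests, for example with the default values of 60000-61000:
# ./spectrumscale config gpfs --ephemeral_port_range 60000-61000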
- Install the nodes that are defined in the cluster definition by using the installation toolkit.
# ./spectrumscale install
[ INFO ] Logging to file: /usr/lpp/mmfs/5.x.y.z/ansible-toolkit/logs/INSTALL-02-02-2021_18:18:29.log
[ INFO ] Validating configuration
[ WARN ] No NSD servers specified. The install toolkit will continue without creating any NSDs. If you still want to continue, please ignore this warning. Otherwise, for information on adding a node as an NSD server, see: 'http://www.ibm.com/support/knowledgecenter/STXKQY_5.0.3/com.ibm.spectrum.scale.v5r03.doc/bl1ins_configuringgpfs.htm'
[ INFO ] Running pre-install checks
[ INFO ] Running environment checks
[ INFO ] The following nodes will be added to cluster scalecluster.example.com: node1.example.com, node2.example.com, node3.example.com, node4.example.com, node5.example.com, node6.example.com, ess.example.com,
[ WARN ] No manager nodes specified. Assuming managers already configured on ESS.gpfs.net.
…
[ INFO ] Checking for a successful install
[ INFO ] Checking state of GPFS
[ INFO ] GPFS callhome has been successfully installed. To configure callhome run 'mmcallhome -h' on one of your nodes.
[ INFO ] Checking state of GPFS on all nodes
[ INFO ] GPFS active on all nodes
[ INFO ] GPFS ACTIVE
[ INFO ] Checking state of Performance Monitoring
[ INFO ] Running Performance Monitoring post-install checks
[ WARN ] Historical performance data is still kept on: node1.example.com in the '/opt/IBM/zimon/data' directory. For documentation on migrating the data to the new Performance Monitoring collectors: refer to the IBM Spectrum Scale Knowledge Center.
[ INFO ] pmcollector running on all nodes
[ INFO ] pmsensors running on all nodes
[ INFO ] Performance Monitoring ACTIVE
[ INFO ] SUCCESS
[ INFO ] All services running
[ INFO ] StanzaFile and NodeDesc file for NSD, filesystem, and cluster setup have been saved to /usr/lpp/mmfs folder on node: ess.example.com
[ INFO ] Installation successful. 7 GPFS nodes active in cluster scalecluster.example.com. Completed in 6 minutes 6 seconds.
[ INFO ] Tip : If all node designations and any required protocol configurations are complete, proceed to check the deploy configuration: ./spectrumscale deploy --precheck
- Verify that the installation completed successfully by issuing the following command. An independent check with GPFS commands is sketched after the output.
# ./spectrumscale install -po
[ INFO ] Logging to file: /usr/lpp/mmfs/5.x.x.x/installer/logs/INSTALL-POSTCHECK-06-08-2019_13:25:31.log
[ WARN ] No NSD servers specified. The install toolkit will continue without creating any NSDs. If you still want to continue, please ignore this warning. Otherwise, for information on adding a node as an NSD server, see: 'http://www.ibm.com/support/knowledgecenter/STXKQY_5.0.3/com.ibm.spectrum.scale.v5r03.doc/bl1ins_configuringgpfs.htm'
[ INFO ] Checking state of GPFS
[ INFO ] GPFS callhome has been successfully installed. To configure callhome run 'mmcallhome -h' on one of your nodes.
[ INFO ] Checking state of GPFS on all nodes
[ INFO ] GPFS active on all nodes
[ INFO ] GPFS ACTIVE
[ INFO ] Checking state of Performance Monitoring
[ INFO ] Running Performance Monitoring post-install checks
[ WARN ] Historical performance data is still kept on: ess.example.com in the '/opt/IBM/zimon/data' directory. For documentation on migrating the data to the new Performance Monitoring collectors: refer to the IBM Spectrum Scale Knowledge Center.
[ INFO ] pmcollector running on all nodes
[ INFO ] pmsensors running on all nodes
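In addition to the toolkit postcheck, you can confirm the cluster state directly with the GPFS commands, for example:
# mmgetstate -a
# mmlscluster
The mmgetstate -a output should report all nodes, including the newly added candidate nodes, in the active state, and mmlscluster should list them as cluster members.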
Before you proceed to the next phase, remove the Advanced Edition license package from the IBM Storage Scale Erasure Code Edition candidate nodes by using the following command. A sketch for verifying the removal follows the command.
# mmdsh -N ListofECECandidateNodes "rpm -e gpfs.license.adv"
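To confirm that the package was removed from every candidate node, you can query it with the same mmdsh pattern; ListofECECandidateNodes is the same placeholder for your list of candidate nodes:
# mmdsh -N ListofECECandidateNodes "rpm -q gpfs.license.adv"
Each node should report that gpfs.license.adv is not installed.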