Adding a new recovery group to an existing IBM Storage Scale Erasure Code Edition cluster
The newly added servers must meet the IBM Storage Scale Erasure Code Edition hardware requirements.
For more information, see IBM Storage Scale Erasure Code Edition hardware requirements.
Note: Any new hardware (especially disks) that is used in the following procedure must be tested before it is put into production to ensure hardware quality.
Use the following steps to add a new recovery group to the existing IBM Storage Scale Erasure Code Edition cluster.
- From IBM® Fix Central, download the IBM Storage Scale Advanced Edition 5.x.y.z installation package. The version of this package must match the version of the IBM Storage Scale Erasure Code Edition cluster. You must download this package to the node that you plan to use as your installer node for the IBM Storage Scale Advanced Edition installation and the subsequent IBM Storage Scale Erasure Code Edition installation. Also, use a node that you plan to add to the existing IBM Storage Scale Erasure Code Edition cluster.
- Extract the IBM Storage Scale Advanced Edition 5.x.y.z installation package to the default directory or a directory of your choice on the node that you plan to use as the installer node.
# /DirectoryPathToDownloadedCode/Spectrum_Scale_Advanced-5.x.y.z-x86_64-Linux-install --text-only
- Change the directory to the default directory for the installation toolkit.
# cd /usr/lpp/mmfs/5.x.y.z/ansible-toolkit/
- Set up the installer node by using the following command:
# ./spectrumscale setup -s 192.0.2.6 -st ece
Note: In this command example, 192.0.2.6 is the IP address of the scale-out node that is planned to be designated as the installer node.
- Issue the config populate command to populate the existing IBM Storage Scale Advanced Edition cluster configuration.
# ./spectrumscale config populate ece-node2
In this command example, ece-node2 is the IBM Storage Scale Advanced Edition recovery group server.
- Add the IBM Storage Scale Advanced Edition candidate nodes.
# ./spectrumscale node add 192.0.2.6 -so
# ./spectrumscale node add 192.0.2.7 -so
# ./spectrumscale node add 192.0.2.8 -so
# ./spectrumscale node add 192.0.2.9 -so
- Verify the nodes.
# ./spectrumscale node list
[ INFO ] List of nodes in current configuration:
[ INFO ] [Installer Node]
[ INFO ] 192.0.2.6
[ INFO ]
[ INFO ] [Cluster Details]
[ INFO ] Name: ece-node8
[ INFO ] Setup Type: Erasure Code Edition
[ INFO ]
[ INFO ] [Extended Features]
[ INFO ] File Audit logging : Disabled
[ INFO ] Watch folder : Disabled
[ INFO ] Management GUI : Disabled
[ INFO ] Performance Monitoring : Enabled
[ INFO ] Callhome : Disabled
[ INFO ]
[ INFO ] GPFS Admin Quorum Manager NSD Protocol Perf Mon Scale-out OS Arch
[ INFO ] Node Node Node Node Server Node Collector Node
[ INFO ] ece-node1 X X rhel7 x86_64
[ INFO ] ece-node2 X X X X rhel7 x86_64
[ INFO ] ece-node3 X X X rhel7 x86_64
[ INFO ] ece-node4 X X rhel7 x86_64
[ INFO ] ece-node5 X X rhel7 x86_64
[ INFO ] ece-node6 X rhel7 x86_64
[ INFO ] ece-node7 X rhel7 x86_64
[ INFO ] ece-node8 X rhel7 x86_64
[ INFO ] ece-node9 X rhel7 x86_64
[ INFO ]
[ INFO ] [Export IP address]
[ INFO ] No export IP addresses configured
- Perform an installation pre-check.
# ./spectrumscale install -pr
- Run the installation procedure.
# ./spectrumscale install
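Optionally, you can confirm that GPFS is active on all nodes before you continue, for example by checking the daemon state:
# mmgetstate -a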
- If needed, change the quorum node designations after the new nodes are added. In the following example, the quorum role is moved from ece-node2 to ece-node6.
# mmchnode --nonquorum -N ece-node2
# mmchnode --quorum -N ece-node6
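You can confirm the resulting quorum and manager designations by listing the cluster configuration, for example:
# mmlscluster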
- Verify the disk slot location of the new server. For more information, see Mapping NVMe disk slot location and Mapping LMR disk location.
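As a quick sanity check before you map slot locations, you can list the drives that the operating system detects on each new server. The following is a minimal example that assumes the servers use NVMe drives and that the nvme-cli package is installed:
# nvme list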
- Define the new recovery group with the newly added nodes.
# ./spectrumscale recoverygroup define -N ece-node6,ece-node7,ece-node8,ece-node9
You can verify the defined recovery groups by running the following command:
# ./spectrumscale recoverygroup list
- Run the installation procedure again.
# ./spectrumscale install
- Check the new recovery group information.
# mmvdisk rg list
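For more detail about the new recovery group, such as its servers and declustered arrays, you can list it explicitly. The following example assumes the new recovery group is named rg_2, as in the vdiskset examples that follow:
# mmvdisk rg list --rg rg_2 --all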
- Define a new vdiskset by using one of the following methods:
- Copy an existing recovery group's vdiskset configuration to the new vdiskset.
# mmvdisk vdiskset define --vs VS02 --copy VS01 --rg rg_2
In this command example, VS01 is the existing vdiskset.
- Define it by specifying the attributes explicitly.
# mmvdisk vs define --vs VS02 --rg rg_2 --code 8+2p --bs 8M --da DA1 --set-size 90%
- Define it by specifying a new data pool.
# mmvdisk vs define --vs VS02 --rg rg_2 --code 8+2p --bs 8M --da DA1 --set-size 90% --nsd-usage dataonly --sp data2
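Before you create the vdisks, you can review the new vdiskset definition and the declustered array capacity that it will consume; for example, for the VS02 vdiskset defined above:
# mmvdisk vs list --vs VS02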
- Create new vdisks.
# mmvdisk vs create --vs all
- For file system operations, do either of the following steps:
- Create a file system.
# mmvdisk fs create --fs gpfs2 --vs VS02
- Add the vdiskset to an existing file system.
# mmvdisk fs add --fs gpfs1 --vs VS02
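After either operation, you can confirm that the file system includes the NSDs from the new vdiskset; for example, for the gpfs1 file system:
# mmlsdisk gpfs1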
- If needed, restripe the file system, for example to rebalance existing data onto the new disks (see the example that follows). Note: For more information about adding a vdiskset to an existing file system, see the Modifying file system attributes, Restriping a GPFS file system, and Changing GPFS disk parameters topics in the IBM Storage Scale: Administration Guide.
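The following is a minimal sketch that assumes the gpfs1 file system from the previous step; the -b option rebalances existing data across all disks of the file system, which can generate significant I/O, so run it at a convenient time:
# mmrestripefs gpfs1 -b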