Completing the IBM Storage Scale Erasure Code Edition configuration with mmvdisk commands
In the fourth phase of incorporating IBM Storage Scale Erasure Code Edition in an ESS cluster, use mmvdisk commands from any mmvdisk-enabled node in the cluster to complete the configuration of the IBM Storage Scale Erasure Code Edition cluster.
- Create the IBM Storage Scale Erasure Code Edition node class from the candidate scale-out nodes that you deployed earlier.
  # mmvdisk nc create --node-class ece_nc1 -N node1,node2,node3,node4,node5,node6
  mmvdisk: Node class 'ece_nc1' created.
- Configure the IBM Storage Scale Erasure Code Edition node class and restart GPFS.
  # mmvdisk server configure --node-class ece_nc1 --recycle one
  mmvdisk: Checking resources for specified nodes.
  mmvdisk: Node class 'ece_nc1' has a scale-out recovery group disk topology.
  mmvdisk: Using 'default.scale-out' RG configuration for topology 'ECE 2 HDD'.
  mmvdisk: Setting configuration for node class 'ece_nc1'.
  mmvdisk: Node class 'ece_nc1' is now configured to be recovery group servers.
  mmvdisk: Restarting GPFS daemon on node 'node1'.
  mmvdisk: Restarting GPFS daemon on node 'node2'.
  mmvdisk: Restarting GPFS daemon on node 'node4'.
  mmvdisk: Restarting GPFS daemon on node 'node3'.
  mmvdisk: Restarting GPFS daemon on node 'node6'.
  mmvdisk: Restarting GPFS daemon on node 'node5'.
Note: The --recycle one option restarts GPFS on one node at a time to enable the new configuration. Be careful when you use the --recycle all option. When you use this option, the mmvdisk command asks for the following confirmation on the console:
  # mmvdisk server configure --update --nc nc_1 --recycle all
  mmvdisk: This command will shutdown GPFS on multiple nodes at the same time.
  mmvdisk: It is possible to lose quorum and cluster availability.
  mmvdisk: It is possible to lose file system or recovery group availability.
  mmvdisk: Do you wish to continue (yes or no)?
If you answer "yes", the command restarts all the nodes at the same time, which might cause the cluster to lose quorum or file system availability.
Verify the node class details.
  # mmvdisk nc list
  node class  recovery groups
  ----------  --------------------------
  ece_nc1     -
  ess_nc1     rg_gssio1-ib, rg_gssio2-ib
- Configure and create the recovery group.
  # mmvdisk rg create --rg ece_rg1 --nc ece_nc1
  mmvdisk: Checking node class configuration.
  mmvdisk: Checking daemon status on node 'node1.example.com'.
  mmvdisk: Checking daemon status on node 'node4.example.com'.
  mmvdisk: Checking daemon status on node 'node5.example.com'.
  mmvdisk: Checking daemon status on node 'node6.example.com'.
  mmvdisk: Checking daemon status on node 'node3.example.com'.
  mmvdisk: Checking daemon status on node 'node2.example.com'.
  mmvdisk: Analyzing disk topology for node 'node1.example.com'.
  mmvdisk: Analyzing disk topology for node 'node4.example.com'.
  mmvdisk: Analyzing disk topology for node 'node5.example.com'.
  mmvdisk: Analyzing disk topology for node 'node6.example.com'.
  mmvdisk: Analyzing disk topology for node 'node3.example.com'.
  mmvdisk: Analyzing disk topology for node 'node2.example.com'.
  mmvdisk: Creating recovery group 'ece_rg1'.
  mmvdisk: Formatting log vdisks for recovery group.
  mmvdisk: (mmcrvdisk) [I] Processing vdisk RG003ROOTLOGHOME
  mmvdisk: (mmcrvdisk) [I] Processing vdisk RG003LG001LOGHOME
  mmvdisk: (mmcrvdisk) [I] Processing vdisk RG003LG002LOGHOME
  mmvdisk: (mmcrvdisk) [I] Processing vdisk RG003LG003LOGHOME
  mmvdisk: (mmcrvdisk) [I] Processing vdisk RG003LG004LOGHOME
  mmvdisk: (mmcrvdisk) [I] Processing vdisk RG003LG005LOGHOME
  mmvdisk: (mmcrvdisk) [I] Processing vdisk RG003LG006LOGHOME
  mmvdisk: (mmcrvdisk) [I] Processing vdisk RG003LG007LOGHOME
  mmvdisk: (mmcrvdisk) [I] Processing vdisk RG003LG008LOGHOME
  mmvdisk: (mmcrvdisk) [I] Processing vdisk RG003LG009LOGHOME
  mmvdisk: (mmcrvdisk) [I] Processing vdisk RG003LG010LOGHOME
  mmvdisk: (mmcrvdisk) [I] Processing vdisk RG003LG011LOGHOME
  mmvdisk: (mmcrvdisk) [I] Processing vdisk RG003LG012LOGHOME
  mmvdisk: Created recovery group 'ece_rg1'.
Verify the recovery group details.
  # mmvdisk rg list
                                                             needs    user
  recovery group  active  current or master server  service  vdisks   remarks
  --------------  ------  ------------------------  -------  ------   -------
  ece_rg1         yes     node1.example.com         no       0
  rg_gssio1-ib    yes     gssio1-ib.example.com     no       1
  rg_gssio2-ib    yes     gssio2-ib.example.com     no       1
- Define the vdisk sets with the desired parameters.
- In this command example, the IBM Storage Scale Erasure Code Edition vdisk set is defined in a dataOnly storage pool that is separate from the existing ESS pool. The ESS pool in this case is the system pool, which is defined as dataAndMetadata.
- Make sure that you use the same block size (16 MiB in this case) as the existing ESS file system if you are merging this vdisk set into that file system.
# mmvdisk vs define --vs ece_vs1 --rg ece_rg1 --code 8+2p --block-size 16M --set-size 80% --storage-pool ece_pool_1 --nsd-usage dataOnly
mmvdisk: Vdisk set 'ece_vs1' has been defined.
mmvdisk: Recovery group 'ece_rg1' has been defined in vdisk set 'ece_vs1'.

                 member vdisks
vdisk set   count  size     raw size  created  file system and attributes
---------   -----  -------  --------  -------  --------------------------
ece_vs1     12     62 GiB   80 GiB    no       -, DA1, 8+2p, 16 MiB, dataOnly, ece_pool_1

                declustered            capacity               all vdisk sets defined
recovery group  array        type  total raw  free raw  free%  in the declustered array
--------------  -----------  ----  ---------  --------  -----  ------------------------
ece_rg1         DA1          HDD   1213 GiB   253 GiB   20%    ece_vs1

            vdisk set map memory per server
node class  available  required  required per vdisk set
----------  ---------  --------  ----------------------
ece_nc1     8996 MiB   390 MiB   ece_vs1 (2304 KiB)
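The size and raw size columns above follow directly from the 8+2p code: a vdisk's raw footprint is its usable size scaled by (data + parity) / data strips, and each 16 MiB file system block is split into eight 2 MiB data strips plus two parity strips. A quick arithmetic sketch (plain Python, not an mmvdisk interface; the small gap to the reported 80 GiB raw size is vdisk metadata and checksum overhead):

```python
# Approximate raw capacity of an N+Kp erasure-coded vdisk set:
# usable size scaled by total strips over data strips.
def raw_size_gib(usable_gib: float, data_strips: int, parity_strips: int) -> float:
    return usable_gib * (data_strips + parity_strips) / data_strips

# 'ece_vs1' above: 62 GiB usable at 8+2p.
print(raw_size_gib(62, 8, 2))  # 77.5 -- mmvdisk reports 80 GiB once
                               # checksum and vdisk metadata overhead
                               # are included.

# A 16 MiB block at 8+2p is written as eight 2 MiB data strips
# plus two 2 MiB parity strips.
print(16 // 8)  # 2 (MiB per data strip)
```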
- Create the vdisks, NSDs, and the vdisk set from the defined storage.
  # mmvdisk vs create --vs ece_vs1
  mmvdisk: 12 vdisks and 12 NSDs will be created in vdisk set 'ece_vs1'.
  mmvdisk: (mmcrvdisk) [I] Processing vdisk RG003LG001VS003
  mmvdisk: (mmcrvdisk) [I] Processing vdisk RG003LG002VS003
  mmvdisk: (mmcrvdisk) [I] Processing vdisk RG003LG003VS003
  mmvdisk: (mmcrvdisk) [I] Processing vdisk RG003LG004VS003
  mmvdisk: (mmcrvdisk) [I] Processing vdisk RG003LG005VS003
  mmvdisk: (mmcrvdisk) [I] Processing vdisk RG003LG006VS003
  mmvdisk: (mmcrvdisk) [I] Processing vdisk RG003LG007VS003
  mmvdisk: (mmcrvdisk) [I] Processing vdisk RG003LG008VS003
  mmvdisk: (mmcrvdisk) [I] Processing vdisk RG003LG009VS003
  mmvdisk: (mmcrvdisk) [I] Processing vdisk RG003LG010VS003
  mmvdisk: (mmcrvdisk) [I] Processing vdisk RG003LG011VS003
  mmvdisk: (mmcrvdisk) [I] Processing vdisk RG003LG012VS003
  mmvdisk: Created all vdisks in vdisk set 'ece_vs1'.
  mmvdisk: (mmcrnsd) Processing disk RG003LG001VS003
  mmvdisk: (mmcrnsd) Processing disk RG003LG002VS003
  mmvdisk: (mmcrnsd) Processing disk RG003LG003VS003
  mmvdisk: (mmcrnsd) Processing disk RG003LG004VS003
  mmvdisk: (mmcrnsd) Processing disk RG003LG005VS003
  mmvdisk: (mmcrnsd) Processing disk RG003LG006VS003
  mmvdisk: (mmcrnsd) Processing disk RG003LG007VS003
  mmvdisk: (mmcrnsd) Processing disk RG003LG008VS003
  mmvdisk: (mmcrnsd) Processing disk RG003LG009VS003
  mmvdisk: (mmcrnsd) Processing disk RG003LG010VS003
  mmvdisk: (mmcrnsd) Processing disk RG003LG011VS003
  mmvdisk: (mmcrnsd) Processing disk RG003LG012VS003
  mmvdisk: Created all NSDs in vdisk set 'ece_vs1'.
- From any mmvdisk-enabled node in the cluster, add the new vdisk set to the existing file system.
  # mmvdisk fs add --fs ecefs1 --vs ece_vs1
  mmvdisk: Creating file system 'ecefs1'.
  mmvdisk: The following disks of ecefs1 will be formatted on node gssio2.example.com:
  mmvdisk: RG003LG001VS003: size 64000 MB
  mmvdisk: RG003LG002VS003: size 64000 MB
  mmvdisk: RG003LG003VS003: size 64000 MB
  mmvdisk: RG003LG004VS003: size 64000 MB
  mmvdisk: RG003LG005VS003: size 64000 MB
  mmvdisk: RG003LG006VS003: size 64000 MB
  mmvdisk: RG003LG007VS003: size 64000 MB
  mmvdisk: RG003LG008VS003: size 64000 MB
  mmvdisk: RG003LG009VS003: size 64000 MB
  mmvdisk: RG003LG010VS003: size 64000 MB
  mmvdisk: RG003LG011VS003: size 64000 MB
  mmvdisk: RG003LG012VS003: size 64000 MB
  mmvdisk: Extending Allocation Map
  mmvdisk: Creating Allocation Map for storage pool ece_pool_1
  mmvdisk: Flushing Allocation Map for storage pool ece_pool_1
  mmvdisk: Disks up to size 966.97 GB can be added to storage pool ece_pool_1.
  mmvdisk: Checking Allocation Map for storage pool ece_pool_1
  mmvdisk: Completed adding disks to file system ecefs1.
- Verify the following entities from any mmvdisk-enabled node.
  - File system details:
    # mmvdisk fs list
    file system  vdisk sets
    -----------  ----------
    ecefs1       VS001_essFS, VS002_essFS, ece_vs1
  - Storage pools in the file system:
    # mmlspool ecefs1
    Storage pools in file system at '/gpfs/ecefs1':
    Name        Id     BlkSize  Data  Meta  Total Data in (KB)  Free Data in (KB)    Total Meta in (KB)  Free Meta in (KB)
    system      0      16 MB    yes   yes   12501204992         12496994304 (100%)   12501204992         12497076224 (100%)
    ece_pool_1  65537  16 MB    yes   no    786432000           785252352 (100%)     0                   0 (  0%)
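The ece_pool_1 totals reported by mmlspool tie back to the earlier steps: the pool holds the twelve 64000 MB NSDs that mmvdisk fs add formatted. A small sanity check in plain Python (arithmetic only; no Storage Scale API is involved):

```python
# 'mmvdisk fs add' formatted 12 NSDs of 64000 MB each into ece_pool_1.
nsd_count = 12
nsd_size_mb = 64000

# mmlspool reports pool capacity in KB.
total_data_kb = nsd_count * nsd_size_mb * 1024
print(total_data_kb)  # 786432000, matching 'Total Data in (KB)' for ece_pool_1
```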
  - Recovery groups:
    # mmvdisk rg list
                                                               needs    user
    recovery group  active  current or master server  service  vdisks   remarks
    --------------  ------  ------------------------  -------  ------   -------
    ece_rg1         yes     node1.example.com         no       12
    rg_gssio1-ib    yes     gssio1-ib.example.com     no       1
    rg_gssio2-ib    yes     gssio2-ib.example.com     no       1
  - pdisks for the new recovery group ece_rg1:
    # mmvdisk pdisk list --rg ece_rg1
                              declustered
    recovery group  pdisk     array        paths  capacity  free space  FRU (type)  state
    --------------  --------  -----------  -----  --------  ----------  ----------  -----
    ece_rg1         n013p001  DA1          1      136 GiB   44 GiB      42D0623     ok
    ece_rg1         n013p002  DA1          1      136 GiB   44 GiB      42D0422     ok
    ece_rg1         n014p001  DA1          1      136 GiB   44 GiB      42D0623     ok
    ece_rg1         n014p002  DA1          1      136 GiB   44 GiB      42D0422     ok
    ece_rg1         n015p001  DA1          1      136 GiB   44 GiB      42D0623     ok
    ece_rg1         n015p002  DA1          1      136 GiB   44 GiB      42D0422     ok
    ece_rg1         n016p001  DA1          1      136 GiB   44 GiB      42D0623     ok
    ece_rg1         n016p002  DA1          1      136 GiB   44 GiB      42D0422     ok
    ece_rg1         n017p001  DA1          1      136 GiB   44 GiB      42D0623     ok
    ece_rg1         n017p002  DA1          1      136 GiB   44 GiB      42D0422     ok
    ece_rg1         n018p001  DA1          1      136 GiB   44 GiB      22R6802     ok
    ece_rg1         n018p002  DA1          1      136 GiB   44 GiB      42D0422     ok