Defining the cluster topology for the installation toolkit
Use these instructions to set up the cluster definition file prior to installing GPFS and deploying protocols.
- Adding node definitions to the cluster definition file
- Adding and configuring NSD server nodes in the cluster definition file
- Defining file systems
Adding node definitions to the cluster definition file
GPFS node definitions are added to the cluster definition file through the spectrumscale node add command.
- If the installation toolkit is being used from a location outside of any of the nodes to be installed, a GPFS Admin node is required. The Admin node is used to run GPFS cluster-wide commands. To specify a GPFS Admin node in the cluster definition file, use the -a argument:
./spectrumscale node add gpfsnode1 -a
If no GPFS Admin node is specified in the cluster definition file, the node on which the installation toolkit is running is automatically designated as the Admin node. If GUI nodes are to be installed, each GUI node must also be marked as an Admin node.
The role of an Admin node with regard to the installation toolkit is to coordinate the installation, deployment, and upgrade. This node also acts as a central repository for all IBM Spectrum Scale packages. For larger clusters, it is important to choose an Admin node with ample network bandwidth to all other nodes in the cluster.
- To add GPFS Client nodes to the cluster definition file, provide no arguments:
./spectrumscale node add gpfsnode1
- To add GPFS manager nodes to the cluster definition file, use the -m argument:
./spectrumscale node add gpfsnode2 -m
If no manager nodes are added to the cluster definition, the installation toolkit automatically designates manager nodes using the following algorithm:
- First, all protocol nodes in the cluster definition are designated as manager nodes.
- If there are no protocol nodes, all NSD nodes in the cluster definition are designated as manager nodes.
- If there are no NSD nodes, all nodes in the cluster definition are designated as manager nodes.
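The manager-node fallback described above can be sketched in Python. This is purely illustrative: the dictionary of node names and role sets is a hypothetical representation, not the toolkit's internal data model.

```python
def designate_managers(nodes):
    """Pick manager nodes per the documented fallback order:
    protocol nodes first, then NSD server nodes, then all nodes."""
    protocol = [n for n, roles in nodes.items() if "protocol" in roles]
    if protocol:
        return protocol
    nsd = [n for n, roles in nodes.items() if "nsd" in roles]
    if nsd:
        return nsd
    return list(nodes)

# Example: no protocol nodes are defined, so the NSD server becomes a manager.
cluster = {"gpfsnode1": set(), "gpfsnode4": {"nsd"}}
print(designate_managers(cluster))  # ['gpfsnode4']
```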
- GPFS quorum nodes are added to the cluster definition using the -q argument.
./spectrumscale node add gpfsnode3 -q
If no quorum nodes are added to the cluster definition, the installation toolkit automatically designates quorum nodes using the following algorithm:
- If the number of nodes in the cluster definition is less than 4, all nodes are designated as quorum nodes.
- If the number of nodes in the cluster definition is between 4 and 9 inclusive, 3 nodes are designated as quorum nodes.
- If the number of nodes in the cluster definition is between 10 and 18 inclusive, 5 nodes are designated as quorum nodes.
- If the number of nodes in the cluster definition is greater than 18, 7 nodes are designated as quorum nodes.
This algorithm preferentially selects NSD nodes as quorum nodes. If the number of NSD nodes is less than the number of quorum nodes to be designated then any other nodes are selected until the number of quorum nodes is satisfied.
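The quorum thresholds and NSD preference above can be expressed as a short Python sketch. The node representation is hypothetical; only the counting rules and the NSD-first selection come from the text.

```python
def quorum_count(num_nodes):
    """Number of quorum nodes per the documented thresholds."""
    if num_nodes < 4:
        return num_nodes   # fewer than 4 nodes: all are quorum nodes
    if num_nodes <= 9:
        return 3
    if num_nodes <= 18:
        return 5
    return 7

def designate_quorum(nodes):
    """Prefer NSD server nodes, then fill with any other nodes."""
    count = quorum_count(len(nodes))
    nsd = [n for n, roles in nodes.items() if "nsd" in roles]
    others = [n for n, roles in nodes.items() if "nsd" not in roles]
    return (nsd + others)[:count]

# Example: 5 nodes, so 3 quorum nodes, with the NSD server chosen first.
cluster = {"a": set(), "b": {"nsd"}, "c": set(), "d": set(), "e": set()}
print(designate_quorum(cluster))  # ['b', 'a', 'c']
```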
- GPFS NSD servers are added to the cluster definition using the -n argument.
./spectrumscale node add gpfsnode4 -n
- GPFS graphical user interface (GUI) servers are added to the cluster definition using the -g argument.
./spectrumscale node add gpfsnode3 -g
A GUI server must also be an Admin node. Use the -a flag:
./spectrumscale node add gpfsnode3 -a
If no nodes are specified as management GUI servers, the GUI is not installed. For redundancy, it is recommended to have at least two management GUI servers, and a maximum of three.
- To display a list of all nodes in the cluster definition file, use the ./spectrumscale node list command. For example:
./spectrumscale node list
[ INFO ] List of nodes in current configuration:
[ INFO ] [Installer Node]
[ INFO ] 9.168.100.1
[ INFO ]
[ INFO ] [Cluster Name]
[ INFO ] gpfscluster01
[ INFO ]
[ INFO ] GPFS Node   Admin  Quorum  Manager  NSD Server  Protocol  GUI Server  OS     Arch
[ INFO ] gpfsnode1   X      X       X                                          rhel7  x86_64
[ INFO ] gpfsnode2                  X                                          rhel7  x86_64
[ INFO ] gpfsnode3   X      X                                      X           rhel7  x86_64
[ INFO ] gpfsnode4          X       X        X                                 rhel7  x86_64
Starting with IBM Spectrum Scale release 4.2.2, the output of the spectrumscale node list command also includes CPU architecture and operating system of the nodes.
If you are upgrading to IBM Spectrum Scale release 4.2.2 or later, ensure that the operating system and architecture fields in the cluster definition file are updated. These fields are automatically updated during the upgrade precheck. You can also update the operating system and CPU architecture fields in the cluster definition file by issuing the spectrumscale config update command. For more information, see Upgrading IBM Spectrum Scale components with the installation toolkit.
- If you want to enable and configure call home by using the installation toolkit, you must add one of the nodes as the call home node as follows:
./spectrumscale node add -c CallHomeNode
- If you want to use the installation toolkit to install GPFS™ and deploy protocols in an Elastic Storage Server (ESS) cluster, you must add the EMS node of the ESS system as follows:
./spectrumscale node add -e EMSNode
For more information, see ESS awareness with the installation toolkit.
- For information on adding nodes to an existing installation, see Adding nodes, NSDs, or file systems to an existing installation.
Adding and configuring NSD server nodes in the cluster definition file
- To configure NSDs, you must first add your NSD server nodes to the configuration:
./spectrumscale node add -n nsdserver1
./spectrumscale node add -n nsdserver2
./spectrumscale node add -n nsdserver3
- Once NSD server nodes are in the configuration, add NSDs to the configuration:
./spectrumscale nsd add /dev/sdb -p nsdserver1 -s nsdserver2,nsdserver3,...
- The installation toolkit supports standalone NSDs, which connect to a single NSD server, and shared NSDs, which connect to both a primary and a secondary NSD server.
- When adding a standalone NSD, skip the secondary NSD server parameter.
- When adding a shared NSD, it is important to know the device name on the node that is to become the primary NSD server. It is not necessary to know the device name on the secondary NSD server because the device is looked up using its UUID.
Note: Although it is not necessary to know the device name on the secondary NSD server, it may be helpful to create a consistent mapping of device names if you are using multipath. For more information, see NSD disk discovery.
Here is an example of adding a shared NSD to the configuration by specifying the device name on the primary server along with the primary and secondary servers:
./spectrumscale nsd add /dev/sdb -p nsdserver1 -s nsdserver2
- The name of the NSD is automatically generated based on the NSD server names. This can be changed after the NSD has been added by using the modify command and specifying a new name with the -n flag; the new name must be unique:
./spectrumscale nsd modify nsd_old_name -n nsd_new_name
- It is possible to view all NSDs currently in the configuration using the list command:
./spectrumscale nsd list
- To remove a single NSD from the configuration, supply the name of the NSD to the delete command:
./spectrumscale nsd delete nsd_name
- To clear all NSDs and start from scratch, use the clear command:
./spectrumscale nsd clear
- Where multiple devices are connected to the same pair of NSD servers, they can be added in bulk, either by providing a list of all devices or by using wild cards:
./spectrumscale nsd add -p nsdserver1 -s nsdserver2 /dev/dm-1 /dev/dm-2 /dev/dm-3
or
./spectrumscale nsd add -p nsdserver1 -s nsdserver2 "/dev/dm-*"
A connection is made to the primary server to expand any wild cards and check that all devices are present. When using wild cards, it is important to ensure that they are properly escaped, as otherwise they may be expanded locally by your shell. If any devices listed cannot be located on the primary server, a warning is displayed, but the command continues to add all other NSDs.
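The escaping point can be demonstrated with any command; here printf stands in for the toolkit (a minimal shell illustration, not a spectrumscale invocation):

```shell
# Quoted: the pattern reaches the command unchanged, so it can be
# expanded remotely on the primary NSD server.
printf '%s\n' "/dev/dm-*"    # prints the literal pattern /dev/dm-*

# Unquoted: the local shell may expand the glob first, so the
# command sees your local device names instead of the pattern.
printf '%s\n' /dev/dm-*
```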
- When adding NSDs, it is good practice to have them distributed such that each pair of NSD servers is equally loaded. This is usually done by using one server as the primary for half of the NSDs, and the other server as the primary for the remainder. To simplify this process, it is possible to add all NSDs at once and then use the balance command to switch the primary and secondary servers on some of the NSDs, as required. A connection is made to the original secondary server to look up device names on that node automatically. To automatically balance a pair of NSD servers, you must specify one of the nodes in that pair:
./spectrumscale nsd add "/dev/dm-*" -p serverA -s serverB
[ INFO ] Connecting to serverA to check devices and expand wildcards.
[ INFO ] Adding NSD serverA_serverB_1 on serverA using device /dev/dm-0.
[ INFO ] Adding NSD serverA_serverB_2 on serverA using device /dev/dm-1.
$ ./spectrumscale nsd list
[ INFO ] Name               FS       Size(GB)  Usage    FG  Pool     Device     Servers
[ INFO ] serverA_serverB_1  Default  13        Default  1   Default  /dev/dm-0  serverA,serverB
[ INFO ] serverA_serverB_2  Default  1         Default  1   Default  /dev/dm-1  serverA,serverB
./spectrumscale nsd balance --node serverA
./spectrumscale nsd list
[ INFO ] Name               FS       Size(GB)  Usage    FG  Pool     Device     Servers
[ INFO ] serverA_serverB_1  Default  13        Default  1   Default  /dev/dm-0  serverB,serverA
[ INFO ] serverA_serverB_2  Default  1         Default  1   Default  /dev/dm-1  serverA,serverB
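The effect of the balance command can be modeled with a short Python sketch: flip the primary and secondary on alternating NSDs of the pair so that each server is primary for roughly half. This is an illustration of the outcome, not the toolkit's implementation, and the NSD records are hypothetical.

```python
def balance(nsds, node):
    """Swap primary/secondary on every other NSD of the server pair
    containing `node`, starting with the first NSD."""
    pair = [n for n in nsds if node in n["servers"]]
    for i, nsd in enumerate(pair):
        if i % 2 == 0:  # flip alternating NSDs
            nsd["servers"].reverse()
    return nsds

nsds = [
    {"name": "serverA_serverB_1", "servers": ["serverA", "serverB"]},
    {"name": "serverA_serverB_2", "servers": ["serverA", "serverB"]},
]
balance(nsds, "serverA")
# serverA_serverB_1 is now served serverB,serverA; the second NSD is unchanged,
# matching the before/after listings above.
```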
- Ordinarily, a connection is made to the primary NSD server when adding an NSD. This is done to check device names and to determine details such as the disk size, but it is not vital. If it is not feasible to have a connection to the nodes while adding NSDs to the configuration, these connections can be disabled using the --no-check flag. Extra care is needed to manually check the configuration when using this flag.
./spectrumscale nsd add /dev/sda -p nsdserver1 -s nsdserver2 --no-check
- You can set the failure group, file system, pool, and usage of an NSD in two ways:
- using the add command to set them for multiple new NSDs at once
- using the modify command to modify one NSD at a time
./spectrumscale nsd add "/dev/dm-*" -p nsdserver1 -s nsdserver2 -po pool1 -u dataOnly -fg 1 -fs filesystem_1
./spectrumscale nsd modify nsd_name -u metadataOnly -fs filesystem_1
For information on adding NSDs to an existing installation, see Adding nodes, NSDs, or file systems to an existing installation.
Defining file systems
File systems are defined implicitly with the NSD configuration and they are only created at the time of deployment, if there are NSDs assigned to them.
- To specify a file system, use the nsd add or nsd modify command to set the file system property of the NSD:
./spectrumscale nsd add "/dev/dm-*" -p server1 -s server2 -fs filesystem_1
[ INFO ] The installer will create the new file system filesystem_1 if it does not already exist.
./spectrumscale nsd modify server1_server2_1 -fs filesystem_2
[ INFO ] The installer will create the new file system filesystem_2 if it does not already exist.
- To list all file systems that currently have NSDs assigned to them, use the list command. This also displays file system properties including the block size and mount point:
./spectrumscale filesystem list
[ INFO ] Name          BlockSize  Mountpoint         NSDs Assigned
[ INFO ] filesystem_1  Default    /ibm/filesystem_1  3
[ INFO ] filesystem_2  Default    /ibm/filesystem_2  1
- To alter the block size and mount point from their default values, use the modify command:
./spectrumscale filesystem modify filesystem_1 -B 1M -m /gpfs/gpfs0
Important: NSDs are created when the ./spectrumscale install command is issued. The file system is created when the ./spectrumscale deploy command is issued.
It is not possible to directly rename or delete a file system; this is instead done by reassigning the NSDs to a different file system using the nsd modify command.
At this point, the cluster definition file contains:
- Nodes and node types defined
- NSDs optionally defined
- File systems optionally defined
To proceed with the GPFS installation, go to the next step: Installing GPFS and creating a GPFS cluster.
For information on adding file systems to an existing installation, see Adding nodes, NSDs, or file systems to an existing installation.