Deploying protocols
Use this information to deploy protocols in an IBM Spectrum Scale cluster using the installation toolkit.
Deployment of protocol services is performed on a subset of the cluster nodes that have been designated as protocol nodes using the ./spectrumscale node add node_name -p command. Protocol nodes have an additional set of packages installed that allow them to run the NFS, SMB and Object protocol services.
Data is served through these protocols from a pool of addresses designated as export IP addresses or CES public IP addresses. These addresses are assigned using the ./spectrumscale config protocols -e IP1,IP2,IP3... command, or added manually using the mmces address add command. The cluster manages the allocation of addresses in this pool, and IP addresses are automatically migrated to other available protocol nodes in the event of a node failure.
Before deploying protocols, a GPFS cluster must exist with GPFS started, and it must have at least one file system for the CES shared root file system.
- All protocol nodes must be running a supported operating system, and they must be all Power® (in big endian mode) or all Intel; however, the other nodes in the cluster can be on other platforms and operating systems.
For information about supported operating systems for protocol nodes and their required minimum kernel levels, see IBM Spectrum Scale FAQ in IBM® Knowledge Center (www.ibm.com/support/knowledgecenter/STXKQY/gpfsclustersfaq.html)
- The packages for all protocols are installed on every node designated as a protocol node; this is done even if a service is not enabled in your configuration.
- Services are enabled and disabled cluster wide; this means that every protocol node serves all enabled protocols.
- If SMB is enabled, the number of protocol nodes is limited to 16 nodes.
- If your protocol node has Red Hat Enterprise Linux 7.3 installed, an NFS service might already be running on the node, which can cause issues with the installation of IBM Spectrum Scale NFS packages. To avoid these issues, you must do the following before starting the deployment:
- Stop the NFS service using the systemctl stop nfs.service command.
- Disable the NFS service using the systemctl disable nfs.service command.
This command ensures that the change persists after a system reboot.
- The installation toolkit does not support adding protocol nodes to an existing ESS cluster that is at a version earlier than ESS 3.5.
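The Red Hat Enterprise Linux 7.3 steps above can be run together as a short sequence; the final status command is an optional verification step added here for illustration:

# Stop the running NFS service so it cannot conflict with the
# IBM Spectrum Scale NFS packages during deployment.
systemctl stop nfs.service
# Disable the service so the change persists across reboots.
systemctl disable nfs.service
# Optional: verify that the service is now inactive and disabled.
systemctl status nfs.service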
Defining a shared file system for protocols
The ./spectrumscale config protocols command can be used to define the shared file system (-f) and the shared file system mount point or path (-m):
usage: spectrumscale config protocols [-h] [-l] [-f FILESYSTEM]
[-m MOUNTPOINT]
For example: ./spectrumscale config protocols -f cesshared -m /gpfs/cesshared. To view the current settings, issue this command:
$ ./spectrumscale config protocols --list
[ INFO ] No changes made. Current settings are as follows:
[ INFO ] Shared File System Name is cesshared
[ INFO ] Shared File System Mountpoint or Path is /gpfs/cesshared
Adding protocol nodes to the cluster definition file
To deploy protocols on nodes in your cluster, they must be added to the cluster definition file as protocol nodes.
Issue the following command to designate a node as a protocol node:
./spectrumscale node add NODE_IP -p
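For example, to designate two nodes as protocol nodes (the IP addresses below are placeholders for illustration, not required values):

# Designate each node as a protocol node with the -p flag.
./spectrumscale node add 198.51.100.11 -p
./spectrumscale node add 198.51.100.12 -p
# Confirm the designation in the node list output.
./spectrumscale node list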
Enabling NFS and SMB
To enable or disable a set of protocols, use the ./spectrumscale enable and ./spectrumscale disable commands. For example:
./spectrumscale enable smb nfs
[ INFO ] Enabling SMB on all protocol nodes.
[ INFO ] Enabling NFS on all protocol nodes.
The current list of enabled protocols is shown as part of the spectrumscale node list command output; for example:
./spectrumscale node list
[ INFO ] List of nodes in current configuration:
[ INFO ] [Installer Node]
[ INFO ] 9.71.18.169
[ INFO ]
[ INFO ] [Cluster Name]
[ INFO ] ESDev1
[ INFO ]
[ INFO ] [Protocols]
[ INFO ] Object : Disabled
[ INFO ] SMB : Enabled
[ INFO ] NFS : Enabled
[ INFO ]
[ INFO ] GPFS Node Admin Quorum Manager NSD Server Protocol GUI Server OS Arch
[ INFO ] ESDev1-GPFS1 X X X X rhel7 x86_64
[ INFO ] ESDev1-GPFS2 X X rhel7 x86_64
[ INFO ] ESDev1-GPFS3 X X rhel7 x86_64
[ INFO ] ESDev1-GPFS4 X X X X rhel7 x86_64
[ INFO ] ESDev1-GPFS5 X X X X rhel7 x86_64
Starting with IBM Spectrum Scale release 4.2.2, the output of the spectrumscale node list command includes CPU architecture and operating system of the nodes.
If you are upgrading to IBM Spectrum Scale release 4.2.2 or later, ensure that the operating system and architecture fields in the cluster definition file are updated. These fields are automatically updated during the upgrade precheck. You can also update the operating system and CPU architecture fields in the cluster definition file by issuing the spectrumscale config update command. For more information, see Upgrading IBM Spectrum Scale components with the installation toolkit.
Configuring object
If the object protocol is enabled, further protocol-specific configuration is required; these options are configured using the spectrumscale config object command, which has the following parameters:
usage: spectrumscale config object [-h] [-l] [-f FILESYSTEM] [-m MOUNTPOINT]
[-e ENDPOINT] [-o OBJECTBASE]
[-i INODEALLOCATION] [-t ADMINTOKEN]
[-au ADMINUSER] [-ap ADMINPASSWORD]
[-su SWIFTUSER] [-sp SWIFTPASSWORD]
[-dp DATABASEPASSWORD]
[-mr MULTIREGION] [-rn REGIONNUMBER]
[-s3 {on,off}]
The object protocol requires a dedicated fileset as its back-end storage; this fileset is defined using the --filesystem (-f), --mountpoint (-m), and --objectbase (-o) flags, which specify the file system, mount point, and fileset respectively.
The --endpoint (-e) option specifies the host name that is used for access to the object store. This should be a round-robin DNS entry that maps to all CES IP addresses, which distributes the load of all keystone and object traffic that is routed to this host name. In other words, the endpoint is a DNS entry or load-balancer address that maps to the group of export IPs (that is, the CES IPs that were assigned on the protocol nodes).
The following user name and password options specify the credentials used for the creation of an admin user within Keystone for object and container access. If these have not already been configured with spectrumscale, the system prompts for them during spectrumscale deploy precheck and spectrumscale deploy. The following example shows how to configure these options to associate user names and passwords: ./spectrumscale config object -au ADMINUSER -ap ADMINPASSWORD -dp DATABASEPASSWORD
The --adminuser (-au) option specifies the admin user name. This credential is for the Keystone administrator. This user can be local or on a remote authentication server, depending on the authentication type used.
The --swiftuser (-su) option specifies the Swift user name. This credential is for the Swift services administrator. All Swift services run in this user's context. This user can be local or on a remote authentication server, depending on the authentication type used.
The --multiregion (-mr) option enables the multi-region object deployment feature. The --regionnumber (-rn) option specifies the region number.
The -s3 option specifies whether the S3 (Amazon Simple Storage Service) API should be enabled.
The -t ADMINTOKEN option sets the admin_token property in the keystone.conf file, which allows access to Keystone by token value rather than by user name and password. When installing with a local Keystone, the installer by default dynamically creates the admin_token used during initial configuration and deletes it when done. If admin_token is set explicitly with -t, it is not deleted from keystone.conf when done. The admin token can also be used when setting up a remote Keystone server if that server has admin_token defined.
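As an illustrative sketch, the object options described above might be combined as follows; the file system, fileset, and endpoint names here are placeholders, not required values:

# Define the file system, mount point, and dedicated fileset
# that back the object store.
./spectrumscale config object -f fs1 -m /gpfs/fs1 -o object_fileset
# Point clients at a round-robin DNS name that resolves to the CES IPs,
# and enable the S3 API.
./spectrumscale config object -e objectstore.example.com -s3 on
# Review the resulting object configuration.
./spectrumscale config object -l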
Adding export IPs
Export IPs or CES public IPs are used to export data via the protocols (NFS, SMB, Object). File and object clients use these public IPs to access data on GPFS file systems. Export IPs are shared between all protocols and are organized in a public IP pool (there can be fewer public IPs than protocol nodes).
- To add export IPs to your cluster, issue either this command:
./spectrumscale config protocols --export-ip-pool EXPORT_IP_POOL
Or this command:
./spectrumscale config protocols -e EXPORT_IP_POOL
Within these commands, EXPORT_IP_POOL is a comma-separated list of IP addresses.
- To view the current configuration, issue the following command:
./spectrumscale node list
To view the CES shared root and the IP pool, issue the following command: ./spectrumscale config protocols -l
To view the object configuration, run the following command: ./spectrumscale config object -l
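As an example of the commands above, a pool of three export IPs could be defined and then listed; the addresses are placeholders for illustration:

# Assign a comma-separated pool of CES public IP addresses.
./spectrumscale config protocols -e 198.51.100.21,198.51.100.22,198.51.100.23
# Verify the CES shared root and the IP pool.
./spectrumscale config protocols -l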
Running the spectrumscale deploy command
After adding the previously described protocol-related definition and configuration information to the cluster definition file, you can deploy the protocols specified in that file by issuing the following command:
./spectrumscale deploy
Running ./spectrumscale deploy --precheck first is not required, because spectrumscale deploy with no arguments also runs the pre-deploy checks. You can also use the mmnetverify command to identify any network problems before doing the deployment. For more information, see mmnetverify command.
The spectrumscale deploy command does the following:
- Performs pre-deploy checks.
- Creates file systems and deploys protocols as specified in the cluster definition file.
- Performs post-deploy checks.
You can explicitly specify the --precheck (-pr) option to perform a dry run of pre-deploy checks without starting the deployment. Alternatively, you can specify the --postcheck (-po) option to perform a dry run of post-deploy checks without starting the deployment. These options are mutually exclusive.
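For example, a cautious deployment might run the checks as separate dry runs around the actual deployment:

# Dry run of the pre-deploy checks only.
./spectrumscale deploy --precheck
# Full deployment: prechecks, file system creation, protocol
# deployment, and postchecks.
./spectrumscale deploy
# Dry run of the post-deploy checks only.
./spectrumscale deploy --postcheck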
After the deployment completes, you can verify the CES configuration by issuing the following command:
$ /usr/lpp/mmfs/bin/mmlscluster --ces
What to do next
Upon completion of the tasks described in this topic, you have deployed additional functionality to an active GPFS cluster. This additional functionality might consist of file systems, protocol nodes, specific protocols, and authentication. Although authentication can be deployed at the same time as protocols, these instructions are separated for conceptual purposes. If you have not yet set up authentication, continue with that step; see the topic Setting up authentication for instructions. You can also rerun the installation toolkit later to do the following:
- Add file systems
- Add protocol nodes
- Enable additional protocols
- Configure and enable authentication