Deploying protocols

Use this information to deploy protocols in an IBM Spectrum Scale™ cluster using the spectrumscale installation toolkit.

Deployment of protocol services is performed on a subset of the cluster nodes that have been designated as protocol nodes using the ./spectrumscale node add node_name -p command. Protocol nodes have an additional set of packages installed that allow them to run the NFS, SMB and Object protocol services.

Data is served through these protocols from a pool of addresses designated as Export IP addresses or CES "public" IP addresses using ./spectrumscale config protocols -e IP1,IP2,IP3... or added manually using the mmces address add command. The allocation of addresses in this pool is managed by the cluster, and IP addresses are automatically migrated to other available protocol nodes in the event of a node failure.
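As a sketch, addresses can be placed in the pool either through the toolkit or directly with mmces on a running CES cluster. The IP addresses here are hypothetical; substitute your own:

```shell
# Toolkit route: record the export IP pool in the cluster definition file.
./spectrumscale config protocols -e 192.0.2.10,192.0.2.11,192.0.2.12

# Manual route on an existing cluster: add a single CES address directly.
mmces address add --ces-ip 192.0.2.10
```

These commands require the IBM Spectrum Scale software to be installed, so they are shown for illustration only.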

Before deploying protocols, there must be a GPFS™ cluster that has GPFS started and at least one file system for the CES shared file system.

Notes:
  1. All the protocol nodes must be running a supported operating system, and the protocol nodes must be all Power® (in big endian mode) or all Intel, although the other nodes in the cluster can be on other platforms and operating systems.

    For information about supported operating systems for protocol nodes and their required minimum kernel levels, see the IBM Spectrum Scale FAQ in IBM® Knowledge Center.

  2. The packages for all protocols are installed on every node designated as a protocol node; this is done even if a service is not enabled in your configuration.
  3. Services are enabled and disabled cluster wide; this means that every protocol node serves all enabled protocols.
  4. If SMB is enabled, the number of protocol nodes is limited to 16 nodes.
  5. The spectrumscale installation toolkit no longer supports adding protocol nodes to an existing ESS cluster that is at a version earlier than ESS 3.5.

Defining a shared file system

To use protocol services, a shared file system must be defined. If the install toolkit is used to install GPFS, NSDs can be created at this time and, if they are associated with a file system, the file system is then created during deployment. If GPFS has already been configured, the shared file system can be specified manually or by rerunning the spectrumscale install command to assign an existing NSD to the file system. If you rerun spectrumscale install, be sure that your NSD servers are compatible with the spectrumscale installation toolkit and are contained within the clusterdefinition.txt file.
Note: If you do not configure a shared file system for protocols, the install toolkit automatically creates one named ces_shared, mounted at /ibm/ces_shared. This works only if you have created at least one NSD that is not already assigned to a file system.

The spectrumscale config protocols command can be used to define the shared file system (-f) and mount point (-m):

usage: spectrumscale config protocols [-h] [-l] [-f FILESYSTEM]
                                      [-m MOUNTPOINT]
For example:
$ ./spectrumscale config protocols -f cesshared -m /gpfs/cesshared

To show the current settings, issue this command:

$ ./spectrumscale config protocols --list
[ INFO  ] No changes made. Current settings are as follows:
[ INFO  ] Shared File System Name is cesshared
[ INFO  ] Shared File System Mountpoint is /gpfs/cesshared

Adding nodes to the cluster definition file

To deploy protocols on nodes in your cluster, they must be added to the cluster definition file as protocol nodes.

Run the following command to designate a node as a protocol node:

./spectrumscale node add NODE_IP -p 
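For example, designating two nodes and then confirming the result might look like this (the IP addresses are hypothetical):

```shell
# Designate two nodes as protocol nodes in the cluster definition file.
./spectrumscale node add 192.0.2.21 -p
./spectrumscale node add 192.0.2.22 -p

# Verify: the Protocol column of the node list should show an X for each.
./spectrumscale node list
```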

Enabling protocols

To enable or disable a set of protocols, use the spectrumscale enable and spectrumscale disable commands. For example:

$ ./spectrumscale enable smb nfs
[ INFO  ] Enabling SMB on all protocol nodes.
[ INFO  ] Enabling NFS on all protocol nodes.

The current list of enabled protocols is shown as part of the spectrumscale node list command output; for example:

$ ./spectrumscale node list
[ INFO  ] List of nodes in current configuration:
[ INFO  ] [Installer Node]
[ INFO  ] 9.71.18.169
[ INFO  ]
[ INFO  ] [Cluster Name]
[ INFO  ] ESDev1
[ INFO  ]
[ INFO  ] [Protocols]
[ INFO  ] Object : Disabled
[ INFO  ] SMB : Enabled
[ INFO  ] NFS : Enabled
[ INFO  ]
[ INFO  ] GPFS Node                    Admin  Quorum  Manager  NSD Server  Protocol
[ INFO  ] ESDev1-GPFS1                   X       X       X                    X
[ INFO  ] ESDev1-GPFS2                                   X                    X
[ INFO  ] ESDev1-GPFS3                                   X                    X
[ INFO  ] ESDev1-GPFS4                   X       X       X          X
[ INFO  ] ESDev1-GPFS5                   X       X       X          X

Configuring Object

If the object protocol is enabled, further protocol-specific configuration is required; these options are configured using the spectrumscale config object command, which has the following parameters:

usage: spectrumscale config object [-h] [-l] [-f FILESYSTEM] [-m MOUNTPOINT]
                                   [-e ENDPOINT] [-o OBJECTBASE]
                                   [-i INODEALLOCATION] [-t ADMINTOKEN]
                                   [-au ADMINUSER] [-ap ADMINPASSWORD]
                                   [-su SWIFTUSER] [-sp SWIFTPASSWORD]
                                   [-dp DATABASEPASSWORD]
                                   [-mr MULTIREGION] [-rn REGIONNUMBER]
                                   [-s3 {on,off}]

The object protocol requires a dedicated fileset as its back-end storage. This fileset is defined with the --filesystem (-f), --mountpoint (-m), and --objectbase (-o) flags, which specify the file system, mount point, and fileset respectively.

The --endpoint (-e) option specifies the host name that is used for access to the file store. This should be a round-robin DNS entry that maps to all CES IP addresses, which distributes the keystone and object traffic that is routed to this host name. In other words, the endpoint is a name in DNS or in a load balancer that maps to the group of export IPs (the CES IPs that were assigned to the protocol nodes).
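One way to confirm the round-robin mapping is to resolve the endpoint host name and compare the result with the CES address pool. The host name below is hypothetical; substitute your own endpoint:

```shell
# List the unique IPv4 addresses behind the endpoint name. With round-robin
# DNS configured, this should print every CES "public" IP in the pool.
getent ahostsv4 cesobject.example.com | awk '{print $1}' | sort -u
```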

The following user name and password options specify the credentials used to create an admin user within Keystone for object and container access. The system prompts for these values during spectrumscale deploy precheck and spectrumscale deploy if they have not already been configured with spectrumscale. The following example shows how to configure these options: ./spectrumscale config object -au ADMINUSER -ap ADMINPASSWORD -dp DATABASEPASSWORD

The -au ADMINUSER option specifies the admin user name. This credential is for the Keystone administrator. Depending on the authentication type used, this user can be local or defined on a remote authentication server.

The -ap ADMINPASSWORD option specifies the password for the admin user.
Note: You are prompted to enter a Secret Encryption Key, which is used to securely store the password. Choose a memorable pass phrase; you are prompted for it each time you enter the password.

The -su SWIFTUSER option specifies the Swift user name. This credential is for the Swift services administrator. All Swift services run in this user's context. Depending on the authentication type used, this user can be local or defined on a remote authentication server.

The -sp SWIFTPASSWORD option specifies the password for the Swift user.
Note: You are prompted to enter a Secret Encryption Key, which is used to securely store the password. Choose a memorable pass phrase; you are prompted for it each time you enter the password.
The -dp DATABASEPASSWORD option specifies the password for the object database.
Note: You are prompted to enter a Secret Encryption Key, which is used to securely store the password. Choose a memorable pass phrase; you are prompted for it each time you enter the password.

The -mr MULTIREGION option enables the multi-region object deployment feature. The -rn REGIONNUMBER option specifies the region number.

The -s3 option specifies whether the S3 (Amazon Simple Storage Service) API should be enabled.

The -t ADMINTOKEN option sets the admin_token property in the keystone.conf file, which allows access to Keystone by token value rather than by user and password. When you install with a local Keystone, by default the installer dynamically creates the admin_token that is used during initial configuration and deletes it when done. If the token is set explicitly with -t, admin_token is not deleted from keystone.conf when done. The admin token can also be used when setting up a remote Keystone server if that server has admin_token defined.

Attention: If SELinux is disabled during installation of IBM Spectrum Scale for object storage, enabling SELinux after installation is not supported.

Adding export IPs

Note: This is mandatory for protocol deployment.

Export IPs, or CES "public" IPs, are used to export data through the protocols (NFS, SMB, Object). File and object clients use these public IPs to access data on GPFS file systems. Export IPs are shared among all protocols and are organized in a public IP pool; there can be fewer public IPs than protocol nodes.

Note: Each export IP must have an associated host name, and reverse DNS lookup must be configured for each.
  1. To add Export IPs to your cluster, run either this command:
    $ ./spectrumscale config protocols --export-ip-pool EXPORT_IP_POOL

    Or this command:

    $ ./spectrumscale config protocols -e EXPORT_IP_POOL

    Within these commands, EXPORT_IP_POOL is a comma-separated list of IP addresses.

  2. To view the current configuration, run the following command:
    $ ./spectrumscale node list
    To view the CES shared root and the IP pool, run the following command:
    $ ./spectrumscale config protocols -l
    To view the Object configuration, run the following command:
    $ ./spectrumscale config object -l
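The requirement that every export IP have a host name and a working reverse DNS entry can be checked with a short loop. The addresses here are hypothetical; list your own export IPs:

```shell
# Flag any export IP that has no reverse DNS (PTR) entry.
for ip in 192.0.2.10 192.0.2.11 192.0.2.12; do
  getent hosts "$ip" >/dev/null || echo "no reverse DNS entry for $ip"
done
```

If the loop prints nothing, every listed address resolved back to a host name.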

Running the spectrumscale deploy command

After adding the previously described protocol-related definition and configuration information to the cluster definition file, you can deploy the protocols that are specified in that file.

To perform deploy checks prior to deploying, use the spectrumscale deploy command with the --precheck (-pr) argument:
./spectrumscale deploy --precheck
This step is optional because spectrumscale deploy with no arguments also runs these checks.
Use the following command to deploy protocols:
./spectrumscale deploy
Note: You are prompted for the Secret Encryption Key that you provided while configuring object and/or authentication unless you disabled prompting.
This command does the following:
  • Performs pre-deploy checks.
  • Creates file systems and deploys protocols as specified in the cluster definition file.
  • Performs post-deploy checks.

You can explicitly specify the --precheck (-pr) option to perform a dry run of pre-deploy checks without starting the deployment. Alternatively, you can specify the --postcheck (-po) option to perform a dry run of post-deploy checks without starting the deployment. These options are mutually exclusive.

After a successful deployment, you can verify the cluster and CES configuration by running this command:
$ /usr/lpp/mmfs/bin/mmlscluster --ces

What to do next

Upon completion of the tasks described in this topic, you have deployed additional functionality to an active GPFS cluster. This additional functionality can consist of file systems, protocol nodes, specific protocols, and authentication. Although authentication can be deployed at the same time as protocols, the instructions are separated for conceptual clarity. If you have not yet set up authentication, continue with the topic Setting up authentication for instructions.

You can rerun the spectrumscale deploy command in the future to do the following:
  • add file systems
  • add protocol nodes
  • enable additional protocols
  • configure and enable authentication
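For example, a later rerun that enables the object protocol might look like this. The file system name, mount point, fileset, and endpoint host name are hypothetical; only the flags come from the usage synopsis above:

```shell
# Enable the object protocol on all protocol nodes.
./spectrumscale enable object

# Point object at its file system, mount point, fileset, and endpoint.
./spectrumscale config object -f ObjectFS -m /ibm/ObjectFS \
    -o object_fileset -e cesobject.example.com

# Rerun the deployment; pre- and post-deploy checks run automatically.
./spectrumscale deploy
```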