scalectl cluster command

Creates and manages an IBM Storage Scale cluster.

Synopsis

scalectl cluster create --node-name {NodeName} [--admin-node-name {AdminNodeName}] [--cluster-name {ClusterName}] [--domain-name {DomainName}] [--node-comment {NodeComment}] [--port-number {PortNumber}] [--remote-file-copy-cmd {RemoteFileCopy}] [--remote-shell-cmd {RemoteShellCommand}] [--autoload]
       
Or
scalectl cluster list [--max-items {MaxItemNumber}] [--no-pagination] [--view {basic | ces | cnfs | node-comments}]
       
Or
scalectl cluster manager update [-m {ManagerNodeName}]
       
Or
scalectl cluster migrate [--precheck]
       
Or
scalectl cluster remote {add [--contact-nodes {Node[,Node...]}] [-n {RemoteClusterName}] | authorize [--cluster-id {ClusterID}] [-F {ResourceDefinitionFile}] [-n {ClusterName}] | delete {ClusterName} -f | get {ClusterName} [--fields {FieldName}] [--view {basic | full}] | list [--fields {FieldName}] [-n {MaxItemNumber}] [-x] [-p {PageSize}] [-t {PageToken}] [--view {basic | full}] | refresh {ClusterName} | unauthorize {ClusterName} | update [--cipher-list] [--cluster-type {accessing | owning | owning-and-accessing}] [--contact-nodes {Node[,Node...]}] [-F {ResourceDefinitionFile}] [-n {ClusterName}] [--resource-update]}
       

Availability

Available on all IBM Storage Scale editions.

Description

Use the scalectl cluster command to create and manage an IBM Storage Scale cluster for a set of nodes.

Parameters

create
Creates a cluster.
Note: When a cluster is created, the first node is automatically designated as both a quorum and manager node. You cannot select the node designations during the creation of the first cluster node.
--admin-node-name {AdminNodeName}
Specifies a separate node name for administrative commands.
-C or --cluster-name {ClusterName}
Specifies the cluster name.
-U or --domain-name {DomainName}
Specifies the UID domain name for the cluster.
--node-comment {NodeComment}
Adds a comment with additional information about the node.
-N or --node-name {NodeName}
Specifies the hostname or IP address of the node for IBM Storage Scale I/O daemon-to-daemon communication. If the --admin-node-name parameter is not specified, this address is also used for communication between administration daemons.
--port-number {PortNumber}
Specifies the communication port number for the I/O daemon. The default value is 1191.
-R or --remote-file-copy-cmd {RemoteFileCopy}
Specifies the fully qualified path name for the remote file-copy program of the mm-command CLI. The default value is /usr/lpp/mmfs/bin/scaleadmremotetransfer.
-r or --remote-shell-cmd {RemoteShellCommand}
Specifies the fully qualified path name for the remote shell program of the mm-command CLI. The default value is /usr/lpp/mmfs/bin/scaleadmremoteexecute.
-A or --autoload
Specifies that GPFS daemons are to be automatically started when nodes come up.
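For example, a command of the following form (the node, cluster, and domain names are illustrative) creates a cluster whose first node uses a separate administration interface and starts its daemons automatically:
  scalectl cluster create -N node1-daemon.example.com --admin-node-name node1-admin.example.com -C cluster1 -U example.com -A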
list
Displays details about IBM Storage Scale clusters.
-n or --max-items {MaxItemNumber}
Specifies the maximum number of items to list at a time.
-x or --no-pagination
Disables pagination on the client side so that subsequent page tokens are not used.
--view {basic | ces | cnfs | node-comments}
Specifies the view for the cluster list output. The possible values are basic, ces, cnfs, and node-comments. The default value is basic.
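For example, the following command lists at most 10 items in the CES view (the item limit is illustrative):
  scalectl cluster list --view ces --max-items 10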
manager
Updates the cluster manager. For more information, see GPFS cluster manager.
update
Assigns a new cluster manager node. To run this command, you must have the RBAC permission for the update action on the /scalemgmt/v3/clusters/manager resource.
-m or --manager {ManagerNodeName}
Specifies the manager node name.
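For example, the following command (node2 is an illustrative node name) assigns node2 as the cluster manager:
  scalectl cluster manager update -m node2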
migrate
Migrates a classic IBM Storage Scale cluster to use the IBM Storage Scale native REST API. After migration, restart the I/O daemons on each node in the cluster to ensure proper communication with the newly migrated administration daemons.
-p or --precheck
Runs a precheck for cluster migration. Use this option to test for issues before running the full migration. If any issues are found, resolve them and re-run the precheck command until all issues are cleared.
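For example, a typical migration runs the precheck first and then, after any reported issues are resolved, the migration itself:
  scalectl cluster migrate --precheck
  scalectl cluster migrate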
remote
Adds and manages remote clusters.
add
Adds an owning cluster to the set of remote clusters that are known to the accessing cluster. To mount file systems that belong to another IBM Storage Scale cluster, you must first make the nodes of the accessing cluster aware of the cluster that owns the file systems. This command makes the nodes of an accessing cluster aware of an owning cluster. The following information is required from the administrator of the owning IBM Storage Scale cluster:
  • The name of the owning cluster.
  • The names or IP addresses of a few nodes that belong to the owning cluster.

During the add process, the accessing cluster initiates the cluster key-exchange protocol, which sends the key of the accessing cluster to the owning cluster and retrieves the key of the owning cluster.

Each cluster is managed independently, and changes between the clusters are not automatically synchronized, unlike the synchronization that occurs between nodes within a cluster. After an owning cluster is defined by using the scalectl cluster remote add command, the information about that cluster is automatically propagated across all nodes in the accessing cluster. However, if the administrator of the owning cluster renames the cluster, deletes or modifies contact nodes, or changes the public key file, the information in your cluster becomes outdated. The administrator of the owning IBM Storage Scale cluster is responsible for notifying you of such changes. To update this information, use the scalectl cluster remote update or scalectl cluster remote refresh commands.

--contact-nodes {Node[,Node...]}
Specifies the contact nodes that the local cluster uses to contact the remote cluster. Nodes are identified by hostname or IP address.
-n or --name {RemoteClusterName}
Specifies the remote cluster name.
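For example, the following command (the cluster and node names are illustrative) makes the accessing cluster aware of the owning cluster owningCluster1 through three contact nodes:
  scalectl cluster remote add -n owningCluster1 --contact-nodes node1,node2,node3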
authorize
Authorizes an accessing remote cluster to access resources on the owning cluster. This command authorizes an accessing cluster to mount a specific file system and access a defined list of allowed filesets for that file system. When the allowed list is created, any filesets that are not included in the list cannot be accessed from the authorized accessing clusters.
--cluster-id {ClusterID}
Specifies the cluster ID.
-F or --file {ResourceDefinitionFile}
Specifies a JSON-formatted resource definition file.
-n or --name {ClusterName}
Specifies the remote cluster name.
delete {ClusterName}
Deletes an owning cluster definition from the accessing cluster.
-f or --force
Forces the deletion of the owning cluster definition.
get {ClusterName}
Retrieves details about a remote cluster.
--fields {FieldName}
Filters output to display only the specified field names.
--view {basic | full}
Specifies the view for remote cluster contents. The possible values are basic and full. The default value is basic. The full view displays all available fields. The basic view omits filesets that exist on the remote cluster.
list
Displays information about remote clusters.
--fields {FieldName}
Filters output to display only the specified field names.
-n or --max-items {MaxItemNumber}
Specifies the maximum number of items to list at a time.
-x or --no-pagination
Disables pagination on the client side so that subsequent page tokens are not used.
-p or --page-size {PageSize}
Specifies the number of items to list per API request.
-t or --page-token {PageToken}
Specifies the page token that is received from a previous list command. Provide this page token to retrieve the next page.
--view {basic | full}
Specifies the view for remote cluster contents. The possible values are basic and full. The default value is basic. The full view displays all available fields. The basic view omits filesets that exist on the remote cluster.
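For example, the following commands (the page size is illustrative, and {PageToken} is a placeholder for the token value that the first command returns) retrieve the remote cluster list one page at a time:
  scalectl cluster remote list --page-size 20
  scalectl cluster remote list --page-size 20 --page-token {PageToken}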
refresh {ClusterName}
Refreshes the information of an owning cluster on the accessing cluster. This command initiates a new cluster key exchange protocol from the accessing cluster for an existing owning cluster definition.

unauthorize {ClusterName}
Deletes authorization of an accessing cluster to access the resources of an owning cluster.

update
Updates the information of an owning cluster that is associated with an accessing cluster.
--cluster-type {accessing | owning | owning-and-accessing}
Specifies the relationship of the remote cluster to the local cluster. The possible values are owning-and-accessing, accessing, and owning. The default value is owning.
--contact-nodes {Node[,Node...]}
Specifies the contact nodes that the local cluster uses to contact the remote cluster. Nodes are identified by hostname or IP address.
-F or --file {ResourceDefinitionFile}
Specifies a JSON-formatted resource definition file.
-n or --name {ClusterName}
Specifies the remote cluster name.
--resource-update
Updates the resource authorization information. This flag is required to update the resource authorization.
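For example, the following command (the cluster and file names are illustrative) updates the resource authorization of an accessing cluster from a JSON resource definition file of the format shown in example 6:
  scalectl cluster remote update -n accessingCluster1 -F resourcefile.json --resource-update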

Global flags

The following global flags can be used with any scalectl command and subcommand:
--bearer
If true, reads the OIDC_TOKEN environment variable and sends it as the authorization bearer header for the request. Use this flag with the --url option.
--cert {Certificate}
Specifies the path to the client certificate file for authentication.
--debug {Filepath[="stderr"]}
Enables debug logging for the current request. Accepts an absolute file path to store logs by using --debug=<file>. If no file path is specified, logs are sent to stderr.
-h or --help
Displays help for scalectl commands.
--domain {DomainName}
Sets the domain for the request. The default value is StorageScaleDomain.
--insecure-skip-tls-verify
If true, skips verification of the server certificate. This option makes HTTPS connections insecure.
--json
Displays output in JSON format.
--key {PrivateKeyFile}
Specifies the path to the client certificate private key file for authentication.
--url {ip_address}
Sends the request over HTTPS to the specified endpoint <FQDN/IP>:<port>. For an IPv6 address, use square brackets, for example, [IPv6]:<port>. If no port is specified, port 46443 is used by default. An example follows this list.
--version
Displays the scalectl build information.
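For example, the following commands (the endpoint addresses and file paths are illustrative) send requests to a remote endpoint, first with client certificate authentication and then, over IPv6, with a bearer token read from the OIDC_TOKEN environment variable:
  scalectl cluster list --url scale1.example.com:46443 --cert /path/to/client.pem --key /path/to/client.key
  scalectl cluster list --url [2001:db8::10]:46443 --bearer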

Exit status

0
Successful completion.
nonzero
A failure occurred.

Security

You must have the specific role-based access control (RBAC) permission to run the command. For more information, see Role-based access control.

Examples

  1. To create a cluster, issue the following command:
    scalectl cluster create -N mos-dev-2 --cluster-name "production1" --node-comment "Initial cluster node" -A
           
    A sample output is as follows:
    Warning: Use the mmchlicense command to designate licenses as needed.
    
             The cluster was created with the tscCmdAllowRemoteConnections configuration parameter set to 'no'.
             If a remote cluster is established with another cluster whose release level (minReleaseLevel) is less than 5.1.3.0,
             change the value of tscCmdAllowRemoteConnections in this cluster to 'yes'.
    
    Successfully created cluster production1
  2. To list the details of the cluster, issue the following command:
    scalectl cluster list 
           
    A sample output is as follows:
    Storage Scale cluster information
    =================================
      GPFS cluster name        : production1
      GPFS cluster id          : 13657371656500000000
      GPFS UID domain          : production1
      Remote shell command     : /usr/lpp/mmfs/bin/scaleadmremoteexecute
      Remote file copy command : /usr/lpp/mmfs/bin/scaleadmremotetransfer
      Repository type          : CCR
      Manager                  : <none>
    
      Node | Admin node name |  Daemon node name  | IP address   | Designation
    =========================================================================
      1    | mos-dev-2       |  mos-dev-2         | 9.XXX.XXX.39 |  quorum,manager
  3. To list the details of the cluster with the node comment, issue the following command:
     scalectl cluster list --view node-comments 
           
    A sample output is as follows:
    Storage Scale cluster information
    =================================
      GPFS cluster name : testcluster-1  
      GPFS cluster id   : 11856801038225182420  
      Manager           : testcluster-1  
    
      Node Number | Daemon node name                  | Node Comment  
    ================================================================
      1           | testcluster-1.fyre.ibm.com |               
      2           | testcluster-2.fyre.ibm.com |               
      3           | testcluster-3.fyre.ibm.com |               
      4           | testcluster-4.fyre.ibm.com |               
    
    
  4. To run a precheck for cluster migration, issue the following command:
    scalectl cluster migrate --precheck
    A sample output is as follows:
    Migration PreCheck command completed successfully.
  5. To migrate the classic IBM Storage Scale cluster to the use of IBM Storage Scale native REST API, issue the following command:
    scalectl cluster migrate
    A sample output is as follows:
    Migration command completed successfully.  The cluster was migrated.
  6. To authorize an accessing cluster to access the resources of an owning cluster, issue the following command:
    scalectl cluster remote authorize --name accessingCluster1 --cluster-id 3793952426292428382 --file resourcefile.json
    A sample output is as follows:
    Remote access authorized for cluster 'accessingCluster1'.
    
    Remote access granted for the following filesystems:
    
       Filesystem | Disposition | Access Type | Root Squash
     ======================================================
       fs1        | GRANT       | rw          | Root allowed

     Remote access granted for the following filesets:

       Fileset  | Disposition | Filesystem
     =====================================
       root     | GRANT       | fs1
       fileset1 | GRANT       | fs1
       fileset2 | GRANT       | fs1
    
    
    A sample resourcefile.json contains the following information:
    {
      "version": 0,
      "resources": {
        "filesystems": [
          {
            "name": "fs1",
            "disposition": "GRANT",
            "access_type": "rw"
          }
        ],
        "filesets": [
          {
            "name": "root",
            "fs_name": "fs1",
            "disposition": "GRANT"
          },
          {
            "name": "fileset1",
            "fs_name": "fs1",
            "disposition": "GRANT"
          },
          {
            "name": "fileset2",
            "fs_name": "fs1",
            "disposition": "GRANT"
          }
        ]
      }
    }
  7. To add an owning cluster to the nodes of an accessing cluster, issue the following command:
    scalectl cluster remote add --name owningCluster1 --contact-nodes node1
    
    
    A sample output is as follows:
    Remote cluster 'owningCluster1' added.
  8. To refresh information of an owning cluster on an accessing cluster, issue the following command:
    scalectl cluster remote refresh owningCluster1
    
    
    A sample output is as follows:
    Remote cluster 'owningCluster1' refreshed.
  9. To update information of an owning cluster on an accessing cluster, issue the following command:
    scalectl cluster remote update -n owningCluster1 --contact-nodes node1,node2 --cluster-type owning
    
    
    A sample output is as follows:
    Successfully updated information for remote cluster 'owningCluster1'.
  10. To get information about a remote cluster, issue the following command:
    scalectl cluster remote get owningCluster1
    A sample output is as follows:
    Remote cluster information
    ==========================
      Remote cluster name                                     : owningCluster1
      Cluster id                                              : 13470923574217973061
      SHA digest                                              : f7347e2427b4f059145e086ca1dd4e18a68cea3804e0024eefec00d11dca6c4d
      Contact nodes                                           : mosdev-31
      Relationship of the remote cluster to our local cluster : owning
      Remote filesystems accessed from owningCluster1         : None
  11. To list information about all remote clusters, issue the following command:
    scalectl cluster remote list --view basic
    
    
    A sample output is as follows:
      Remote cluster name            | Cluster id          | Cipher list | SHA digest                                                       | Contact nodes | Relationship of the remote     | Filesystems
                                     |                     |             |                                                                  |               | cluster to our local cluster   |
    ==================================================================================================================================================================================================
      scale-cluster-2.openstacklocal | 8046321317524150631 |             | 0cedcdef951f81768865c8402132eced242ca335bd6dade00d55e3e201a92c99 | 9.114.207.52  | owning                         |
    
  12. To delete the definition of an owning cluster from an accessing cluster, issue the following command:
    scalectl cluster remote delete owningCluster1
    
    
    A sample output is as follows:
    Definition of owning cluster 'owningCluster1' deleted.

Location

/usr/lpp/mmfs/bin