scalectl filesystem command
Creates and manages the file system.
Synopsis
scalectl filesystem create [-k {posix | nfs4 | all}] [-a {yes | no}] [-A {yes | no | automount}] [-j {cluster | scatter}] [-B {BlockSize}] [-r {DefaultDataReplicas}] [-m {DefaultMetadataReplicas}] [-T {DefaultMountPoint}] [-d {DiskDesc[;DiskDesc...]}] [-t {DriveLetter}] [-z {yes | no}] [-Q {yes | no}] [-E {yes | no}] [-F {FilePath}] [--filesetdf {yes | no}] [-c {yes | no}] [-I {MaxInodesLimit}] [--inode-segment-manager {yes | no}] [-i {InodeSize}] [-x {NumInodes}] [-D {nfs4 | posix}] [-l {LogReplicas}] [-L {LogFileSize}] [-R {MaxDataReplicas}] [-M {MaxMetaReplicas}] [-P {Priority}] [-n {FilesystemName}] [--nfs4-owner-write-acl {yes | no}] [-N {NumNodes}] [--perfileset-quotas {yes | no}] [-K {no | whenpossible | always}] [-S {yes | no | relatime}] [--verify-disks {yes | no}] [--version {Version}]
Or
scalectl filesystem delete {FilesystemName} [-p]
Or
scalectl filesystem disk {add {FilesystemName} [--block-group-factor {BlockGroupFactor}] [--disk-usage {localCache | dataAndMetadata | metadataOnly | dataOnly | descOnly}] [-g {FailureGroup}] [--layout-map {cluster | scatter}] [-n {DiskName}] [--performance-pool] [--pool-name {PoolName}] [-t {no | nvme | scsi | auto}] [--verify-disks {yes | no}] [--write-affinity {yes | no}] [--write-affinity-depth {WriteAffinityDepth}] | batchAdd {FilesystemName} [-F {FilePath}] [--verify-disks {yes | no}] | batchDelete {FilesystemName} [-n {DiskName}] [-m] [-c {NodeClass[,NodeClass...]}] [--pit-continues-on-error] [-r] [--qos-class {QosClass}] [-b {lenient-round-robin | strict-round-robin | no_rebalance}] [-N {Node[,Node...]}] | delete {FilesystemName} [-m] [-c {NodeClass[,NodeClass...]}] [--pit-continues-on-error] [-r] [--qos-class {QosClass}] [-b {lenient-round-robin | strict-round-robin | no_rebalance}] [-N {Node[,Node...]}] | get {DiskName} [--fields {FieldName}] | list [--fields {FieldName}] [-n {MaxItemNumber}] [-x] [-p {PageSize}] [-t {PageToken}] | quorum [get {FilesystemName}]}
Or
scalectl filesystem get {FilesystemName} [--fields {FieldName}]
Or
scalectl filesystem list [--all-domains] [--fields {FieldName}] [-n {MaxNumItemNumber}] [-x] [-p {PageSize}] [-t {PageToken}]
Or
scalectl filesystem manager update [-m {ManagerNodeName}]
Or
scalectl filesystem mount {FilesystemName} [-o {MountOptions}] [-T {MountPoint}] [-c {NodeClass[,NodeClass...]}] [-N {Node[,Node...]}]
Or
scalectl filesystem mountAll [-o {MountOptions}] [-c {NodeClass[,NodeClass...]}] [-N {Node[,Node...]}]
Or
scalectl filesystem mountState {FilesystemName} [-C {ClusterName}] [--fields {FieldName}]
Or
scalectl filesystem pool {get {FilesystemName} {PoolName} [--fields {FieldName}] | list [-n {MaxNumItemNumber}] [-x] [-p {PageSize}] [-t {PageToken}] | update {FilesystemName} {PoolName} [--block-group-factor {BlockGroupFactor}] [--write-affinity-depth {WriteAffinityDepth}]}
Or
scalectl filesystem remote {add [-A {yes | no | automount}] [-o {MountOptions}] [-T {MountPoint}] [-P {MountPriority}] [-n {DeviceName}] [-C {OwningClusterName}] [--remote-name {FilesystemName}] | update [-A {yes | no | automount}] [-o {MountOptions}] [-T {MountPoint}] [-P {MountPriority}] [-C {OwningClusterName}] [--remote-name {FilesystemName}] | delete {FilesystemName} [-p]}
Or
scalectl filesystem snapshot {batchDelete {FilesystemName} [-I {:snapshot1[,:snapshot2]}] [-c {NodeClass[,NodeClass...]}] [--pit-continues-on-error] [-N {Node[,Node...]}] | create {FilesystemName} [--expiration-time {YYYY-MM-DDTHH:MM:SSZ}] [-n {SnapshotName}] | delete {FilesystemName} {SnapshotName} [-c {NodeClass[,NodeClass...]}] [--pit-continues-on-error] [-N {Node[,Node...]}] | get {FilesystemName} {SnapshotName} [--fast] [--view {basic | data}] | list [--all-domains] [--fast] [-n {MaxItemNumber}] [-x] [-p {PageSize}] [-t {PageToken}] [--view {basic | data}] | listSnapdir {FilesystemName}}
Or
scalectl filesystem unmount {FilesystemName} [-C {ClusterName}] [-f] [-c {NodeClass[,NodeClass...]}] [-N {Node[,Node...]}]
Or
scalectl filesystem unmountAll [-f] [-c {NodeClass[,NodeClass...]}] [-N {Node[,Node...]}]
Or
scalectl filesystem update {FilesystemName} [-k {posix | nfs4 | all}] [-a {yes | no}] [-A {yes | no | automount}] [-r {DefaultDataReplicas}] [-m {DefaultMetadataReplicas}] [-T {DefaultMountPoint}] [-t {DriveLetter}] [-z {yes | no}] [-Q {yes | no}] [-E {yes | no}] [--filesetdf {yes | no}] [-c {yes | no}] [-I {InodesLimit}] [--inode-segment-manager {yes | no}] [-x {NumInodes}] [-D {nfs4 | posix}] [-l {LogReplicas}] [-L {LogFileSize}] [-P {Priority}] [-n {FilesystemName}] [--nfs4-owner-write-acl {yes | no}] [-N {NumNodes}] [--perfileset-quotas {yes | no}] [-K {no | whenpossible | always}] [-S {yes | no | relatime}] [-V {Version}] [-W {HAWCThreshold}]
Availability
Available on all IBM Storage Scale editions.
Description
Use the scalectl filesystem command to manage the file system that is registered with the IBM Storage Scale cluster. A file system consists of a set of disks that store file data, file metadata, and supporting entities, such as quota files and recovery logs.
Parameters
- create
- Creates an IBM Storage Scale file system. You can mount a maximum of 256 file systems in an IBM
Storage Scale cluster at any one time, including remote file systems. The values that you set for
block size, replication, and the maximum number of files (number of inodes) can affect the
performance of the file system. The operation attributes for this command are long-running operation
(LRO) and cancel operation. To run this command, you must have the RBAC permission for the
create action on the /scalemgmt/v3/filesystems resource.
- The block size, subblock size, and number of subblocks per block of a file system are set when the file system is created and cannot be changed later.
- All the data blocks in a file system have the same block size and the same subblock size. The data blocks and subblocks in the system storage pool and user storage pools have the same size.
- All the metadata blocks in a file system have the same block size and the same subblock size.
- The block size cannot exceed the value of the maxblocksize cluster attribute. Use the mmchconfig command to set this maxblocksize attribute.
- -k or --acl-semantics {posix | nfs4 | all}
- Specifies the type of authorization that is supported by the file system. The possible values are posix, nfs4, and all. The default value is all.
- -a or --auto-inode-limit {yes | no}
- Specifies whether to automatically increase the maximum number of inodes per inode space in the file system. The possible values are yes and no. The default value is no.
- -A or --auto-mount {yes | no | automount}
- Specifies when to mount the file system. The possible values are yes, automount, and no. The default value is yes.
- -j or --block-allocation {cluster | scatter}
- Specifies the block-allocation map type. The possible values are cluster and scatter. The default value is cluster.
- -B or --block-size {BlockSize}
- Specifies the size of data blocks.
- -r or --default-data-replicas {DefaultDataReplicas}
- Specifies the default number of copies of each data block for a file.
- -m or --default-metadata-replicas {DefaultMetadataReplicas}
- Specifies the default number of copies of inodes, directories, and indirect blocks for a file.
- -T or --default-mount-point {DefaultMountPoint}
- Specifies the default mount point for the file system.
- -d or --disks {DiskDesc[;DiskDesc...]}
- Specifies the disk descriptor.
- -t or --drive-letter {DriveLetter}
- Specifies the drive letter to use when mounting the file system on Windows.
- -z or --enable-dmapi {yes | no}
- Enables or disables DMAPI on the file system. The possible values are yes and no. The default value is no.
- -Q or --enable-quotas {yes | no}
- Activates quotas automatically when the file system is mounted. The possible values are yes and no. The default value is no.
- -E or --exact-mtime {yes | no}
- Specifies whether to report exact mtime values. The possible values are yes and no. The default value is yes.
- -F or --file {FilePath}
- Specifies the JSON-formatted or GPFS stanza-formatted file path.
- --filesetdf {yes | no}
- Reports df information at the independent file set level when enabled. The possible values are yes and no. The default value is no.
- -c or --flush-on-close {yes | no}
- Enables or disables the automatic flushing of disk buffers when the system closes files that were opened for write operations. The possible values are yes and no. The default value is no.
- -I or --inode-limit {MaxInodesLimit}
- Specifies the maximum number of files in the file system.
- --inode-segment-manager {yes | no}
- Enables the inode segment manager. The possible values are yes and no. The default value is yes.
- -i or --inode-size {InodeSize}
- Specifies the byte size of the inode.
- -x or --inodes-prealloc {NumInodes}
- Specifies the number of inodes that the system immediately preallocates.
- -D or --lock-semantics {nfs4 | posix}
- Specifies whether a deny-write open lock blocks writing, which is required for nfsv4. The possible values are nfs4 and posix. The default value is nfs4.
- -l or --log-replicas {LogReplicas}
- Specifies the number of recovery log replicas.
- -L or --logfile-size {LogFileSize}
- Specifies the size of internal log files.
- -R or --max-data-replicas {MaxDataReplicas}
- Specifies the maximum number of copies of data blocks for a file.
- -M or --max-metadata-replicas {MaxMetaReplicas}
- Specifies the maximum number of copies of inodes, directories, and indirect blocks for a file.
- -P or --mount-priority {Priority}
- Specifies the mount priority for the file system.
- -n or --name {FilesystemName}
- Specifies the file system name.
- --nfs4-owner-write-acl {yes | no}
- Specifies the nfsv4 implicit owner WRITE_ACL permission. The possible values are yes and no. The default value is yes.
- -N or --num-nodes {NumNodes}
- Specifies the estimated number of nodes that mount the file system in the local cluster and all remote clusters.
- --perfileset-quotas {yes | no}
- Sets the scope of user and group quota limit checks to the individual file set level. The possible values are yes and no. The default value is no.
- -K or --strict-replication {no | whenpossible | always}
- Specifies whether strict replication is enforced. The possible values are no, whenpossible, and always. The default value is whenpossible.
- -S or --suppress-atime {yes | no | relatime}
- Controls how the file attribute atime is updated. The possible values are yes, no, and relatime. The default value is relatime.
- --verify-disks {yes | no}
- Verifies whether the disks are IBM Storage Scale disks. The possible values are yes and no. The default value is yes.
- --version {Version}
- Specifies the file system version.
- delete {FilesystemName}
- Removes all the structures for the IBM Storage Scale file system from the nodes in the cluster. You must unmount the file system before you delete it. Use the --permanently-damaged flag to force the removal of a file system from the cluster data if the file system disks cannot be marked as available. The operation attribute for this command is LRO. To run this command, you must have the RBAC permission for the delete action on the /scalemgmt/v3/filesystems/{name} resource.
- -p or --permanently-damaged
- Indicates that disks are permanently damaged and file system deletion must proceed regardless.
- disk
- Manages disks in the file system, including adding, deleting, and retrieving disk details. When a
disk is assigned to a file system, a file system descriptor is written on each disk. The file system
descriptor is written at a fixed position on each of the disks in the file system and is used by
GPFS to identify this disk and its place in a file system. The file system descriptor contains file
system specifications and information about the state of the file system. For more information, see Use of disk storage and file structure within a GPFS file system.
- add {FilesystemName}
- Adds a disk to the specified file system. The file system does not need to be mounted, and it
can be in use during the operation. The actual number of disks available in your file system might
be limited by other products you installed, apart from IBM Storage Scale. For more information, see
the individual product documentation. To add disks to a file system, select one of the following methods:
- Create new disks with the scalectl nsd create command.
- Select disks no longer in use by any file system. You can use the scalectl nsd list command to display the disks that are not in use.
Note: Starting with IBM Storage Scale version 5.2.3, specifying disk information by using colon-separated disk descriptors is no longer supported.
To resolve the NO_SPACE error when running this command, do one of the following actions:
- Rebalance the file system.
- Run the fsck operation to deallocate unreferenced blocks.
- Create a pool with larger disks and move data from the old to the new pool.
- --block-group-factor {BlockGroupFactor}
- Specifies the number of file system blocks that are sequentially arranged on the disk to function as a single large block. This option works only if --allow-write-affinity is set for the data pool.
- --disk-usage {localCache | dataAndMetadata | metadataOnly | dataOnly | descOnly}
- Specifies the data type that is stored on the disk. The possible values are localCache, dataAndMetadata, metadataOnly, dataOnly, and descOnly. The default value is dataAndMetadata.
- -g or --failure-group {FailureGroup}
- Specifies the failure group to which the disk is assigned.
- --layout-map {cluster | scatter}
- Specifies the block allocation map type. The possible values are cluster and scatter. The default value is cluster for small clusters and scatter for large clusters.
- -n or --name {DiskName}
- Specifies the disk name. It must match an NSD name.
- --performance-pool
- Specifies that the pool is a performance pool.
- --pool-name {PoolName}
- Specifies the name of the storage pool.
- -t or --thin-disk-type {no | nvme | scsi | auto}
- Specifies the thin disk type that is assigned to the NSD. The possible values are no, nvme, scsi, and auto. The default value is no.
- --verify-disks {yes | no}
- Verifies whether the disks are IBM Storage Scale disks. The possible values are yes and no. The default value is yes.
- --write-affinity {yes | no}
- Specifies the allocation policy to be used by the node writing the data. The possible values are yes and no. The default value is yes.
- --write-affinity-depth {WriteAffinityDepth}
- Specifies the allocation policy depth. This option works only if --allow-write-affinity is set for the data pool.
- batchAdd {FilesystemName}
- Adds one or more disks to the specified file system. This command does not require the file
system to be mounted. The file system can be in use when you run this command. The actual number of
disks available in your file system might be limited by other products you installed, apart from IBM
Storage Scale. For more information, see the individual product documentation. To add disks to a
file system, select one of the following methods:
- Create new disks with the scalectl nsd create command.
- Select disks no longer in use by any file system. You can use the scalectl nsd list command to display the disks that are not in use.
Note: Starting with IBM Storage Scale version 5.2.3, specifying disk information by using colon-separated disk descriptors is no longer supported.
To resolve the NO_SPACE error when running this command, do one of the following actions:
- Rebalance the file system.
- Run the fsck operation to deallocate unreferenced blocks.
- Create a pool with larger disks and move data from the old to the new pool.
- -F or --file {FilePath}
- Specifies the JSON-formatted or GPFS stanza-formatted file path.
- --verify-disks {yes | no}
- Verifies whether the disks are IBM Storage Scale disks. The possible values are yes and no. The default value is yes.
- batchDelete {FilesystemName}
- Deletes one or more disks that are registered to a file system. This command migrates any data that would otherwise be lost to the remaining disks within the file system, removes the disks from the file system disk descriptor, preserves replication, and optionally rebalances the file system after disk removal. This operation performs the following two functions:
- Copy unreplicated data from the disks and remove the references to the disks.
- Rereplicate or rebalance blocks across the remaining disks.
- If the file system is replicated, you can preserve replica copies by using the default preserve-replication option or the rebalance option during disk deletion.
- The minimal-copy option does not preserve replication during disk deletion, as it copies only the minimal amount of data from the disk being deleted to ensure that each block has at least one copy. The file system is then restriped to reestablish replication.
- Previously, to move all data off a disk before deletion, you used the disk suspend API to suspend the disk targeted for deletion, followed by either the restripe (MIGRATE_ALL) or rebalance APIs to migrate data. This step is no longer required. The disk deletion operation now performs this function automatically. If the disk deletion operation fails or is canceled, the affected disks remain in a suspended state. After resolving the issue that caused the failure, you can retry the disk deletion operation.
- -n or --disk-names {DiskName}
- Specifies names of disks to remove from the file system.
- -m or --minimal-copy
- Specifies minimal copying of data to preserve data that is located only on the disk being deleted.
- -c or --node-classes {NodeClass[,NodeClass...]}
- Specifies the list of target IBM Storage Scale node classes.
- --pit-continues-on-error
- Continues repairing the remaining files if errors are encountered in the PIT.
- -r or --preserve-replication
- Preserves replication of all files and metadata. This option is the default behavior.
- --qos-class {QosClass}
- Specifies the quality of service for IO operations to which the processing of this command is assigned.
- -b or --rebalance {lenient-round-robin | strict-round-robin | no_rebalance}
- Specifies the strategy for rebalancing the file system. The possible values are lenient-round-robin, strict-round-robin, and no_rebalance. The default value is no_rebalance.
- -N or --target-nodes {Node[,Node...]}
- Specifies the list of target IBM Storage Scale nodes.
- delete {FilesystemName}
- Deletes an existing file system disk. This command migrates any data that would otherwise be lost to the remaining disks within the file system, removes the disk from the file system disk descriptor, preserves replication, and optionally rebalances the file system after disk removal. This command performs the following two functions:
- Copy unreplicated data from the disks and remove the references to the disks.
- Rereplicate or rebalance blocks across the remaining disks.
- If the file system is replicated, you can preserve replica copies by using the default preserve-replication option or the rebalance option during disk deletion.
- The minimal-copy option does not preserve replication during disk deletion, as it copies only the minimal amount of data from the disk being deleted, to ensure that each block has at least one copy. The file system is then restriped to reestablish replication.
- Previously, to move all data off the disk before deletion, you used the disk suspend API to suspend all disks to be deleted and then used the restripe (MIGRATE_ALL) or rebalance APIs. This step is no longer needed as the disk deletion operation does the same function. If disk deletion fails or is canceled, the disks remain in the suspended state, and you can retry disk deletion after the issue that caused it to stop is resolved.
The operation attributes for this command are LRO, PIT, and cancel operation. To run this command, you must have the RBAC permission for the delete action on the /scalemgmt/v3/filesystems/{filesystem}/disks/{disk_name} resource.
- -m or --minimal-copy
- Specifies minimal copying of data to preserve data that is located only on the disk being deleted.
- -c or --node-classes {NodeClass[,NodeClass...]}
- Specifies the list of target IBM Storage Scale node classes.
- --pit-continues-on-error
- Continues repairing the remaining files if errors are encountered in the PIT.
- -r or --preserve-replication
- Preserves replication of all files and metadata. This option is the default behavior.
- --qos-class {QosClass}
- Specifies the quality of service for IO operations to which the processing of this command is assigned.
- -b or --rebalance {lenient-round-robin | strict-round-robin | no_rebalance}
- Specifies the strategy for rebalancing the file system. The possible values are lenient-round-robin, strict-round-robin, and no_rebalance. The default value is no_rebalance.
- -N or --target-nodes {Node[,Node...]}
- Specifies the list of target IBM Storage Scale nodes.
- get {DiskName}
- Retrieves the current configuration and state of a disk in a file system. This command can be
run against mounted or unmounted file systems. This command displays the following information for the disk:
- Disk name
- Driver type
- Logical sector size
- Failure group
- Indicates whether the disk holds data
- Indicates whether the disk holds metadata
- Status:
- Ready: Normal status.
- Suspended or to be emptied: Indicates that data is scheduled to be migrated off this disk.
- Being emptied: Transitional status while a disk deletion is pending.
- Emptied: Indicates that data has already been migrated off this disk.
- Replacing: Transitional status for the old disk while a replacement is pending.
- Replacement: Transitional status for the new disk while a replacement is pending.
- Availability:
- Up: The disk is available for normal read and write operations.
- Down: Read and write operations cannot be done on this disk.
- Recovering: Intermediate state when a disk is coming up. IBM Storage Scale verifies and corrects data during this process. Write operations can be done while the disk is in this state, but read operations cannot because the data on the recovering disk might be stale until the mmchdisk start command completes.
- Unrecovered: The disk was not successfully brought up.
- Disk ID
- Storage pool: The storage pool to which the disk is assigned.
- Remarks: A tag is displayed if the disk is a file system descriptor replica holder, an excluded disk, or if the disk supports space reclamation.
- --fields {FieldName}
- Restricts output to the specified field names.
- list
- Lists existing disks that are assigned to a file system. This command can be run against mounted
or unmounted file systems. For each disk in the list, this command displays the following information:
- Disk name
- Driver type
- Logical sector size
- Failure group
- Indicates whether the disk holds data
- Indicates whether the disk holds metadata
- Status:
- Ready: Normal status.
- Suspended or to be emptied: Indicates that data is scheduled to be migrated off this disk.
- Being emptied: Transitional status while a disk deletion is pending.
- Emptied: Indicates that data has already been migrated off this disk.
- Replacing: Transitional status for the old disk while a replacement is pending.
- Replacement: Transitional status for the new disk while a replacement is pending.
- Availability:
- Up: The disk is available for normal read and write operations.
- Down: Read and write operations cannot be performed on this disk.
- Recovering: Intermediate state when a disk is coming up. IBM Storage Scale verifies and corrects data during this process. Write operations can be performed while the disk is in this state, but read operations cannot because the data on the recovering disk might be stale until the mmchdisk start command completes.
- Unrecovered: The disk was not successfully brought up.
- Disk ID
- Storage pool: The storage pool to which the disk is assigned.
- Remarks: A tag is displayed if the disk is a file system descriptor replica holder, an excluded disk, or if the disk supports space reclamation.
- --fields {FieldName}
- Restricts output to the specified field names.
- -n or --max-items {MaxItemNumber}
- Specifies the maximum number of items to list at a time.
- -x or --no-pagination
- Disables the subsequent pagination tokens on the client side.
- -p or --page-size {PageSize}
- Specifies the number of items to list per API request.
- -t or --page-token {PageToken}
- Specifies the page token that is received from a previous file system list command. You can provide this page token to retrieve the next page.
- quorum
- Manages file system disk quorum settings. For more information, see
File system descriptor quorum.
- get {FilesystemName}
- Displays the quorum disk information.
- get {FilesystemName}
- Retrieves information about an existing file system. The operation attribute for this command is
fields. To run this command, you must have the RBAC permission for the
get action on the /scalemgmt/v3/filesystems/{name} resource.
- --fields {FieldName}
- Restricts output to the specified field names.
- list
- Lists attributes of multiple file systems that are registered in the cluster. The operation
attributes for this command are fields and pagination. To run this command, you must have the
RBAC permission for the get action on the
/scalemgmt/v3/filesystems/{name} resource.
Note: Listing disks no longer produces a warning for ill-replicated file systems when using the IBM Storage Scale native REST API. Use the mmhealth command to monitor alerts about ill-replicated file systems.
- --all-domains
- Runs the list request against all possible domains that the user has access to.
- --fields {FieldName}
- Restricts output to the specified field names.
- -n or --max-items {MaxNumItemNumber}
- Specifies the maximum number of items to list at a time.
- -x or --no-pagination
- Disables subsequent pagination tokens on the client side.
- -p or --page-size {PageSize}
- Specifies the number of items to list per API request.
- -t or --page-token {PageToken}
- Specifies the page token that is received from a previous file system list call to retrieve the next page.
- manager
- Updates the manager node of an existing file system. To run this command, you must have the RBAC
permission for the update action on the
/scalemgmt/v3/filesystems/{name}/manager resource.
- update
- Updates the manager node of an existing file system.
- -m or --manager {ManagerNodeName}
- Specifies the manager node name.
- mount {FilesystemName}
- Mounts an IBM Storage Scale file system on one or more nodes in the cluster. If no nodes are
specified, the file system is mounted only on the node where the request is issued. The operation
attributes for this command are LRO, target nodes, and remote errors. To run this command, you must
have the RBAC permission for the mount action on the
/scalemgmt/v3/filesystems:mount resource.
- -o or --mount-options {MountOptions}
- Specifies mount options.
- -T or --mount-point {MountPoint}
- Specifies the target mount point for the file system.
- -c or --node-classes {NodeClass[,NodeClass...]}
- Specifies the list of IBM Storage Scale node classes.
- -N or --target-nodes {Node[,Node...]}
- Specifies the list of target IBM Storage Scale nodes.
- mountAll
- Mounts all existing file systems. If no nodes are specified, the file systems are mounted only on
the node against which the request is issued. The operation attributes for this command are LRO,
target nodes, and batch operation. To run this command, you must have the RBAC permission for the
mount action on the /scalemgmt/v3/filesystems:mount resource.
- -o or --mount-options {MountOptions}
- Specifies the mount options.
- -c or --node-classes {NodeClass[,NodeClass...]}
- Specifies the list of IBM Storage Scale node classes.
- -N or --target-nodes {Node[,Node...]}
- Specifies the list of target IBM Storage Scale nodes.
- mountState {FilesystemName}
- Retrieves the mount state of the file system. The mount state includes information about the type of the mount and the nodes where the file system is mounted.
- -C or --cluster-name {ClusterName}
- Specifies the cluster for which mount information is requested.
- --fields {FieldName}
- Restricts output to the specified field names.
- pool
- Gets, lists, and updates the storage pool.
- get {FilesystemName} {PoolName}
- Retrieves information about an existing storage pool.
- --fields {FieldName}
- Restricts output to the specified field names.
- list
- Lists existing storage pools in a file system.
- --fields {FieldName}
- Restricts output to the specified field names.
- -n or --max-items {MaxItemNumber}
- Specifies the maximum number of items to list at a time.
- -x or --no-pagination
- Disables subsequent pagination tokens on the client side.
- -p or --page-size {PageSize}
- Specifies the number of items to list per API request.
- -t or --page-token {PageToken}
- Specifies the page token that is received from a previous file system list command. You can provide this page token to retrieve the next page.
- update {FilesystemName} {PoolName}
- Updates an existing storage pool.
- --block-group-factor {BlockGroupFactor}
- Specifies the number of file system blocks that are laid out sequentially on disk to function as a single large block. This option is effective only if --allow-write-affinity is set for the data pool.
- --write-affinity-depth {WriteAffinityDepth}
- Specifies the allocation policy to use. This option is effective only if --allow-write-affinity is set for the data pool.
- remote
- Adds and manages the remote file system.
- add
- Adds a remote file system. Use this command to make file systems that belong to another IBM Storage Scale cluster known to the nodes of the accessing cluster.
- -A or --auto-mount {yes | no | automount}
- Specifies when to mount the file system. The possible values are automount, yes, and no. The default value is yes.
- -o or --mount-options {MountOptions}
- Specifies mount options for the file system.
- -T or --mount-point {MountPoint}
- Specifies the default mount point for the file system.
- -P or --mount-priority {MountPriority}
- Specifies the mount priority for the file system.
- -n or --name {DeviceName}
- Specifies the device name under which the file system is known in the accessing cluster.
- -C or --owning-cluster {OwningClusterName}
- Specifies the owning cluster to which the file system belongs.
- --remote-name {FilesystemName}
- Specifies the name of the file system as it is known in the owning cluster.
- update
- Updates the information associated with a remote file system.
- -A or --auto-mount {yes | no | automount}
- Specifies when to mount the file system. The possible values are automount, yes, and no. The default value is yes.
- -o or --mount-options {MountOptions}
- Specifies mount options for the file system.
- -T or --mount-point {MountPoint}
- Specifies the default mount point for the file system.
- -P or --mount-priority {MountPriority}
- Specifies the mount priority for the file system.
- -C or --owning-cluster {OwningClusterName}
- Specifies the owning cluster to which the file system belongs.
- --remote-name {FilesystemName}
- Specifies the name of the file system as it is known in the owning cluster.
- delete {FilesystemName}
- Deletes a remote file system.
- -p or --permanently-damaged
- Indicates that the remote file system is permanently damaged and that file system deletion must proceed.
- snapshot
- Creates, deletes, and gets details about snapshots.
- batchDelete {FilesystemName}
- Deletes one or more global snapshots. The operation attribute for this command is LRO. To run
this command, you must have the RBAC permission for the delete action on the
/scalemgmt/v3/filesystems/{filesystem}/snapshots:batchDelete resource.
For more information, see Creating and maintaining snapshots of file systems in the IBM Storage Scale: Administration Guide.
- -I or --input {:snapshot1[,:snapshot2]}
- Specifies the global snapshots to delete.
- -c or --node-classes {NodeClass[,NodeClass...]}
- Specifies the list of IBM Storage Scale node classes.
- --pit-continues-on-error
- Continues removing the remaining files if errors are encountered in the PIT phase that does the user file deletion.
- -N or --target-nodes {Node[,Node...]}
- Specifies the list of target IBM Storage Scale nodes.
- create {FilesystemName}
- Creates a global file system snapshot at a single point in time. The snapshot does not copy
system data or existing snapshots. This function enables backup or mirroring programs to run
concurrently with user updates to ensure a consistent copy of the file system at the time of
creation. Snapshots also serve as an online backup solution, allowing for easy recovery from common
issues, such as accidental file deletion, and enabling comparisons with previous file versions.
A global snapshot is an exact copy of the changed data in the active files and directories of a file system. File system snapshots are read-only and appear in the .snapshots directory that is located in the root directory of the file system. The files and attributes of the file system can be modified only in the active copy.
To delete a global snapshot, use scalectl filesystem snapshot delete or scalectl filesystem snapshot batchDelete.
Because global snapshots are not full, independent copies of the entire file system, they do not provide protection against media failures. For more information about protecting against media failures, see Recoverability considerations in the IBM Storage Scale: Concepts, Planning, and Installation Guide.
For more information, see Creating and maintaining snapshots of file systems in the IBM Storage Scale: Administration Guide.
The operation attribute for this command is LRO. To run this command, you must have the RBAC permission for the create action on the /scalemgmt/v3/filesystems resource.
- --expiration-time {YYYY-MM-DDTHH:MM:SSZ}
- Specifies the expiration time of the snapshot in RFC 3339 format (YYYY-MM-DDTHH:MM:SSZ).
- -n or --name {SnapshotName}
- Specifies the snapshot name.
- delete {FilesystemName} {SnapshotName}
- Deletes a global snapshot. After the delete subcommand is issued, the snapshot is marked for deletion and cannot be recovered.
If the node from which the delete subcommand was issued or the file system manager node fails, the snapshot might not be fully deleted. The list or get subcommand displays these snapshots with a status of DeleteRequired. To complete the deletion, reissue the delete subcommand from another node, or allow the snapshot to be automatically removed by a later scalectl filesystem snapshot delete command. A snapshot in this state cannot be accessed.
Any open files in the snapshot are forcibly closed. The user receives an ESTALE error on the next file access.
If a snapshot contains file clones, you must delete the file clones or split them from their clone parents before deleting the snapshot. Use the mmclone split or mmclone redirect command to split file clones. Use a regular delete (rm) command to delete a file clone. If a snapshot containing a clone parent is deleted, any attempt to read a block that references the missing snapshot returns an error. A policy file can be created to help determine whether a snapshot contains file clones. For more information about file clones and policy files, see File clones and policy files.
The operation attribute for this command is LRO. To run this command, you must have the RBAC permission for the delete action on the /scalemgmt/v3/filesystems/{filesystem}/snapshots/{snapshot_name} resource.
- -c or --node-classes {NodeClass[,NodeClass...]}
- Specifies the list of IBM Storage Scale node classes.
- --pit-continues-on-error
- Continues removing the remaining files if errors are encountered in the PIT phase that does the user file deletion.
- -N or --target-nodes {Node[,Node...]}
- Specifies the list of target IBM Storage Scale nodes.
- get {FilesystemName} {SnapshotName}
- Retrieves information about an existing global snapshot. The operation attribute for this command
is fields. To run this command, you must have the RBAC permission for the
get action on the
/scalemgmt/v3/filesystems/{filesystem}/snapshots/{snapshot_name}
resource.
Note: For snapshots that are created without a defined retention period, the expiration time is the same as the creation time.
- --fast
- Enables a faster method to calculate the storage used by the snapshot, reducing system performance impact.
- --view {basic | data}
- Specifies the view for snapshot contents. The possible values are basic and data. The default value is basic.
- list
- Lists global snapshots in the file system. The operation attributes for this command are LRO,
fields, and pagination. To run this command, you must have the RBAC permission for the
get action on the
/scalemgmt/v3/filesystems/{filesystem}/snapshots resource.
Note: For snapshots that are created without a defined retention period, the expiration time is the same as the creation time.
- --all-domains
- Runs the list request against all possible domains that the user has access to.
- --fast
- Enables a faster method to calculate the storage used by the snapshot, reducing system performance impact.
- -n or --max-items {MaxItemNumber}
- Specifies the maximum number of items to list at a time.
- -x or --no-pagination
- Disables subsequent pagination tokens on the client side.
- -p or --page-size {PageSize}
- Specifies the number of items to list per API request.
- -t or --page-token {PageToken}
- Specifies the page token that is received from a previous file system list command. You can provide this page token to retrieve the next page.
- --view {basic | data}
- Specifies the view for snapshot contents. The possible values are basic and data. The default value is basic.
- listSnapdir {FilesystemName}
- Displays the current snapshot directory settings.
- unmount {FilesystemName}
- Unmounts an existing file system from one or more nodes in the cluster. If no nodes are
specified, the file system is unmounted only from the node where the request was issued. To force
unmount a file system from a cluster, use the --cluster-name option. The
operation attributes for this command are LRO, target nodes, and remote errors. To run this command,
you must have the RBAC permission for the unmount action on the
/scalemgmt/v3/filesystems:unmount resource.
Note: If a file system is unmounted forcefully by using the cluster option, affected nodes might still show the file system as mounted, but data is inaccessible. System administrators must issue a manual unmount command to synchronize the state.
- -C or --cluster-name {ClusterName}
- Specifies the cluster from which to unmount the file system.
- -f or --force
- Forcefully unmounts the file system even if it is in use.
- -c or --node-classes {NodeClass[,NodeClass...]}
- Specifies the list of IBM Storage Scale node classes.
- -N or --target-nodes {Node[,Node...]}
- Specifies the list of target IBM Storage Scale nodes.
- unmountAll
- Unmounts all file systems on one or more nodes in the cluster. If no nodes are specified, the file systems are unmounted only from the node where the request was issued. To force unmount a file system
from a cluster, use the --cluster-name option. The operation attributes for
this command are LRO, target nodes, and batch operation. To run this command, you must have the
RBAC permission for the unmount action on the
/scalemgmt/v3/filesystems:unmount resource.
Note: If a file system is unmounted forcefully by using the cluster option, affected nodes might still show the file system as mounted, but data is inaccessible. System administrators must issue a manual unmount command to synchronize the state.
- -f or --force
- Forcefully unmounts the file system even if it is in use.
- -c or --node-classes {NodeClass[,NodeClass...]}
- Specifies the list of IBM Storage Scale node classes.
- -N or --target-nodes {Node[,Node...]}
- Specifies the list of target IBM Storage Scale nodes.
- update
- Updates the attributes of an IBM Storage Scale file system. You must unmount the file system to update the name, default_mount_point, automatic_mount_option, drive_letter, dmapi_enabled, and maintenance_mode attributes. The mmfsd daemon must be active to update most of the attributes, but some attributes can be updated while the daemon is not active. Use the update_mask field to control which attributes are updated when you run the update command. The operation
attribute for this command is update mask. To run this command, you must have the RBAC permission
for the update action on the
/scalemgmt/v3/filesystems/{name} resource.
- -k or --acl-semantics {posix | nfs4 | all}
- Specifies the authorization types that are supported by the file system. The possible values are posix, nfs4, and all. The default value is all.
- -a or --auto-inode-limit {yes | no}
- Specifies whether to automatically increase the maximum number of inodes per inode space in the file system. The possible values are yes and no. The default value is no.
- -A or --auto-mount {yes | no | automount}
- Specifies when the file system is mounted. The possible values are yes, no, and automount. The default value is yes.
- -r or --default-data-replicas {DefaultDataReplicas}
- Specifies the default number of copies of each data block for a file.
- -m or --default-metadata-replicas {DefaultMetadataReplicas}
- Specifies the default number of copies of inodes, directories, and indirect blocks for a file.
- -T or --default-mount-point {DefaultMountPoint}
- Specifies the default mount point for the file system.
- -t or --drive-letter {DriveLetter}
- Specifies the drive letter to use when the file system is mounted on Windows.
- -z or --enable-dmapi {yes | no}
- Enables or disables DMAPI on the file system. The possible values are yes and no. The default value is no.
- -Q or --enable-quotas {yes | no}
- Specifies whether quotas are activated automatically when the file system is mounted. The possible values are yes and no. The default value is no.
- -E or --exact-mtime {yes | no}
- Specifies whether to report exact mtime values. The possible values are yes and no. The default value is yes.
- --filesetdf {yes | no}
- Specifies whether df reports information at the independent file set level. The possible values are yes and no. The default value is no.
- -c or --flush-on-close {yes | no}
- Specifies whether disk buffers are flushed automatically when files that were opened for write operations are closed. The possible values are yes and no. The default value is no.
- -I or --inode-limit {InodesLimit}
- Specifies the maximum number of files in the file system.
- --inode-segment-manager {yes | no}
- Enables or disables the inode segment manager. The possible values are yes and no. The default value is yes.
- -x or --inodes-prealloc {NumInodes}
- Specifies the number of inodes that the system immediately preallocates.
- -D or --lock-semantics {nfs4 | posix}
- Specifies whether deny-write open lock blocks writes, which is required for nfsv4. The possible values are posix and nfs4. The default value is nfs4.
- -l or --log-replicas {LogReplicas}
- Specifies the number of recovery log replicas.
- -L or --logfile-size {LogFileSize}
- Specifies the size of internal log files.
- -P or --mount-priority {Priority}
- Specifies the mount priority for the file system.
- -n or --name {FilesystemName}
- Specifies the name of the file system.
- --nfs4-owner-write-acl {yes | no}
- Specifies the NFSv4 implicit owner WRITE_ACL permission. The possible values are yes and no. The default value is yes.
- -N or --num-nodes {NumNodes}
- Specifies the estimated number of nodes that can mount the file system in the local cluster and all remote clusters.
- --perfileset-quotas {yes | no}
- Sets the scope of user and group quota limit checks to the individual file set level. The possible values are yes and no. The default value is no.
- -K or --strict-replication {no | whenpossible | always}
- Specifies whether strict replication is enforced. The possible values are no, whenpossible, and always. The default value is whenpossible.
- -S or --suppress-atime {yes | no | relatime}
- Controls how the atime file attribute is updated. The possible values are yes, no, and relatime. The default value is relatime.
- -V or --version {Version}
- Specifies the version of the file system.
- -W or --write-cache-threshold {HAWCThreshold}
- Specifies the maximum length (in bytes) of write requests that are initially buffered in the highly available write cache before being written back to primary storage.
Global flags
- --bearer
- If true, reads the OIDC_TOKEN from the environment and sends it as the authorization bearer header for the request. Use this flag with the --url option.
- --cert {Certificate}
- Specifies the path to the client certificate file for authentication.
- --debug {Filepath[="stderr"]}
- Enables debug logging for the current request. Accepts an absolute file path to store logs by using --debug=<file>. If no file path is specified, logs are sent to stderr.
- -h or --help
- Lists the help for scalectl commands.
- --domain {DomainName}
- Sets the domain for the request. The default value is StorageScaleDomain.
- --insecure-skip-tls-verify
- If true, skips verification of the server certificate. This option makes HTTPS connections insecure.
- --json
- Displays output in JSON format.
- --key {PrivateKeyFile}
- Specifies the path to the client certificate private key file for authentication.
- --url {ip_address}
- Sends the request over HTTPS to the specified endpoint <FQDN/IP>:<port>. For an IPv6 address, use square brackets, for example, [IPv6]:<port>. If no port is specified, 46443 is used by default. An example that combines several global flags follows this list.
- --version
- Displays the scalectl build information.
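For example, to authenticate with a client certificate and send a request to a specific endpoint, the global flags can be combined with any subcommand. The following command is an illustrative sketch: the endpoint address, certificate path, and key path are placeholder values, and it is assumed that the global flags can be placed after the subcommand.
scalectl filesystem list --fields name --json --url 203.0.113.10:46443 --cert /etc/scalectl/client.pem --key /etc/scalectl/client.key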
Exit status
- 0
- Successful completion.
- nonzero
- A failure occurred.
Security
You must have the specific role-based access control (RBAC) permission to run the command. For more information, see Role-based access control.
Examples
- To list file systems, issue the following command:
scalectl fs list --fields name,default_mount_point
A sample output is as follows:
attribute | value ================================== Filesystem name | fs0 Default Mount Point | /gpfs/fs0 attribute | value ================================== Filesystem name | fs1 Default Mount Point | /gpfs/fs1 attribute | value ================================== Filesystem name | fs2 Default Mount Point | /gpfs/fs2
- To list file systems with pagination and page token, issue the following command:
scalectl fs list --fields name --max-items 1
A sample output is as follows:
attribute | value ========================== Filesystem name | fs0
scalectl fs list --fields name -p 1 --no-pagination
A sample output is as follows:
attribute | value ========================== Filesystem name | fs0 NEXT PAGE TOKEN: bGFuY2Vsb3QtNDEub3BlbnN0YWNrbG9jYWw6ZnMx
scalectl fs list --fields name -p 1 --no-pagination --page-token bGFuY2Vsb3QtNDEub3BlbnN0YWNrbG9jYWw6ZnMx
A sample output is as follows:
attribute | value ========================== Filesystem name | fs1 NEXT PAGE TOKEN: bGFuY2Vsb3QtNDEub3BlbnN0YWNrbG9jYWw6ZnMy
scalectl fs list --fields name -p 1 --no-pagination --page-token bGFuY2Vsb3QtNDEub3BlbnN0YWNrbG9jYWw6ZnMy
A sample output is as follows:
attribute | value ========================== Filesystem name | fs2 NEXT PAGE TOKEN: <END OF LIST>
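- To list file systems across all domains that you can access, you can combine the documented --all-domains and --fields options. The following command is an illustrative sketch, and no sample output is shown:
scalectl fs list --all-domains --fields name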
- To create a file system, issue the following command:
scalectl fs create --name fs2 --disks disk5 --auto-mount no --verify-disks yes
A sample output is as follows:
Disks up to size 274.49 GB can be added to storage pool system. Creating Inode File Creating Allocation Maps Creating Log Files 3 % complete on Thu Oct 31 21:07:45 2024 100 % complete on Thu Oct 31 21:07:48 2024 Clearing Inode Allocation Map Clearing Block Allocation Map Formatting Allocation Map for storage pool system 95 % complete on Thu Oct 31 21:07:54 2024 100 % complete on Thu Oct 31 21:07:54 2024 failed to create default mount point location at '/gpfs/fs2'. Make sure the directory is created for mounting the filesystem Successfully created filesystem 'fs2'
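- To create a file system with two data and metadata replicas and a specific default mount point, the documented replication and mount-point options can be combined. The following command is an illustrative sketch; the file system and disk names are placeholder values, and no sample output is shown:
scalectl fs create --name fs3 --disks "disk6;disk7" --default-data-replicas 2 --default-metadata-replicas 2 --max-data-replicas 2 --max-metadata-replicas 2 --default-mount-point /gpfs/fs3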
- To update a file system, issue the following command:
scalectl fs update fs2 -A yes
A sample output is as follows:
Filesystem attributes successfully updated.
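- To update several attributes in one call, for example the inode limit and the per-fileset quota scope, the documented options can be combined. The following command is an illustrative sketch; the values are placeholders, and no sample output is shown:
scalectl fs update fs2 --inode-limit 2000000 --perfileset-quotas yes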
- To delete a file system, issue the following command:
scalectl fs delete fs2
A sample output is as follows:
All data on the following disks of fs2 will be destroyed: disk5 Successfully deleted filesystem 'fs2'
- To mount the specified file system, issue the following command:
scalectl fs mount fs0
A sample output is as follows:
Successfully mounted filesystem fs0
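- To mount a file system on specific nodes or node classes, use the documented -N or -c options. The following commands are illustrative sketches; the node and node class names are placeholder values, and no sample output is shown:
scalectl fs mount fs0 -N node1,node2
scalectl fs mount fs0 -c nsdNodes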
- To mount all file systems, issue the following command:
scalectl fs mountAll -N all
A sample output is as follows:
Job Id MTpjZDFjY2MwNy1lOGM2LTRjMDYtOWUxOC0yOThiNGU5N2U0NjM= Operation MountAllFilesystemsRequest Status Done Request Time 2024-10-31T21:10:06Z Last Update Time 2024-10-31T21:10:06Z Completion Time 2024-10-31T21:10:06Z filesystem | node | ============================================================================================================================================================== fs0 | lancelot-41.openstacklocal | succeeded fs1 | lancelot-41.openstacklocal | succeeded fs2 | lancelot-41.openstacklocal | succeeded fs0 | lancelot-42.openstacklocal | connection error: desc = "transport: Error while dialing: dial tcp 10.0.100.25:50052: connect: connection refused" fs1 | lancelot-42.openstacklocal | connection error: desc = "transport: Error while dialing: dial tcp 10.0.100.25:50052: connect: connection refused" fs2 | lancelot-42.openstacklocal | connection error: desc = "transport: Error while dialing: dial tcp 10.0.100.25:50052: connect: connection refused"
- To unmount the specified file system, issue the following command:
scalectl fs unmount fs0
A sample output is as follows:
Successfully unmounted filesystem fs0
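- To force the unmount of a file system from an accessing cluster, combine the documented --cluster-name and --force options. The following command is an illustrative sketch; the cluster name is a placeholder value, and no sample output is shown:
scalectl fs unmount fs0 -C accessingCluster1 -f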
- To unmount all file systems, issue the following command:
scalectl fs unmountAll -N all
A sample output is as follows:
Job Id MToyM2RhYTYxYi1lY2ZlLTQ5NGYtYmUxNS1kZDBmOWM2NmMwNmY= Operation UnmountAllFilesystemsRequest Status Done Request Time 2024-10-31T21:17:38Z Last Update Time 2024-10-31T21:17:38Z Completion Time 2024-10-31T21:17:38Z filesystem | node | ============================================================================================================================================================== fs0 | lancelot-41.openstacklocal | succeeded fs1 | lancelot-41.openstacklocal | succeeded fs2 | lancelot-41.openstacklocal | succeeded fs0 | lancelot-42.openstacklocal | connection error: desc = "transport: Error while dialing: dial tcp 10.0.100.25:50052: connect: connection refused" fs1 | lancelot-42.openstacklocal | connection error: desc = "transport: Error while dialing: dial tcp 10.0.100.25:50052: connect: connection refused" fs2 | lancelot-42.openstacklocal | connection error: desc = "transport: Error while dialing: dial tcp 10.0.100.25:50052: connect: connection refused"
- To get the mount state of a file system, issue the following command:
scalectl fs mountState fs0
A sample output is as follows:
Mounted true Manager Node lancelot-41 Owning Cluster lancelot-41.openstacklocal Nodes with FS Mounted 2 Mounts: local device name | real device name | Node IP | Node Name | Cluster Name | Mount Mode | | 10.0.100.70 | lancelot-41 | lancelot-41.openstacklocal | INTERNAL_MOUNT | | 10.0.100.25 | lancelot-42 | lancelot-41.openstacklocal | RW_MOUNT
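- To retrieve the mount state of a file system for a specific cluster, add the documented -C option. The following command is an illustrative sketch; the cluster name is a placeholder value, and no sample output is shown:
scalectl fs mountState fs0 -C owningCluster1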
- To get the details of a specified disk in a file system, issue the following command:
scalectl fs disks get fs0 disk1
A sample output is as follows:
attribute | value ========================================= Disk name | disk1 File System name | fs0 Failure group | -1 Remarks | desc Thin Disk | no Driver Type | nsd Sector Size | 512 has metadata | true has data | true Status | ready Availability | up Id | 1 Disk Usage | dataAndMetadata Storage Pool | system Descriptor Replica | true Excluded | false Auto Resume | false Size (MiB) | 30,720 Uid | 4664000A:67193525
- To list all disks of a file system, issue the following command:
scalectl fs disks list fs0
A sample output is as follows:
attribute | value ========================================= Disk name | disk1 File System name | fs0 Failure group | -1 Remarks | desc Thin Disk | no Driver Type | nsd Sector Size | 512 has metadata | true has data | true Status | ready Availability | up Id | 1 Disk Usage | dataAndMetadata Storage Pool | system Descriptor Replica | true Excluded | false Auto Resume | false Size (MiB) | 30,720 Uid | 4664000A:67193525 attribute | value ========================================= Disk name | disk2 File System name | fs0 Failure group | -1 Remarks | desc Thin Disk | no Driver Type | nsd Sector Size | 512 has metadata | false has data | true Status | ready Availability | up Id | 2 Disk Usage | dataOnly Storage Pool | datapool1 Descriptor Replica | true Excluded | false Auto Resume | false Size (MiB) | 30,720 Uid | 4664000A:671AB5EC attribute | value ========================================= Disk name | disk3 File System name | fs0 Failure group | -1 Remarks | desc Thin Disk | no Driver Type | nsd Sector Size | 512 has metadata | false has data | true Status | ready Availability | up Id | 3 Disk Usage | dataOnly Storage Pool | datapool2 Descriptor Replica | true Excluded | false Auto Resume | false Size (MiB) | 30,720 Uid | 4664000A:671AB602
- To add a disk to a file system, issue the following command:
scalectl fs disks add fs0 --name disk5 --pool-name datapool1 --disk-usage dataOnly
A sample output is as follows:
Adding disk to storage pool datapool1. Storage pool attributes will not be updated The following disks of fs0 will be formatted on node lancelot-41: disk5: size 30720 MB Extending Allocation Map Checking Allocation Map for storage pool datapool1 92 % complete on Fri Nov 1 16:14:31 2024 100 % complete on Fri Nov 1 16:14:31 2024 Completed adding disks to file system fs0. Addition of disk 'disk5' to filesystem 'fs0' is complete
- To add disks to the system storage pool of a file system, issue the following command:
scalectl fs disks batchAdd fs1 -F ~/stanzas/disks/disk2.stanza
A sample output is as follows:
The following disks of fs1 will be formatted on node lancelot-41: disk2: size 30720 MB Extending Allocation Map Checking Allocation Map for storage pool system 19 % complete on Wed Feb 5 15:46:23 2025 36 % complete on Wed Feb 5 15:46:28 2025 94 % complete on Wed Feb 5 15:46:33 2025 100 % complete on Wed Feb 5 15:46:34 2025 Completed adding disks to file system fs1. Job Id MTpkNzllMWQ0Yi0yMDY3LTQxMjYtOTBhOC0yNzQxZmExM2I4NGI= Operation BatchAddFilesystemDisksRequest Status Done Request Time 2025-02-05T14:46:17Z Last Update Time 2025-02-05T14:46:34Z Completion Time 2025-02-05T14:46:34Z Disk Name | Status ===================== disk2 | Added
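- The -F option of the batchAdd subcommand accepts a JSON-formatted or GPFS stanza-formatted file. As an illustration only, a stanza file such as the disk2.stanza file in the previous example typically contains NSD stanzas similar to the following sketch; the disk name, usage, failure group, and pool are placeholder values, and the exact set of supported stanza keywords depends on your release:
%nsd: nsd=disk2
  usage=dataOnly
  failureGroup=1
  pool=datapool1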
- To delete a disk from a file system, issue the following command:
scalectl fs disks delete fs0 disk5
A sample output is as follows:
Scanning file system metadata, phase 1: inode0 files 100 % complete on Fri Nov 1 16:18:12 2024 Scan completed successfully. Scanning file system metadata, phase 2: block allocation maps 100 % complete on Fri Nov 1 16:18:12 2024 Scanning file system metadata for datapool1 storage pool 100 % complete on Fri Nov 1 16:18:12 2024 Scanning file system metadata for datapool2 storage pool 100 % complete on Fri Nov 1 16:18:12 2024 Scan completed successfully. Scanning file system metadata, phase 3: reserved thin-provisioning Scan completed successfully. Scanning file system metadata, phase 4: inode allocation map 100 % complete on Fri Nov 1 16:18:12 2024 Scan completed successfully. Scanning file system metadata, phase 5: fileset metadata files 100 % complete on Fri Nov 1 16:18:12 2024 Scan completed successfully. Scanning user file metadata ... 64.60 % complete on Fri Nov 1 16:18:32 2024 ( 3084288 inodes with total 13494 MB data processed) 100.00 % complete on Fri Nov 1 16:18:42 2024 ( 4979712 inodes with total 20898 MB data processed) Scan completed successfully. Checking Allocation Map for storage pool system 75 % complete on Fri Nov 1 16:20:32 2024 100 % complete on Fri Nov 1 16:20:36 2024 Checking Allocation Map for storage pool datapool1 91 % complete on Fri Nov 1 16:20:41 2024 100 % complete on Fri Nov 1 16:20:42 2024 Checking Allocation Map for storage pool datapool2 89 % complete on Fri Nov 1 16:20:47 2024 100 % complete on Fri Nov 1 16:20:48 2024 tsdeldisk completed. Deletion of disk 'disk5' in filesystem 'fs0' is complete
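- To delete more than one disk from a file system in a single operation while preserving replication, the batchDelete subcommand with the documented --disk-names and --preserve-replication options can be used. The following command is an illustrative sketch; the disk names are placeholder values, a comma-separated list is assumed for --disk-names, and no sample output is shown:
scalectl fs disks batchDelete fs0 -n disk5,disk6 -r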
- To display the quorum disk information for a file system, issue the following command:
scalectl fs disks quorum get fs0
A sample output is as follows:
Quorum Disks | Read Quorum Value Old | Write Quorum Value Old | Read Quorum Value New | Write Quorum Value New | Migration In Progress ===================================================================================================================================== 3 | 2 | 2 | 2 | 2 | false
- To get the details of a specified storage pool in a file system, issue the following command:
scalectl fs pools get fs0 datapool1
A sample output is as follows:
Name | Id | Filesystem | Block Size | Layout Map | Write Affinity | Write Affinity Depth | Block Group Factor | Max Disk Size | Performance Pool =================================================================================================================================================== datapool1 | 65537 | fs0 | 4194304 | cluster | no | 0 | 1 | 294733742080 | false
- To update the specified storage pool in a file system, issue the following command:
scalectl fs pools update fs0 datapool1 --write-affinity-depth 2
A sample output is as follows:
Name | Id | Filesystem | Block Size | Layout Map | Write Affinity | Write Affinity Depth | Block Group Factor | Max Disk Size | Performance Pool =================================================================================================================================================== datapool1 | 65537 | fs0 | 4194304 | cluster | no | 2 | 1 | 294733742080 | false
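- To change the block group factor of a storage pool, use the documented --block-group-factor option. The following command is an illustrative sketch; the value is a placeholder and takes effect only if write affinity is allowed for the pool:
scalectl fs pools update fs0 datapool1 --block-group-factor 2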
- To add a remote file system, issue the following command:
scalectl filesystem remote add --name remotefs1 -T /gpfs/remotefs1 --remote-name fs1 --owning-cluster owningCluster1
A sample output is as follows:
Successfully added remote file system 'remotefs1'.
- To update a remote file system, issue the following command:
scalectl filesystem remote update remotefs1 -A no -T /gpfs/newmount
A sample output is as follows:
Successfully updated remote file system 'remotefs1'.
- To delete a remote file system, issue the following command:
scalectl filesystem remote delete remotefs1
A sample output is as follows:
Successfully deleted remote file system 'remotefs1'.
- To list the current snapshot directory settings, issue the following command:
scalectl filesystem snapshot listSnapdir fs1
A sample output is as follows:
Fileset snapshot directory for "fs1" is ".snapshots" (all directories) Global snapshot directory for "fs1" is ".snapshots" in all filesets
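- To create a global snapshot with a name and an expiration time, and later delete it, the documented snapshot create and delete subcommands can be used. The following commands are illustrative sketches; the snapshot name and timestamp are placeholder values, and no sample output is shown:
scalectl filesystem snapshot create fs1 -n snap1 --expiration-time 2025-12-31T23:59:59Z
scalectl filesystem snapshot delete fs1 snap1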
See also
- scalectl cluster command
- scalectl node command
- scalectl nsd command
- scalectl fileset command
- scalectl filesystem command
- Long-running operations
- Views and fields options
- Pagination
- Update masks in IBM Storage Scale native REST API
- Target nodes and node classes field in IBM Storage Scale native REST API
Location
/usr/lpp/mmfs/bin