mmafmctl command
This command performs various operations on AFM and AFM DR filesets and reports information about them. It is recommended that you read the AFM and AFM Disaster Recovery chapters of the IBM Storage Scale: Administration Guide along with this manual for a detailed description of the functions.
Synopsis
To use the AFM DR functions correctly, use all commands listed in this chapter in accordance with the steps described in the AFM-based DR chapter in IBM Storage Scale: Concepts, Planning, and Installation Guide.
In this manual, AFM read-only mode is referred to as RO, single-writer mode as SW, independent-writer mode as IW, and local-update mode as LU.
mmafmctl Device {resync | expire | unexpire | stop | start |
resumeRequeued} -j FilesetName
or
mmafmctl Device checkUncached -j FilesetName
or
mmafmctl Device checkDirty -j FilesetName [--dirty-data [-Y]]
or
mmafmctl Device {setlocal | resetlocal} -j FilesetName [--path FilePath]
or
mmafmctl Device getstate -j FilesetName [--read-stats | --write-stats]
or
mmafmctl Device flushPending [-j FilesetName [--list-file ListFile]]
[-s LocalWorkDirectory]
or
mmafmctl Device failover -j FilesetName
--new-target NewAfmTarget [--target-only] [-s LocalWorkDirectory]
or
mmafmctl Device prefetch -j FilesetName [-s LocalWorkDirectory]
[--retry-failed-file-list | --enable-failed-file-list]
[{--directory LocalDirectoryPath [--skip-dir-list-file SkipDirListfile] |
--dir-list-file DirListFile [--policy]} [--nosubdirs]]
[{--list-file ListFile | --home-list-file HomeListFile} [--policy]]
[--home-inode-file PolicyListFile] [--outband]
[--home-fs-path HomeFileSystemPath] [--delete]
[--metadata-only] [--gateway Node] [--empty-ptrash]
[--readdir-only] [--force] [--prefetch-threads nThreads]
or
mmafmctl Device getOutbandList -j FilesetName --path Path
or
mmafmctl Device evict -j FilesetName
[--safe-limit SafeLimit] [--order {LRU | SIZE}]
[--log-file LogFile] [--filter Attribute=Value ...]
[--list-file Listfile] [--file FilePath]
or
mmafmctl Device failback -j FilesetName {{--start --failover-time Time} | --stop}
[-s LocalWorkDirectory]
or
mmafmctl Device failoverToSecondary -j FilesetName [--norestore | --restore]
or
mmafmctl Device convertToPrimary -j FilesetName
[--afmtarget Target {--inband | --secondary-snapname SnapshotName}]
[--check-metadata | --nocheck-metadata ] [--rpo RPO] [-s LocalWorkDirectory]
or
mmafmctl Device convertToSecondary -j FilesetName --primaryid PrimaryId [--force]
or
mmafmctl Device changeSecondary -j FilesetName --new-target NewAfmTarget
[--target-only | --inband] [-s LocalWorkDirectory]
or
mmafmctl Device replacePrimary -j FilesetName
or
mmafmctl Device failbackToPrimary -j FilesetName {--start | --stop} [--force]
or
mmafmctl Device {applyUpdates | getPrimaryId} -j FilesetName
Availability
Available on all IBM Storage Scale editions. Available on AIX® and Linux®.
Description
The usage of the options of this command for different operations on both AFM (RO/SW/IW/LU) filesets and AFM primary/secondary filesets is explained with examples.
The file system must be mounted on all gateway nodes for mmafmctl functions to work.
Parameters
- Device
- Specifies the device name of the file system.
- -j FilesetName
- Specifies the fileset name.
- -Y
- Displays the command output in a parseable format with a colon (:) as a field delimiter. Each column is described by a header.
Note: Fields that have a colon (:) are encoded to prevent confusion. For the set of characters that might be encoded, see the command documentation of mmclidecode. Use the mmclidecode command to decode the field.
- -s LocalWorkDirectory
- Specifies the temporary working directory.
-
This section describes:
mmafmctl Device {resync | expire | unexpire | stop | start | resumeRequeued} -j FilesetName
- resync
- This option is available only for SW cache. In case of inadvertent changes made at home or cloud object storage of an SW fileset, such as the deletion of a file or a change of data in a file, the administrator can correct the home or cloud object storage by sending all contents from cache to home or cloud object storage using this option. A limitation of this option is that files renamed at home or cloud object storage might not be fixed by resync. Using resync requires the cache to be in either the NeedsResync or Active state.
- expire | unexpire
- This option is available only for RO cache to manually expire or unexpire. When an RO
cache is disconnected, the cached contents are still accessible for the user. However, the
administrator can define a timeout from home or cloud object storage
beyond which access to the cached contents becomes stale. Such an event would occur automatically
after disconnection (when cached contents are no longer accessible) and is called expiration;
the cache is said to be expired. This option is used to manually force the cache state to
'Expired'. To expire a fileset manually, the afmExpirationTimeout must be set
on the fileset.
When the home or cloud object storage comes back or reconnects, the cache contents become automatically accessible again and the cache is said to un-expire. The unexpire option is used to force cache to come out of the 'Expired' state.
The manual expiration and un-expiration can be forced on a cache even when the home or cloud object storage is in a connected state. If a cache is expired manually, the same cache must be unexpired manually.
- stop
- Run on an AFM or AFM DR fileset to stop replication. You can use this command during maintenance or downtime, when the I/O activity on the filesets is stopped, or is minimal. After the fileset moves to a 'Stopped' state, changes or modifications to the fileset are not sent to the gateway node for queuing.
- start
- Run on a 'Stopped' AFM or AFM DR fileset to start sending updates to the gateway node and resume replication on the fileset.
- resumeRequeued
- This option is applicable for SW/IW and primary filesets. If operations in the queue were requeued due to errors at home or cloud object storage, the administrator should correct those errors and then run this option to retry the requeued operations.
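For example, a hypothetical sequence that stops replication for maintenance, restarts it, and then retries requeued operations (fs1 and sw1 are illustrative file system and fileset names):
# mmafmctl fs1 stop -j sw1
# mmafmctl fs1 start -j sw1
# mmafmctl fs1 resumeRequeued -j sw1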
-
This section describes:
mmafmctl Device getstate -j FilesetName [--read-stats | --write-stats]
- getstate
- This option is applicable for all AFM (RO/SW/IW/LU) and AFM primary filesets. It displays the
status of the fileset in the following fields:
- Fileset Name
- The name of the fileset.
- Fileset Target
- The host server and the exported path on it.
- Gateway Node
- Primary gateway of the fileset. This gateway node is handling requests for this fileset.
- Queue Length
- Current length of the queue on the primary gateway.
- Queue numExec
- Number of operations executed at home or cloud object storage since the fileset was last in the Active state.
- Cache State
-
- Cache states applicable for all AFM RO/SW/IW/LU filesets: Active, Inactive, Dirty, Disconnected, Stopped, Unmounted
- Cache states applicable for RO filesets: Expired
- Cache states applicable for SW and IW filesets: Recovery, FlushOnly, QueueOnly, Dropped, NeedsResync, FailoverInProgress
- Cache states applicable for IW filesets: FailbackInProgress, FailbackCompleted, NeedsFailback
- Cache states applicable for AFM primary filesets: PrimInitInProg, PrimInitFail, Active, Inactive, Dirty, Disconnected, Unmounted, FailbackInProg, Recovery, FlushOnly, QueueOnly, Dropped, Stopped, NeedsResync
For more information about all cache states, see the AFM and AFM-based DR chapters in the IBM Storage Scale: Concepts, Planning, and Installation Guide.
- --write-stats
- Displays the following synchronization information at a given point of time:
- Total Written Data
- Displays the total data that is synchronized from an AFM cache fileset to the home or cloud object storage. The data is shown in bytes.
- N/w Throughput
- Displays the network throughput available at that point of time to the AFM for the data synchronization between the cache and the home or cloud object storage. The network throughput is shown in kilobytes per second.
- Total Pending Data to Write
- Displays the remaining data that needs to be synchronized to the home or cloud object storage site at a given point of time. The data is shown in bytes.
- Estimated Completion time
- Displays the approximate time that is needed to complete the on-going data synchronization with the home or cloud object storage at the given point of time. This time might vary because of other factors. The time is shown in seconds.
- --read-stats
- Displays the following synchronization information at a given point of time:
- Total Read Data
- Displays the total data that is read from the home or cloud object storage to the AFM cache fileset. The data is shown in bytes.
- N/w Throughput
- Displays the network throughput available at the point of time to the AFM for the data synchronization between the cache and the home or cloud object storage. The network throughput is shown in kilobytes per second.
- Total Pending Data to Read
- Displays the remaining data that needs to be synchronized to the home or cloud object storage site at a given point of time. The data is shown in bytes.
- Estimated Completion time
- Displays the approximate time that is needed to complete the ongoing data synchronization with the home or cloud object storage at the given point of time. This time might vary because of other factors. The time is shown in seconds.
Note:
- If the afmFastCreate parameter value is set to yes or AFM to cloud object storage is enabled on a fileset, the --read-stats and --write-stats options show information such as N/w Throughput, Total Pending Data, and Estimated Completion time only during the recovery or resync operation. During regular operations, the --read-stats or --write-stats option shows only Total Written Data.
- During a recovery event, it might take some time for AFM to collect recovery data and queue operations to the AFM gateway node. The synchronization status is not shown until data is queued to the AFM gateway and the write operation is synchronized to the home or cloud object storage.
- This section describes:
mmafmctl Device checkUncached -j FilesetName
- checkUncached
-
This option is available for AFM RO, SW, IW, and LU modes, and AFM to cloud object storage filesets. This option generates two separate files that contain a list of uncached files and uncached directories. The list file and the directories file contain data that is not brought to the cache from the home or cloud object storage. After data is brought to the cache, AFM removes the uncached flag from data.
A files list contains entries of all the files that exist in the AFM fileset. These files are not yet synchronized from the home or cloud object storage to the cache, that is, data blocks are not downloaded in the cache.
A directories list contains entries of all the directories that exist in the AFM fileset. These directories are not yet synchronized from the home or cloud object storage to the cache, that is, data blocks are not downloaded in the cache.
The files list can be used to prefetch files or directories from the home or cloud object storage to the cache.
- This section describes:
mmafmctl Device checkDirty -j FilesetName [--dirty-data [-Y]]
- checkDirty
-
This option is available for AFM SW, IW, LU, and primary modes, and AFM to cloud object storage filesets. This option generates two separate files that contain a list of dirty files and dirty directories, which are not yet synchronized from the cache to the home or cloud object storage and are marked as dirty on the cache. After data is synchronized to the home or cloud object storage, AFM removes the dirty flag.
A dirty file list contains entries of all the files that exist in the AFM fileset. These files are not yet synchronized from the cache to the home or cloud object storage, that is, data blocks are not uploaded to the home or cloud object storage.
A dirty directories list file contains entries of all the directories that exist in the specific AFM fileset. These directories are not yet fully synchronized from the cache to the home, that is, data blocks are not uploaded to the home.
- --dirty-data
- The --dirty-data parameter can be used along with the checkDirty option. When used with the checkDirty option, it does not print the list of dirty files and directories. Instead, it prints the total amount of dirty data (in bytes) inside the AFM fileset that is not yet synchronized with the home. This parameter runs a query to find all the dirty data under the fileset linked path. The -Y parameter prints the data in a colon (:) separated format.
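For example, a hypothetical invocation that reports the total dirty data in a machine-parseable form (fs1 and sw1 are illustrative names):
# mmafmctl fs1 checkDirty -j sw1 --dirty-data -Y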
-
This section describes:
mmafmctl Device flushPending [-j FilesetName [--list-file ListFile]] [-s LocalWorkDirectory]
- flushPending
- Flushes all point-in-time pending messages in the normal queue on the fileset to home or
cloud object storage. Requeued messages and messages in the priority queue for
the fileset are not flushed by this command.
When --list-file ListFile is specified, the messages pending on the files listed in the list file are flushed to home or cloud object storage. ListFile contains a list of files that you want to flush, one file per line. All files must have absolute path names, specified from the fileset linked path. If the list of files has filenames with special characters, use a policy to generate the listfile. Edit to remove all entries other than the filenames. FlushPending is applicable for SW/IW and primary filesets.
-
This section describes:
mmafmctl Device failover -j FilesetName --new-target NewAfmTarget [--target-only] [-s LocalWorkDirectory]
This option is applicable only for SW/IW filesets. This option pushes all the data from cache to home or cloud object storage. It should be used only when home or cloud object storage is completely lost due to a disaster and a new home or cloud object storage is being set up. Failover often takes a long time to complete; status can be checked by using the afmManualResyncComplete callback or the mmafmctl getstate command.
RO and LU modes do not support failover; for these modes, the failover option can be used only with the --target-only option to change the target. To change the target from an IPv4 address to an IPv6 address, use failover with the --target-only option.
- --new-target NewAfmTarget
- Specifies a new home server and path, replacing the home server and
path originally set by the afmTarget parameter of the
mmcrfileset command. Specified in either of the following
formats:
nfs://{Host|Map}/Target_Path
or
gpfs://[Map]/Target_Path
where:
- nfs:// or gpfs://
- Specifies the transport protocol.
- Host|Map
-
- Host
- Specifies the server domain name system (DNS) name or IP address.
- Map
- Specifies the export map name. Information about Mapping is contained in the AFM Overview > Parallel data transfers section.
See the following examples:
- An example of using the nfs:// protocol with a map name:
mmcrfileset fs3 singleWriter2 -p afmtarget=nfs://<map1>/gpfs/fs1/target1 -p afmmode=sw --inode-space new
- An example of using the nfs:// protocol with a host name:
mmcrfileset fs3 singleWriter2 -p afmtarget=nfs://<hostname>/gpfs/fs1/target1 -p afmmode=sw --inode-space new
- An example of using the gpfs:// protocol without a map name:
mmcrfileset fs3 singleWriter1 -p afmtarget=gpfs:///gpfs/thefs1/target1 -p afmmode=sw --inode-space new
Note: If you are not specifying the map name, a '/' is still needed to indicate the path.
- An example of using the gpfs:// protocol with a map name:
mmcrfileset fs3 singleWriter1 -p afmtarget=gpfs://<map>/gpfs/thefs1/target1 -p afmmode=sw --inode-space new
- --target-only
- This option is used to change the mount path or IP address in the target path. The new NFS server must be in the same home cluster and must have the same architecture as the existing NFS server in the target path. This option must not be used to change the target location or protocol. You must ensure that the new NFS server exports the same target path that has the same FSID.
-
This section describes:
mmafmctl Device prefetch -j FilesetName [-s LocalWorkDirectory] [--retry-failed-file-list|--enable-failed-file-list] [ {--directory LocalDirectoryPath [--skip-dir-list-file SkipDirListfile] | --dir-list-file DirListfile [--policy]} [--nosubdirs]] [{--list-file ListFile | --home-list-file HomeListFile} [--policy]] [--home-inode-file PolicyListFile] [--outband] [--home-fs-path HomeFilesystemPath] [--delete] [--metadata-only] [--gateway Node] [--empty-ptrash] [--readdir-only] [--force] [--prefetch-threads nThreads]
This option is used for prefetching file contents from home or cloud object storage before the application requests the contents. This reduces the network delay when the application performs data transfers on files and data that are not in cache. You can also use this option to move files over the WAN when WAN usage is low. These files might be files that would otherwise be accessed during high WAN usage. Thus, you can use this option for better WAN management.
Prefetch is an asynchronous process and you can use the fileset when prefetch is in progress. You can monitor Prefetch using the afmPrepopEnd event. AFM can prefetch the data using the mmafmctl prefetch command (which specifies a list of files to prefetch). Prefetch always pulls the complete file contents from home or cloud object storage and AFM automatically sets a file as cached when it is completely prefetched.
You can use the prefetch option to:
- populate metadata
- populate data
- view prefetch statistics
- --retry-failed-file-list
- Allows retrying prefetch of files that failed in the last prefetch operation. The list of files to retry is obtained from .afm/.prefetchedfailed.list under the fileset.
Note: To use this option, you must enable generating a list of failed files. Add --enable-failed-file-list to the command first.
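For example, a hypothetical pair of invocations, where the first run records failures and the second retries them (fs1, ro, and the list-file path are illustrative):
# mmafmctl fs1 prefetch -j ro --enable-failed-file-list --list-file /tmp/file-list
# mmafmctl fs1 prefetch -j ro --retry-failed-file-list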
- --metadata-only
- Prefetches only the metadata and not the actual data. This is useful in migration scenarios. This option requires the list of files whose metadata you want. Hence it must be combined with a list file option.
- --enable-failed-file-list
- Turns on generating a list of files which failed during prefetch operation at the gateway node. The list of files is saved as .afm/.prefetchedfailed.list under the fileset. Failures that occur during processing are not logged in .afm/.prefetchedfailed.list. If you observe any errors during processing (before queuing), you might need to correct the errors and rerun prefetch.
- --policy
- Specifies that the list-file or home-list-file is generated by using a GPFS policy, by which sequences like '\' or '\n' are escaped as '\\' and '\\n'. If this option is specified, the input file list is treated as already escaped. The sequences are unescaped before queuing for the prefetch operation.
Note: This option can be used only if you are specifying list-file or home-list-file.
- --directory LocalDirectoryPath
- Specifies path to the local directory from which you want to prefetch files. A list of all files in this directory and all its sub-directories is generated, and queued for prefetch.
- --skip-dir-list-file SkipDirListfile
- Specifies a path to a file that contains unique entries of directories in an AFM fileset that need to be skipped during prefetch. This option can be used only with the --directory option. When this option is enabled, the specified directories in an AFM fileset are not prefetched.
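For example, a hypothetical invocation that prefetches a directory tree while skipping the directories listed in a file (all paths and names are illustrative):
# mmafmctl fs1 prefetch -j fileset1 --directory /gpfs/fs1/fileset1/dir1 --skip-dir-list-file /tmp/skipdirs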
- --dir-list-file DirListfile
- Specifies the path to a file that contains unique entries of directories under the AFM fileset that need to be prefetched. This option enables prefetching of individual directories under an AFM fileset. AFM generates a list of all files and sub-directories inside each listed directory and queues them for prefetch. The input file can also be a policy-generated file, for which you need to specify --policy.
You can either specify --directory or --dir-list-file option with mmafmctl prefetch.
The --policy option can be used only with --dir-list-file and not with --directory.
For example,
mmafmctl fs1 prefetch -j fileset1 --dir-list-file /tmp/file1 --policy --nosubdirs
- --nosubdirs
- This option restricts the recursive behavior of --dir-list-file and --directory and prefetches only to the given directory level. This option does not prefetch the sub-directories under the given directory. This parameter is optional.
This option can only be used with --dir-list-file and --directory.
For example,
# mmafmctl fs1 prefetch -j fileset1 --directory /gpfs/fs1/fileset1/dir1 --nosubdirs
# mmafmctl fs1 prefetch -j fileset1 --dir-list-file /tmp/file1 --policy --nosubdirs
- --list-file ListFile
- Specifies a file that contains a list of files to be prefetched, one file per line. All files must have fully qualified path names.
If the list of files to be prefetched has file names with special characters, a policy must be used to generate the listfile. Remove all entries from the file other than the file names.
An indicative list of files:
- files with fully qualified names from cache
- files with fully qualified names from home or cloud object storage
- list of files from home or cloud object storage generated by using policy. Do not edit.
- --home-list-file HomeListFile
- Contains a list of files from the home or cloud object storage cluster that need to be prefetched, one file per line. All files must have fully qualified path names. If the list of files has file names with special characters, use a policy to generate the listfile. Edit the file to remove all entries other than the file names.
- --home-inode-file PolicyListFile
- Contains a list of files from the home or cloud object storage cluster that needs to be prefetched in the cache. Do not edit the file. The file is generated by using policy.
- --outband
- Pre-populates metadata in an AFM IW, SW, LU, or RO mode fileset locally by using a given changed-list-file. This option can be used only with the --list-file option. The given list-file contains the metadata of files or directories, which was generated from the home/source, that is, the target path of the AFM fileset.
- --home-fs-path HomeFileSystemPath
-
Specifies the full path to the fileset at the home or cloud object storage cluster and can be used along with ListFile.
You must use this option when, with the NSD protocol, the mount point of the afmTarget filesets on the gateway nodes does not match the mount point on the home or cloud object storage cluster.
For example,
# mmafmctl gpfs1 prefetch -j cache1 --list-file /tmp/list.allfiles --home-fs-path /gpfs/remotefs1
In this example, the file system is mounted on the:
- home cluster at /gpfs/homefs1
- gateway nodes at /gpfs/remotefs1
- --delete
- Deletes data from an AFM fileset as provided by the deleted-list-file. This option does not queue any data on a gateway node and synchronizes the AFM fileset data with deleted data on a source by using the --list-file option. The mmmigrate.deleted.files file, which is generated on the home/source path, contains a list of deleted files on the home/source path. Prefetch with the --delete option runs a delete operation on the cache to synchronize the deleted data from the home.
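For example, a hypothetical invocation that applies home-side deletions to the cache (the file system, fileset, and list-file path are illustrative):
# mmafmctl fs1 prefetch -j fileset1 --delete --list-file /tmp/mmmigrate.deleted.files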
- --readdir-only
- This option overrides the dirty flag that is set when the data is modified at the local LU cache. In the LU mode, the dirty flag does not allow the readdir operation at the home or cloud object storage and does not refresh the directory file entries from the home or cloud object storage.
- --force
- Enables forcefully fetching data from the home or cloud object storage during
the migration process. This option overrides any set restrictions and helps to fetch the data
forcefully to the cache. This option must be used only to forcefully fetch the data that was created
after the migration process is completed.
For example,
# mmafmctl <fs> prefetch -j <fileset> --list-file <listfile_path> --force
- --gateway Node
- Allows selecting the gateway node that can be used to run the prefetch operation on a fileset,
which is idle or less used. This parameter helps to distribute the prefetch work on different
gateway nodes and overrides the default gateway node, which is assigned to the fileset. This
parameter also helps to run different prefetch operations on different gateway nodes, which might
belong to the same fileset or a different fileset.
For example,
# mmafmctl <fs> prefetch -j <fileset> --list-file <listfile_path> --gateway Node2
To check the prefetch statistics of this command on gateway Node2, issue the following command:
# mmafmctl <fs> prefetch -j <fileset> --gateway Node2
- --empty-ptrash
- Cleans the .ptrash directory in an AFM fileset. This option can be specified with all prefetch options to delete all existing data in the .ptrash directory.
- --prefetch-threads nThreads
- Specifies the number of threads to be used for the prefetch operation. Valid values are 1 - 255.
Default value is 4.
For example,
# mmafmctl <fs> prefetch -j <fileset> --list-file <listfile_path> --prefetch-threads 6
If you run prefetch without providing any options, it displays statistics of the last prefetch command that is run on the fileset.
If you run the prefetch command with data or metadata options, statistics like queued files, total files, failed files, and total data (in bytes) are displayed, as in the following example of command and system output:
# mmafmctl <FileSystem> prefetch -j <fileset> --enable-failed-file-list --list-file /tmp/file-list
mmafmctl: Performing prefetching of fileset: <fileset>
Queued (Total) Failed TotalData (approx in Bytes)
0 (56324) 0 0
5 (56324) 2 1353559
56322 (56324) 2 14119335
This section describes:
mmafmctl Device getOutbandList -j FilesetName --path Path
- getOutbandList
- Generates list-files of files or directories and their metadata, which exists on the specified path. The generated files can be used to pre-populate an AFM fileset on the cache site by using the prefetch --outband option.
- --path <Path>
- Specifies a path in the source or home GPFS file system or GPFS fileset from where the file and directory metadata needs to be collected. Metadata is collected recursively by running a policy on the specified path.
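For example, a hypothetical pair of invocations: the first generates the list-files at the home site, and the second uses the generated list to pre-populate the cache fileset (the file system, fileset, and path names are illustrative, and the list-file name depends on what getOutbandList generates):
# mmafmctl homefs1 getOutbandList -j homefileset --path /gpfs/homefs1/homefileset
# mmafmctl fs1 prefetch -j iw1 --outband --list-file <generated list-file>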
-
This section describes:
mmafmctl Device evict -j FilesetName [--safe-limit SafeLimit] [--order {LRU | SIZE}] [--log-file LogFile] [--filter Attribute=Value ...] [--list-file ListFile] [--file FilePath]
This option is applicable for RO/SW/IW/LU filesets. When cache space exceeds the allocated quota, data blocks from non-dirty files are automatically de-allocated by the eviction process. This option can be used to de-allocate specific files based on the given criteria. All options can be combined with each other.
- --safe-limit SafeLimit
- This is a compulsory parameter for the manual evict option when the order or filter attributes are used. Specifies the target quota limit (which is used as the low water mark) for eviction, in bytes; the value must be less than the soft limit. This parameter can be used alone or combined with one of the following parameters (order or filter attributes).
- --order LRU | SIZE
- Specifies the order in which files are to be chosen for eviction:
- LRU
- Least recently used files are to be evicted first.
- SIZE
- Larger-sized files are to be evicted first.
- --log-file LogFile
- Specifies the file where the eviction log is to be stored. The default is that no logs are generated.
- --filter Attribute=Value
- Specifies attributes that enable you to control how data is evicted from the cache. Valid
attributes are:
- FILENAME=FileName
- Specifies the name of a file to be evicted from the cache. This uses an SQL-type search query. If the same file name exists in more than one directory, it will evict all the files with that name. The complete path to the file should not be given here.
- MINFILESIZE=Size
- Sets the minimum size of a file to evict from the cache. This value is compared to the number of blocks allocated to a file (KB_ALLOCATED), which may differ slightly from the file size.
- MAXFILESIZE=Size
- Sets the maximum size of a file to evict from the cache. This value is compared to the number of blocks allocated to a file (KB_ALLOCATED), which may differ slightly from the file size.
- --list-file ListFile
- Contains a list of files that you want to evict, one file per line. All files must have fully qualified path names. File system quotas need not be specified. If the list of files has file names with special characters, use a policy to generate the listfile. Edit the file to remove all entries other than the file names.
- --file FilePath
- The fully qualified name of the file that needs to be evicted. File system quotas need not be specified.
Possible combinations of safe limit, order, and filter are:
- only Safe limit
- Safe limit + LRU
- Safe limit + SIZE
- Safe limit + FILENAME
- Safe limit + MINFILESIZE
- Safe limit + MAXFILESIZE
- Safe limit + LRU + FILENAME
- Safe limit + LRU + MINFILESIZE
- Safe limit + LRU + MAXFILESIZE
- Safe limit + SIZE + FILENAME
- Safe limit + SIZE + MINFILESIZE
- Safe limit + SIZE + MAXFILESIZE
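For example, a hypothetical invocation that combines a safe limit with LRU ordering, evicting least recently used non-dirty files until the fileset occupies no more than the given number of bytes (the names and the value are illustrative):
# mmafmctl fs1 evict -j ro1 --safe-limit 1000000000 --order LRU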
-
This section describes:
mmafmctl Device failback -j FilesetName {{--start --failover-time Time} | --stop}
[-s LocalWorkDirectory]
Failback is applicable only for IW filesets.
- failback --start --failover-time Time
- Specifies the point in time at the home or cloud object storage cluster from which the independent-writer cache taking over as writer should synchronize. The failover Time can be specified in the date command format with time zones. By default, the cluster's time zone and year are used.
- failback --stop
- An option to be run after the failback process is complete and the fileset moves to the FailbackCompleted state. This option moves the fileset to the Active state.
-
This section describes:
mmafmctl Device failoverToSecondary -j FilesetName [--norestore |--restore ]
This is to be run on a secondary fileset.
When the primary experiences a disaster, all applications need to be moved to the secondary to ensure business continuity. The secondary must first be converted to an acting primary by using this option.
You can either restore the latest snapshot data on the secondary during the failover process or leave the data as is by using the --norestore option. After this is complete, the secondary becomes ready to host applications.
- --norestore
- Specifies that restoring from the latest RPO snapshot is not required. This is the default setting.
- --restore
- Specifies that data must be restored from the latest RPO snapshot.
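For example, a hypothetical invocation that promotes a secondary to acting primary without restoring from the latest RPO snapshot (fs1 and secondary1 are illustrative names):
# mmafmctl fs1 failoverToSecondary -j secondary1 --norestore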
-
This section describes:
mmafmctl Device convertToPrimary -j FilesetName [ --afmtarget Target { --inband | --secondary-snapname SnapshotName }] [ --check-metadata | --nocheck-metadata ] [--rpo RPO] [-s LocalWorkDirectory]
This is to be run on a GPFS fileset or an SW/IW fileset that is intended to be converted to a primary.
- --afmtarget Target
- Specifies the secondary that needs to be configured for this primary. This option need not be used for AFM filesets because the target is already defined.
- --inband
- Used for inband trucking. Inband trucking copies data from the primary site to an empty secondary site during conversion of GPFS filesets to AFM DR primary filesets. If you have already copied data to the secondary site, AFM checks mtime of files at the primary and secondary site. Here, granularity of mtime is in microseconds. If mtime values of both files match, data is not copied again and existing data on the secondary site is used. If mtime values of both files do not match, existing data on the secondary site is discarded and data from the primary site is written to the secondary site.
- --check-metadata
- This is the default option. Checks if the disallowed types (like immutable/append-only files) are present in the GPFS fileset on the primary site before the conversion. Conversion with this option fails if such files exist.
- --nocheck-metadata
- Used if one needs to proceed with the conversion without checking for append-only/immutable files.
- --secondary-snapname SnapshotName
- Used while establishing a new primary for an existing secondary or acting primary during failback.
- --rpo RPO
- Specifies the RPO interval in minutes for this primary fileset. Disabled by default.
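For example, a hypothetical invocation that converts a GPFS fileset to a primary, configures its secondary target, and trucks the data inband (the file system, fileset, host, and path names are illustrative):
# mmafmctl fs1 convertToPrimary -j fileset1 --afmtarget nfs://node2/gpfs/fs2/secondary1 --inband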
-
This section describes:
mmafmctl Device convertToSecondary -j FilesetName --primaryid PrimaryId [ --force ]
This is to be run on a GPFS fileset on the secondary site. This converts a GPFS independent fileset to a secondary and sets the primary ID.
- --primaryid PrimaryId
- Specifies the unique identifier of the AFM-DR primary fileset, which needs to be set at the AFM-DR secondary fileset to initiate a relationship. You can obtain this fileset identifier by running the mmlsfileset command with the --afm and -L options.
For example,
#mmlsfileset <FileSystem> <AFM DR Fileset> -L --afm |grep 'Primary Id'
- --force
- If convertToSecondary failed or was interrupted, the afmctl file is not created at the secondary. In that case, rerun the command with the --force option.
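For example, a hypothetical invocation that converts a GPFS independent fileset to a secondary (fs1, fileset1, and the primary ID value are illustrative):
# mmafmctl fs1 convertToSecondary -j fileset1 --primaryid <Primary Id>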
-
This section describes:
mmafmctl Device changeSecondary -j FilesetName --new-target NewAfmTarget [ --target-only |--inband ] [-s LocalWorkDirectory]
This is to be run on a primary fileset only.
A disaster at the secondary can make the secondary unavailable.
Run this command on the primary when a secondary fails and the primary needs to be connected to a new secondary. On the new secondary site, create a new GPFS independent fileset. Data on the primary can be copied to the new GPFS fileset by using other means, such as ftp or scp. Alternatively, the data can be trucked by using the relationship.
To change the target from an IPv4 address to an IPv6 address for AFM DR filesets, use the changeSecondary option with the --target-only parameter.
- --new-target NewAfmTarget
- Specifies the new secondary.
- --inband
- Used for inband trucking. Copies data to a new empty secondary. If you have already copied data to the secondary site, mtime of files at the primary and secondary site is checked. Here, granularity of mtime is in microseconds. If mtime values of both files match, data is not copied again and existing data on the secondary site is used. If mtime values of both files do not match, existing data on the secondary site is discarded and data from the primary site is written to the secondary site.
- --target-only
-
Used when you want to change the IP address or NFS server name for the same target path. The new NFS server must be in the same home or cloud object storage cluster and must be of the same architecture (Power or x86) as the existing NFS server in the target path. This option can be used to move from NFS to a mapping target.
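For example, a hypothetical invocation that points a primary at a newly created secondary and trucks the data inband (the host and path names are illustrative):
# mmafmctl fs1 changeSecondary -j primary1 --new-target nfs://node3/gpfs/fs2/newsecondary1 --inband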
-
This section describes:
mmafmctl Device replacePrimary -j FilesetName
This is used on an acting primary only. This creates a latest snapshot of the acting primary. This command deletes any old RPO snapshots on the acting primary and creates a new initial RPO snapshot, psnap0. This RPO snapshot is used in the setup of the new primary.
-
This section describes:
mmafmctl Device failbackToPrimary -j FilesetName {--start | --stop} [--force]
This is to be run on an old primary that came back after a disaster, or on a new primary that is to be configured after an old primary went down in a disaster. The new primary should already have been converted from a GPFS fileset to a primary by using the convertToPrimary option.
- --start
- Restores the primary to the contents from the last RPO snapshot that was taken on the primary before the disaster. This option puts the primary in read-only mode to avoid accidental corruption until the failback process is completed. In the case of a new primary that is set up by using convertToPrimary, failback --start does not change the contents.
- --stop
- Used to complete the Failback process. This will put the fileset in read-write mode. The primary is now ready for starting applications.
- --force
- Used if --stop or --start does not complete successfully due to errors and does not allow failbackToPrimary to stop or start again.
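For example, a hypothetical failback sequence on an old primary that has returned after a disaster (fs1 and primary1 are illustrative names; applyUpdates is run between the two steps):
# mmafmctl fs1 failbackToPrimary -j primary1 --start
# mmafmctl fs1 failbackToPrimary -j primary1 --stop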
-
This section describes:
mmafmctl Device {applyUpdates | getPrimaryId} -j FilesetName
Both options are intended for the primary fileset.
- applyUpdates
- Run this on the primary after running the failback --start command.
All the differences can be brought over in one go or through multiple iterations. To minimize application downtime, this command can be run multiple times to bring the primary's contents in sync with the acting primary. When the remaining differences are minimal, applications should take a downtime and then this command should be run one last time.
It is possible that applyUpdates fails with an error during instances when the acting primary is overloaded. In such cases, the command must be run again.
- getPrimaryId
- Used to get the primary ID of a primary fileset.
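For example, a hypothetical pair of invocations on the primary during failback (fs1 and primary1 are illustrative names):
# mmafmctl fs1 applyUpdates -j primary1
# mmafmctl fs1 getPrimaryId -j primary1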
-
This section describes:
mmafmctl Device {setlocal | resetlocal} -j FilesetName [--path FilePath]
- setlocal
- The setlocal option can be set on a file or directory that belongs to an AFM to cloud object storage fileset only. The setlocal option accepts the path of the file or directory and can be set individually. After the setlocal option is set on a file, the file remains local and any new updates to this file are not synchronized to the target bucket. Any modification to this file is updated on the local cluster only. AFM does not queue any modification to the bucket.
- resetlocal
- The resetlocal option clears the local flag that was set on a file or directory belonging to an AFM to cloud object storage fileset by the mmafmctl setlocal command. The resetlocal option accepts the path of the file or directory that was marked as local. After resetlocal is issued, the local flag is cleared on the file. Any modification to this file is now synchronized to the bucket.
# mmafmctl fs resetlocal -j AFMtoCOS --path /ibm0/fs/AFMtoCOS/file1
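A corresponding setlocal invocation might look like the following, where the file system, fileset, and path names are illustrative:
# mmafmctl fs setlocal -j AFMtoCOS --path /ibm0/fs/AFMtoCOS/file1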
Exit status
- 0
- Successful completion.
- nonzero
- A failure occurred.
Security
You must have root authority to run the mmafmctl command.
The node on which the command is issued must be able to run remote shell commands on any other node in the cluster without the use of a password and without producing any extraneous messages. For more information, see Requirements for administering a GPFS file system.
Examples
- Running resync on SW:
# mmafmctl fs1 resync -j sw1
mmafmctl: Performing resync of fileset: sw1
# mmafmctl fs1 getstate -j sw1
Fileset Name  Fileset Target                        Cache State  Gateway Node  Queue Length  Queue numExec
------------  --------------                        -----------  ------------  ------------  -------------
sw1           nfs://c26c3apv2/gpfs/homefs1/newdir1  Dirty        c26c2apv1     4067          10844
- Expiring an RO fileset:
# mmafmctl fs1 expire -j ro1
# mmafmctl fs1 getstate -j ro1
Fileset Name  Fileset Target               Cache State  Gateway Node  Queue Length  Queue numExec
------------  --------------               -----------  ------------  ------------  -------------
ro1           gpfs:///gpfs/remotefs1/dir1  Expired      c26c4apv1     0             4
- Unexpiring an RO fileset:
# mmafmctl fs1 unexpire -j ro1
# mmafmctl fs1 getstate -j ro1
Fileset Name  Fileset Target               Cache State  Gateway Node  Queue Length  Queue numExec
------------  --------------               -----------  ------------  ------------  -------------
ro1           gpfs:///gpfs/remotefs1/dir1  Active       c26c4apv1     0             4
- Check the status of an SW fileset. A sample output is as follows:
# mmafmctl fs1 getstate -j filesetSW1 --write-stats
Fileset Name  Total Written Data  N/w Throughput  Total Pending Data  Estimated
              (Bytes)             (KB/s)          to Write (Bytes)    Completion time
------------  ------------------  --------------  ------------------  ---------------
filesetSW1    98359590            68              22620600            5 (Min)
- Check the status of an RO fileset. A sample output is as follows:
# mmafmctl fs1 getstate -j filesetro1 --read-stats
Fileset Name  Total Read Data  N/w Throughput  Total Pending Data  Estimated
              (Bytes)          (KB/s)          to Read (Bytes)     Completion time
------------  ---------------  --------------  ------------------  ---------------
fileset-ro1   8285370854       16732           31457280            10 (Min)
# mmafmctl fs1 getstate -j s3ro1
Fileset Name  Fileset Target  Cache State  Gateway Node  Queue Length  Queue numExec
------------  --------------  -----------  ------------  ------------  -------------
s3ro1                         Active       c80f1m5n12    0             7
c80f1m5n12 240405-01:29:00 0] dd if=/gpfs/fs1/s3ro1/file2 bs=4194304 count=1 of=/dev/null
1+0 records in
1+0 records out
4194304 bytes (4.2 MB, 4.0 MiB) copied, 2.17532 s, 8.0 MB/s
c80f1m5n12 240405-01:29:27 0] ls -altrish /gpfs/fs1/s3ro1/file2
5801985 16M -rwxrwx---. 1 root root 400M Apr 5 01:19 /gpfs/fs1/s3ro1/file2
- Generate two files that contain a list of uncached files and uncached directories. A sample output is as follows:
# mmafmctl fs checkUncached -j fileset
Verifying if all the data is cached. This may take a while...
mmchfileset: [E] Uncached files present, run prefetch first
Directories list file: /var/mmfs/tmp/cmdTmpDir.mmchfileset.18566/dir-file.mmchfileset.18566
Files list file: /var/mmfs/tmp/cmdTmpDir.mmchfileset.18566/list-file.mmchfileset.18566
- Generate two separate files that contain a list of dirty files and dirty directories. A sample output is as follows:
# mmafmctl fs checkDirty -j fileset
Dirty file list: /var/mmfs/tmp/list.dirtyFiles.26525
Dirty directories list: /var/mmfs/tmp/list.dirtyDirs.26525
- Run flushPending on SW fileset:
// Populate the fileset with data
# mmafmctl fs1 getstate -j sw1
Fileset Name  Fileset Target               Cache State  Gateway Node  Queue Length  Queue numExec
------------  --------------               -----------  ------------  ------------  -------------
sw1           gpfs:///gpfs/remotefs1/dir1  Dirty        c26c2apv1     5671          293
Get the list of newly created files using policy:
RULE EXTERNAL LIST 'L' RULE 'List' LIST 'L' WHERE PATH_NAME LIKE '%'
# mmapplypolicy /gpfs/fs1/sw1/migrateDir.popFSDir.22655 -P p1 -f p1.res -L 1 -N mount -I defer
Policy created this file, this should be hand-edited to retain only the names:
11012030 65537 0 -- /gpfs/fs1/sw1/migrateDir.popFSDir.22655/file_with_posix_acl1
11012032 65537 0 -- /gpfs/fs1/sw1/migrateDir.popFSDir.22655/populateFS.log
11012033 65537 0 -- /gpfs/fs1/sw1/migrateDir.popFSDir.22655/sparse_file_0_with_0_levels_indirection
# cat p1.res.list | awk '{print $5}' > /lfile
# mmafmctl fs1 flushPending -j sw1 --list-file=/lfile
- Failover of SW to a new home:
At cache -
# mmafmctl fs1 getstate -j sw1
Fileset Name  Fileset Target               Cache State  Gateway Node  Queue Length  Queue numExec
------------  --------------               -----------  ------------  ------------  -------------
sw1           gpfs:///gpfs/remotefs1/dir1  Dirty        c26c2apv1     785           5179
At home -
# mmcrfileset homefs1 newdir1 --inode-space=new
Fileset newdir1 created with id 219 root inode 52953091.
# mmlinkfileset homefs1 newdir1 -J /gpfs/homefs1/newdir1
Fileset newdir1 linked at /gpfs/homefs1/newdir1
# mmafmconfig enable /gpfs/homefs1/newdir1
At cache -
# mmafmctl fs1 failover -j sw1 --new-target=c26c3apv1:/gpfs/homefs1/newdir1
mmafmctl: Performing failover to nfs://c26c3apv1/gpfs/homefs1/newdir1
Fileset sw1 changed.
mmafmctl: Failover in progress. This may take while...
Check fileset state or register for callback to know the completion status.
Callback registered, logged into mmfs.log:
Thu May 21 03:06:18.303 2015: [I] Calling User Exit Script callback7: event afmManualResyncComplete, Async command recovery.sh
# mmafmctl fs1 getstate -j sw1
Fileset Name  Fileset Target                        Cache State  Gateway Node  Queue Length  Queue numExec
------------  --------------                        -----------  ------------  ------------  -------------
sw1           nfs://c26c3apv1/gpfs/homefs1/newdir1  Active       c26c2apv1     0             5250
- Changing target of SW fileset:
Changing to another NFS server in the same home cluster using the --target-only option:
# mmafmctl fs1 failover -j sw1 --new-target=c26c3apv2:/gpfs/homefs1/newdir1 --target-only
mmafmctl: Performing failover to nfs://c26c3apv2/gpfs/homefs1/newdir1
Fileset sw1 changed.
# mmafmctl fs1 getstate -j sw1
Fileset Name  Fileset Target                        Cache State  Gateway Node  Queue Length  Queue numExec
------------  --------------                        -----------  ------------  ------------  -------------
sw1           nfs://c26c3apv2/gpfs/homefs1/newdir1  Active       c26c2apv1     0             5005
- Metadata population using prefetch:
# mmafmctl fs1 getstate -j ro
Fileset Name  Fileset Target                     Cache State  Gateway Node  Queue Length  Queue numExec
------------  --------------                     -----------  ------------  ------------  -------------
ro            nfs://c26c3apv1/gpfs/homefs1/dir3  Active       c26c2apv2     0             7
List Policy:
RULE EXTERNAL LIST 'List' RULE 'List' LIST 'List' WHERE PATH_NAME LIKE '%'
Run the policy at home:
# mmapplypolicy /gpfs/homefs1/dir3 -P px -f px.res -L 1 -N mount -I defer
Policy created this file, this should be hand-edited to retain only file names. This file can be used at the cache to populate metadata.
# mmafmctl fs1 prefetch -j ro --metadata-only --list-file=px.res.list.List
mmafmctl: Performing prefetching of fileset: ro
Queued (Total) Failed TotalData (approx in Bytes)
0 (2) 0 0
100 (116) 5 1368093971
116 (116) 5 1368093971
prefetch successfully queued at the gateway
Prefetch end can be monitored using this event:
Thu May 21 06:49:34.748 2015: [I] Calling User Exit Script prepop: event afmPrepopEnd, Async command prepop.sh.
The statistics of the last prefetch command can be viewed by running the following command:
# mmafmctl fs1 prefetch -j ro
Fileset Name  Async Read (Pending)  Async Read (Failed)  Async Read (Already Cached)  Async Read (Total)  Async Read (Data in Bytes)
------------  --------------------  -------------------  ---------------------------  ------------------  --------------------------
ro            0                     1                    0                            7                   0
- Prefetch of data using --home-list-file option:
# cat /lfile1
/gpfs/homefs1/dir3/file1
/gpfs/homefs1/dir3/dir1/file1
# mmafmctl fs1 prefetch -j ro --home-list-file=/lfile1
mmafmctl: Performing prefetching of fileset: ro
Queued (Total) Failed TotalData (approx in Bytes)
0 (2) 0 0
100 (116) 5 1368093971
116 (116) 5 1368093971
prefetch successfully queued at the gateway
# mmafmctl fs1 prefetch -j ro
Fileset Name  Async Read (Pending)  Async Read (Failed)  Async Read (Already Cached)  Async Read (Total)  Async Read (Data in Bytes)
------------  --------------------  -------------------  ---------------------------  ------------------  --------------------------
ro            0                     0                    0                            2                   122880
- Prefetch of data using --home-inode-file option:
The inode file is created using the above policy at home, and should be used as is without hand-editing.
List Policy:
RULE EXTERNAL LIST 'List' RULE 'List' LIST 'List' WHERE PATH_NAME LIKE '%'
Run the policy at home:
# mmapplypolicy /gpfs/homefs1/dir3 -P px -f px.res -L 1 -N mount -I defer
# cat /lfile2
113289 65538 0 -- /gpfs/homefs1/dir3/file2
113292 65538 0 -- /gpfs/homefs1/dir3/dir1/file2
# mmafmctl fs1 prefetch -j ro2 --home-inode-file=/lfile2
mmafmctl: Performing prefetching of fileset: ro2
Queued (Total) Failed TotalData (approx in Bytes)
0 (2) 0 0
2 (2) 0 122880
prefetch successfully queued at the gateway
# mmafmctl fs1 prefetch -j ro
Fileset Name  Async Read (Pending)  Async Read (Failed)  Async Read (Already Cached)  Async Read (Total)  Async Read (Data in Bytes)
------------  --------------------  -------------------  ---------------------------  ------------------  --------------------------
ro            0                     0                    2                            2                   0
- Using --home-fs-path option for a target with NSD protocol:
# mmafmctl fs1 getstate -j ro2
Fileset Name  Fileset Target               Cache State  Gateway Node  Queue Length  Queue numExec
------------  --------------               -----------  ------------  ------------  -------------
ro2           gpfs:///gpfs/remotefs1/dir3  Active       c26c4apv1     0             7
# cat /lfile2
113289 65538 0 -- /gpfs/homefs1/dir3/file2
113292 65538 0 -- /gpfs/homefs1/dir3/dir1/file2
# mmafmctl fs1 prefetch -j ro2 --home-inode-file=/lfile2 --home-fs-path=/gpfs/homefs1/dir3
mmafmctl: Performing prefetching of fileset: ro2
Queued (Total) Failed TotalData (approx in Bytes)
0 (2) 0 0
2 (2) 0 122880
prefetch successfully queued at the gateway
# mmafmctl fs1 prefetch -j ro2
Fileset Name  Async Read (Pending)  Async Read (Failed)  Async Read (Already Cached)  Async Read (Total)  Async Read (Data in Bytes)
------------  --------------------  -------------------  ---------------------------  ------------------  --------------------------
ro2           0                     0                    0                            2                   122880
- Manually evicting using safe-limit and filename parameters:
# ls -lis /gpfs/fs1/ro2/file10M_1
12605961 10240 -rw-r--r-- 1 root root 10485760 May 21 07:44 /gpfs/fs1/ro2/file10M_1
# mmafmctl fs1 evict -j ro2 --safe-limit=1 --filter FILENAME=file10M_1
# ls -lis /gpfs/fs1/ro2/file10M_1
12605961 0 -rw-r--r-- 1 root root 10485760 May 21 07:44 /gpfs/fs1/ro2/file10M_1
- IW Failback:
# mmafmctl fs1 getstate -j iw1
Fileset Name  Fileset Target                     Cache State  Gateway Node  Queue Length  Queue numExec
------------  --------------                     -----------  ------------  ------------  -------------
iw1           nfs://c26c3apv1/gpfs/homefs1/dir3  Active       c25m4n03      0             8
# touch file3 file4
# mmafmctl fs1 getstate -j iw1
Fileset Name  Fileset Target                     Cache State  Gateway Node  Queue Length  Queue numExec
------------  --------------                     -----------  ------------  ------------  -------------
iw1           nfs://c26c3apv1/gpfs/homefs1/dir3  Dirty        c25m4n03      2             11
Unlink the IW fileset, feigning failure:
# mmunlinkfileset fs1 iw1 -f
Fileset iw1 unlinked.
Write from IW home, assuming applications failed over to home:
Thu May 21 08:20:41 4]dir3# touch file5 file6
Relink IW back on the cache cluster, assuming it came back up:
# mmlinkfileset fs1 iw1 -J /gpfs/fs1/iw1
Fileset iw1 linked at /gpfs/fs1/iw1
Run failback on IW:
# mmafmctl fs1 failback -j iw1 --start --failover-time='May 21 08:20:41'
# mmafmctl fs1 getstate -j iw1
Fileset Name  Fileset Target                     Cache State     Gateway Node  Queue Length  Queue numExec
------------  --------------                     -----------     ------------  ------------  -------------
iw1           nfs://c26c3apv1/gpfs/homefs1/dir3  FailbackInProg  c25m4n03      0             0
# mmafmctl fs1 failback -j iw1 --stop
# mmafmctl fs1 getstate -j iw1
Fileset Name  Fileset Target                     Cache State  Gateway Node  Queue Length  Queue numExec
------------  --------------                     -----------  ------------  ------------  -------------
iw1           nfs://c26c3apv1/gpfs/homefs1/dir3  Active       c25m4n03      0             3
- Manual evict using the --list-file option:
# ls -lshi /gpfs/fs1/evictCache
total 6.0M
27858308 1.0M -rw-r--r--. 1 root root 1.0M Feb 5 02:07 file1M
27858307 2.0M -rw-r--r--. 1 root root 2.0M Feb 5 02:07 file2M
27858306 3.0M -rw-r--r--. 1 root root 3.0M Feb 5 02:07 file3M
# echo "RULE EXTERNAL LIST 'HomePREPDAEMON' RULE 'ListLargeFiles' LIST 'HomePREPDAEMON' WHERE PATH_NAME LIKE '%'" > /tmp/evictionPolicy.pol
# mmapplypolicy /gpfs/fs1/evictCache -I defer -P /tmp/evictionPolicy.pol -f /tmp/evictionList
# Edited list of files to be evicted
[root@c21f2n08 ~]# cat /tmp/evictionList.list.HomePREPDAEMON
27858306 605742886 0 -- /gpfs/fs1/evictCache/file3M
# mmafmctl fs1 evict -j evictCache --list-file /tmp/evictionList.list.HomePREPDAEMON
Evicted (Total) Failed
1 (1) 0
# ls -lshi /gpfs/fs1/evictCache
total 3.0M
27858308 1.0M -rw-r--r--. 1 root root 1.0M Feb 5 02:07 file1M
27858307 2.0M -rw-r--r--. 1 root root 2.0M Feb 5 02:07 file2M
27858306 0 -rw-r--r--. 1 root root 3.0M Feb 5 02:07 file3M
- Manual evict using the --file option:
# ls -lshi /gpfs/fs1/evictCache
total 3.0M
27858308 1.0M -rw-r--r--. 1 root root 1.0M Feb 5 02:07 file1M
27858307 2.0M -rw-r--r--. 1 root root 2.0M Feb 5 02:07 file2M
27858306 0 -rw-r--r--. 1 root root 3.0M Feb 5 02:07 file3M
# mmafmctl fs1 evict -j evictCache --file /gpfs/fs1/evictCache/file1M
# ls -lshi /gpfs/fs1/evictCache
total 0
27858308 0 -rw-r--r--. 1 root root 1.0M Feb 5 02:07 file1M
27858307 0 -rw-r--r--. 1 root root 2.0M Feb 5 02:07 file2M
27858306 0 -rw-r--r--. 1 root root 3.0M Feb 5 02:07 file3M
- mmafmctl fs1 getstate showing a target IPv6 address:
# mmafmctl fs1 getstate
Fileset Name  Fileset Target                                 Cache State  Gateway Node  Queue Length  Queue numExec
------------  --------------                                 -----------  ------------  ------------  -------------
obj1          https://s3.amazonaws.com:443/mufileset         Active       c7f2n04       0             143119
ip1           nfs://[2001:192::210:18ff:fec6:ecd8]/fileset   Active       c7f2n03       0             3
ip2           nfs://[2001:192::210:18ff:fec6:ecd8]/fileset1  Active       c7f2n03       0             524752
Location
/usr/lpp/mmfs/bin