mmafmctl command

This command performs various operations on AFM filesets and reports information about them. Read the AFM and AFM Disaster Recovery chapters of the IBM Spectrum Scale: Administration Guide in conjunction with this manual for a detailed description of the functions.

Synopsis

To use the AFM DR functions correctly, use all commands listed in this chapter in accordance with the steps described in the AFM-based DR chapter in IBM Spectrum Scale: Concepts, Planning, and Installation Guide.

In this manual, AFM read-only mode is referred to as RO, single-writer mode as SW, independent-writer mode as IW, and local-update mode as LU.

mmafmctl Device {resync | expire | unexpire | stop | start} -j FilesetName

or

mmafmctl Device {getstate | resumeRequeued} -j FilesetName

or

mmafmctl Device flushPending [-j FilesetName [--list-file ListFile]]
             [-s LocalWorkDirectory]  

or

mmafmctl Device failover -j FilesetName
             --new-target NewAfmTarget [--target-only] [-s LocalWorkDirectory]  

or

mmafmctl Device prefetch -j FilesetName [-s LocalWorkDirectory]
             [--retry-failed-file-list|--enable-failed-file-list] 
             [{--directory LocalDirectoryPath | --dir-list-file DirListFile [--policy]} [--nosubdirs]]
             [{--list-file ListFile | --home-list-file HomeListFile} [--policy]]
             [--home-inode-file PolicyListFile]   
             [--home-fs-path HomeFileSystemPath]
             [--metadata-only][--gateway GatewayNode]

or

mmafmctl Device failback -j FilesetName {{--start --failover-time Time} | --stop}
        [-s LocalWorkDirectory]

or

mmafmctl Device evict -j FilesetName
            [--safe-limit SafeLimit] [--order {LRU | SIZE}]
            [--log-file LogFile] [--filter Attribute=Value ...]
            [--list-file ListFile] [--file FilePath]

or

mmafmctl Device failoverToSecondary -j FilesetName [--norestore | --restore]

or


mmafmctl Device convertToPrimary -j FilesetName
         [ --afmtarget Target { --inband | --secondary-snapname SnapshotName }]
         [ --check-metadata | --nocheck-metadata ] [--rpo RPO] [-s LocalWorkDirectory]

or

mmafmctl Device convertToSecondary -j FilesetName --primaryid PrimaryId [--force]

or


mmafmctl Device changeSecondary -j FilesetName
         --new-target NewAfmTarget [--target-only | --inband]
         [-s LocalWorkDirectory]

or

mmafmctl Device replacePrimary -j FilesetName

or

mmafmctl Device failbackToPrimary -j FilesetName {--start | --stop} [--force]

or

mmafmctl Device {applyUpdates | getPrimaryId} -j FilesetName

Availability

Available on all IBM Spectrum Scale editions. Available on AIX® and Linux.

Description

The use of the options of this command for different operations on both AFM (RO/SW/IW/LU) filesets and AFM primary/secondary filesets is explained with examples.

The file system must be mounted on all gateway nodes for mmafmctl functions to work.

Parameters

Device
Specifies the device name of the file system.
-j FilesetName
Specifies the fileset name.
-Y
Displays the command output in a parseable format with a colon (:) as a field delimiter. Each column is described by a header.
Note: Fields that have a colon (:) are encoded to prevent confusion. For the set of characters that might be encoded, see the command documentation of mmclidecode. Use the mmclidecode command to decode the field.
-s LocalWorkDirectory
Specifies the temporary working directory.
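For example, the colon-delimited -Y output can be post-processed with standard tools. The following sketch pairs header fields with data fields; the sample lines are illustrative placeholders, not actual mmafmctl -Y headers.

```shell
# Pair each header field with its data field from colon-delimited output.
# The field names below are illustrative, not real mmafmctl -Y headers.
printf '%s\n' \
  'filesetName:cacheState:queueLength' \
  'sw1:Active:0' |
awk -F: 'NR==1 {for (i = 1; i <= NF; i++) h[i] = $i; next}
         {for (i = 1; i <= NF; i++) print h[i] "=" $i}'
```

Remember that fields containing a colon are encoded in -Y output; decode such fields with the mmclidecode command before using them.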
1. This section describes:
mmafmctl Device {resync | expire | unexpire | stop | start} -j FilesetName
resync
This option is available only for SW cache. If inadvertent changes, such as deletion of a file or modification of file data, are made at the home of an SW fileset, the administrator can correct the home by using this option to send all contents from the cache to home. A limitation of this option is that files renamed at home might not be fixed by resync. Using resync requires the cache to be in either the NeedsResync or Active state.
expire | unexpire
This option is available only for RO cache, to manually expire or unexpire the cache. When an RO cache is disconnected, the cached contents are still accessible to the user. However, the administrator can define a timeout from home beyond which access to the cached contents becomes stale. Such an event occurs automatically after disconnection (when cached contents are no longer accessible) and is called expiration; the cache is said to be expired. The expire option is used to manually force the cache state to 'Expired'. To expire a fileset manually, afmExpirationTimeout must be set on the fileset.

When the home comes back or reconnects, the cache contents automatically become accessible again and the cache is said to un-expire. The unexpire option is used to force the cache out of the 'Expired' state.

Manual expiration and un-expiration can be forced on a cache even when the home is in a connected state. If a cache is expired manually, the same cache must also be unexpired manually.

stop
Run on an AFM or AFM DR fileset to stop replication. You can use this command during maintenance or downtime, when the I/O activity on the filesets is stopped, or is minimal. After the fileset moves to a 'Stopped' state, changes or modifications to the fileset are not sent to the gateway node for queuing.
start
Run on a 'Stopped' AFM or AFM DR fileset to start sending updates to the gateway node and resume replication on the fileset.
2. This section describes:
mmafmctl Device {getstate | resumeRequeued} -j FilesetName
getstate
This option is applicable for all AFM (RO/SW/IW/LU) and AFM primary filesets. It displays the status of the fileset in the following fields:
Fileset Name
The name of the fileset.
Fileset Target
The host server and the exported path on it.
Gateway Node
Primary gateway of the fileset. This gateway node is handling requests for this fileset.
Queue Length
Current length of the queue on the primary gateway.
Queue numExec
Number of operations played at home since the fileset was last Active.
Cache State
  • Cache states applicable for all AFM RO/SW/IW/LU filesets:

    Active, Inactive, Dirty, Disconnected, Stopped, Unmounted

  • Cache states applicable for RO filesets:

    Expired

  • Cache states applicable for SW and IW filesets:

    Recovery, FlushOnly, QueueOnly, Dropped, NeedsResync, FailoverInProgress

  • Cache states applicable for IW filesets:

    FailbackInProgress, FailbackCompleted, NeedsFailback

  • Cache states applicable for AFM primary filesets:

    PrimInitInProg, PrimInitFail, Active, Inactive, Dirty, Disconnected, Unmounted, FailbackInProg, Recovery, FlushOnly, QueueOnly, Dropped, Stopped, NeedsResync

For more information on all cache states, see the AFM and AFM-based DR chapters in the IBM Spectrum Scale: Concepts, Planning, and Installation Guide.

resumeRequeued
This option is applicable for SW/IW and primary filesets. If operations in the queue were requeued because of errors at home, the administrator should correct those errors and then run this option to retry the requeued operations.
3. This section describes:
mmafmctl Device flushPending [-j FilesetName [--list-file ListFile]]
             [-s LocalWorkDirectory]  
flushPending
Flushes all point-in-time pending messages in the normal queue on the fileset to home. Requeued messages and messages in the priority queue for the fileset are not flushed by this command.

When --list-file ListFile is specified, the messages pending on the files listed in the list file are flushed to home. ListFile contains a list of files that you want to flush, one file per line. All files must have absolute path names, specified from the fileset linked path. If the list of files has filenames with special characters, use a policy to generate the listfile. Edit to remove all entries other than the filenames. FlushPending is applicable for SW/IW and primary filesets.

4. This section describes:
mmafmctl Device failover -j FilesetName
             --new-target NewAfmTarget [--target-only] [-s LocalWorkDirectory]  

This option is applicable only for SW/IW filesets. It pushes all data from the cache to home. Use it only if the home is completely lost due to a disaster and a new home is being set up. Failover often takes a long time to complete; check its status by using the afmManualResyncComplete callback or the mmafmctl getstate command.

--new-target NewAfmTarget
Specifies a new home server and path, replacing the home server and path originally set by the afmTarget parameter of the mmcrfileset command. Specified in either of the following formats:
nfs://{Host|Map}/Target_Path
or
gpfs://[Map]/Target_Path
where:
nfs:// or gpfs://
Specifies the transport protocol.
Host|Map
Host
Specifies the server domain name system (DNS) name or IP address.
Map
Specifies the export map name. Information about Mapping is contained in the AFM Overview > Parallel data transfers section.
See the following examples:
  1. An example of using the nfs:// protocol with a map name:
    mmcrfileset fs3 singleWriter2 -p 
            afmtarget=nfs://<map1>/gpfs/fs1/target1 -p afmmode=sw --inode-space new
  2. An example of using the nfs:// protocol with a host name:
    mmcrfileset fs3 singleWriter2 -p
            afmtarget=nfs://<hostname>/gpfs/fs1/target1 -p afmmode=sw --inode-space new
  3. An example of using the gpfs:// protocol without a map name:
    mmcrfileset fs3 singleWriter1 -p 
             afmtarget=gpfs:///gpfs/thefs1/target1 -p afmmode=sw --inode-space new
    Note: If you are not specifying the map name, a '/' is still needed to indicate the path.
  4. An example of using the gpfs:// protocol with a map name:
    mmcrfileset fs3 singleWriter1 -p
            afmtarget=gpfs://<map>/gpfs/thefs1/target1 -p afmmode=sw --inode-space new
Target_Path
Specifies the export path.
It is possible to change the protocol along with the target using failover. For example, a cache using an NFS target bear110:/gpfs/gpfsA/home can be switched to a GPFS target whose remote file system is mounted at /gpfs/fs1, and vice-versa, as follows:
mmafmctl fs0 failover -j afm-mc1 --new-target gpfs:///gpfs/fs1
mmafmctl fs0 failover -j afm-mc1 --new-target nfs://bear110/gpfs/gpfsA/home
Note that in the first command, /// is needed because Host is not provided.
--target-only
This option is used to change the mount path or IP address in the target path. The new NFS server must be in the same home cluster and must have the same architecture as the existing NFS server in the target path. Do not use this option to change the target location or protocol. You must ensure that the new NFS server exports the same target path with the same FSID.
5. This section describes:
mmafmctl Device prefetch -j FilesetName [-s LocalWorkDirectory]
             [--retry-failed-file-list | --enable-failed-file-list]
             [{--directory LocalDirectoryPath | --dir-list-file DirListFile [--policy]} [--nosubdirs]]
             [{--list-file ListFile | --home-list-file HomeListFile} [--policy]]
             [--home-inode-file PolicyListFile]
             [--home-fs-path HomeFileSystemPath] [--metadata-only]

This option is used to prefetch file contents from home before an application requests them. This reduces network delay when the application accesses files whose data is not yet in the cache. You can also use this option to move files over the WAN when WAN usage is low; these might be files that are otherwise accessed during high WAN usage. Thus, the option helps with WAN management.

Prefetch is an asynchronous process and you can use the fileset when prefetch is in progress. You can monitor Prefetch using the afmPrepopEnd event. AFM can prefetch the data using the mmafmctl prefetch command (which specifies a list of files to prefetch). Prefetch always pulls the complete file contents from home and AFM automatically sets a file as cached when it is completely prefetched.

You can use the prefetch option to:
  • populate metadata
  • populate data
  • view prefetch statistics


--retry-failed-file-list
Allows re-trying prefetch of files that failed in the last prefetch operation. The list of files to re-try is obtained from .afm/.prefetchedfailed.list under the fileset.
Note: To use this option, you must enable generating a list of failed files. Add --enable-failed-file-list to the command first.
--metadata-only
Prefetches only the metadata and not the actual data. This is useful in migration scenarios. This option requires the list of files whose metadata you want. Hence it must be combined with a list file option.
--enable-failed-file-list
Turns on generating a list of files which failed during prefetch operation at the gateway node. The list of files is saved as .afm/.prefetchedfailed.list under the fileset. Failures that occur during processing are not logged in .afm/.prefetchedfailed.list. If you observe any errors during processing (before queuing), you might need to correct the errors and re-run prefetch.
Files listed in .afm/.prefetchedfailed.list are used when prefetch is re-tried.
--policy
Specifies that the list-file or home-list-file is generated by using a GPFS policy, in which sequences like '\' or '\n' are escaped as '\\' and '\\n'. If this option is specified, the input file list is treated as already escaped; the sequences are unescaped before being queued for the prefetch operation.
Note: This option can be used only if you are specifying list-file or home-list-file.
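For example, the kind of unescaping that --policy implies can be sketched with printf '%b', which turns '\\' back into '\' and '\n' back into a newline. The sample entry is made up, and the exact escape set used by policy output is an assumption here.

```shell
# Unescape a policy-escaped list entry before treating it as a path
# ('\\' -> '\' and '\n' -> newline; escape set assumed for the sketch).
escaped='dir\\with\\backslashes\nsecond_line'
printf '%b\n' "$escaped"
```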
--directory LocalDirectoryPath
Specifies the path to the local directory from which you want to prefetch files. A list of all files in this directory and all its sub-directories is generated and queued for prefetch.
--dir-list-file DirListfile
Specifies the path to a file that contains a unique entry for each directory under the AFM fileset that needs to be prefetched. This option enables prefetching of individual directories under an AFM fileset. AFM generates a list of all files and sub-directories inside each listed directory and queues them for prefetch. The input file can also be a policy-generated file, in which case you must specify --policy.

You can specify either the --directory or the --dir-list-file option with mmafmctl prefetch.

The --policy option can be used only with --dir-list-file, not with --directory.

For example,

mmafmctl fs1 prefetch -j fileset1 --dir-list-file /tmp/file1 --policy --nosubdirs
--nosubdirs
This option restricts the recursive behavior of --directory and --dir-list-file so that prefetch stops at the given directory level; sub-directories under the given directory are not prefetched. This parameter is optional.

This option can only be used with --dir-list-file and --directory.

For example,

#mmafmctl fs1 prefetch -j fileset1 --directory  /gpfs/fs1/fileset1/dir1 --nosubdirs 

#mmafmctl fs1 prefetch -j fileset1 --dir-list-file  /tmp/file1 --policy --nosubdirs
--list-file ListFile
Specifies a file that contains the list of files to be prefetched, one file per line. All files must have fully qualified path names.

If the file names contain special characters, use a policy to generate the list file, and then edit it to remove all entries other than the file names.

An indicative list of files:
  • files with fully qualified names from cache
  • files with fully qualified names from home
  • list of files from home generated using policy. Do not edit.
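For example, a policy-generated list can be reduced to bare path names before being passed as ListFile. The column layout below (inode, generation, snapshot ID, '--', path) matches the sample policy output shown in the Examples section of this page; the file names themselves are illustrative.

```shell
# Keep only the path-name column (5th field) of a policy-generated list.
printf '%s\n' \
  '11012030 65537 0 -- /gpfs/fs1/sw1/dir/file1' \
  '11012032 65537 0 -- /gpfs/fs1/sw1/dir/file2' > /tmp/p1.res.list
awk '{print $5}' /tmp/p1.res.list > /tmp/listfile
cat /tmp/listfile
```

This mirrors the awk step used in Example 4 later in this page.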
--home-list-file HomeListFile
Contains a list of files from the home cluster that needs to be prefetched, one file per line. All files must have fully qualified path names. If the list of files has filenames with special characters, use a policy to generate the listfile. Edit to remove all entries other than the filenames.
This option is deprecated. Use --list-file instead.
--home-inode-file PolicyListFile
Contains a list of files from the home cluster that needs to be prefetched in the cache. Do not edit the file. The file is generated using policy.
This option is deprecated. Use --list-file instead.
--home-fs-path HomeFileSystemPath

Specifies the full path to the fileset at the home cluster and can be used in conjunction with ListFile.

You must use this option when, with the NSD protocol, the mount point of the afmTarget filesets on the gateway nodes does not match the mount point on the home cluster.

For example, mmafmctl gpfs1 prefetch -j cache1 --list-file /tmp/list.allfiles --home-fs-path /gpfs/remotefs1

In this example, the file system is mounted:
  • on the home cluster at /gpfs/homefs1
  • on the gateway nodes at /gpfs/remotefs1

If you run prefetch without providing any options, it displays statistics of the last prefetch command run on the fileset.

If you run the prefetch command with data or metadata options, statistics such as queued files, total files, failed files, and total data (in bytes) are displayed, as in the following example of the command and its output:

#mmafmctl <FileSystem> prefetch -j <fileset> --enable-failed-file-list --list-file /tmp/file-list

mmafmctl: Performing prefetching of fileset: <fileset>
Queued (Total) Failed TotalData (approx in Bytes)
0      (56324) 0      0
5      (56324) 2      1353559
56322  (56324) 2      14119335
6. This section describes:
mmafmctl Device evict -j FilesetName 
            [--safe-limit SafeLimit] [--order {LRU | SIZE}]
            [--log-file LogFile] [--filter Attribute=Value ...]
            [--list-file ListFile] [--file FilePath]

This option is applicable for RO/SW/IW/LU filesets. When cache space exceeds the allocated quota, data blocks from non-dirty files are automatically de-allocated by the eviction process. This option can be used to de-allocate specific files based on chosen criteria. All options can be combined with one another.

--safe-limit SafeLimit
This parameter is mandatory for manual eviction when the order or filter attributes are used. It specifies the target quota limit (used as the low water mark) for eviction, in bytes; the value must be less than the soft limit. The parameter can be used alone or combined with one of the following parameters (order or filter attributes).
--order LRU | SIZE
Specifies the order in which files are to be chosen for eviction:
LRU
Least recently used files are to be evicted first.
SIZE
Larger-sized files are to be evicted first.
--log-file LogFile
Specifies the file where the eviction log is to be stored. The default is that no logs are generated.
--filter Attribute=Value
Specifies attributes that enable you to control how data is evicted from the cache. Valid attributes are:
FILENAME=FileName
Specifies the name of a file to be evicted from the cache. This uses an SQL-type search query. If the same file name exists in more than one directory, it will evict all the files with that name. The complete path to the file should not be given here.
MINFILESIZE=Size
Sets the minimum size of a file to evict from the cache. This value is compared to the number of blocks allocated to a file (KB_ALLOCATED), which may differ slightly from the file size.
MAXFILESIZE=Size
Sets the maximum size of a file to evict from the cache. This value is compared to the number of blocks allocated to a file (KB_ALLOCATED), which may differ slightly from the file size.
--list-file ListFile
Contains a list of files that you want to evict, one file per line. All files must have fully qualified path names. Filesystem quotas need not be specified. If the list of files has filenames with special characters, use a policy to generate the listfile. Edit to remove all entries other than the filenames.
--file FilePath
The fully qualified name of the file that needs to be evicted. Filesystem quotas need not be specified.
Possible combinations of safelimit, order, and filter are:
  • only Safe limit
  • Safe limit + LRU
  • Safe limit + SIZE
  • Safe limit + FILENAME
  • Safe limit + MINFILESIZE
  • Safe limit + MAXFILESIZE
  • Safe limit + LRU + FILENAME
  • Safe limit + LRU + MINFILESIZE
  • Safe limit + LRU + MAXFILESIZE
  • Safe limit + SIZE + FILENAME
  • Safe limit + SIZE + MINFILESIZE
  • Safe limit + SIZE + MAXFILESIZE
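The two orderings can be pictured by sorting a toy candidate table the way --order would; this is an illustration only, not the GPFS eviction implementation, and the names, sizes, and access times are made up.

```shell
# Toy eviction-candidate table: name, size in bytes, last-access time.
cat > /tmp/evict.cand <<'EOF'
a 100 10
b 300 30
c 200 20
EOF
# LRU: least recently used (smallest atime) first -> a, c, b
sort -k3,3n /tmp/evict.cand | awk '{print $1}'
# SIZE: largest files first -> b, c, a
sort -k2,2nr /tmp/evict.cand | awk '{print $1}'
```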
7. This section describes:
mmafmctl Device failback -j FilesetName {{--start --failover-time Time} | --stop}
        [-s LocalWorkDirectory]

failback is applicable only for IW filesets.

failback --start --failover-time Time
Specifies the point in time at the home cluster from which the independent-writer cache taking over as writer should sync up. Time can be specified in the date command format, including time zones; by default, the cluster's time zone and the current year are used.
failback --stop
Run this option after the failback process completes and the fileset moves to the FailbackCompleted state. It moves the fileset to the Active state.
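Because Time is accepted in date command format, a suitable timestamp for --failover-time can be produced and checked with date itself. This sketch assumes GNU date, and the timestamp value is only an example.

```shell
# Render an example timestamp, with an explicit time zone, in a form
# the date command understands (GNU date assumed).
TZ=UTC date -d '2015-05-21 03:06:18 UTC' '+%a %b %e %T %Z %Y'
# prints: Thu May 21 03:06:18 UTC 2015
```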
8. This section describes:
mmafmctl Device failoverToSecondary -j FilesetName [--norestore |--restore ]

This is to be run on a secondary fileset.

When the primary experiences a disaster, all applications need to be moved to the secondary to ensure business continuity. The secondary must first be converted to an acting primary by using this option.

During the failover process, you can either restore the latest snapshot data on the secondary or leave the data as is by using the --norestore option. When this is complete, the secondary is ready to host applications.

--norestore
Specifies that restoring from the latest RPO snapshot is not required. This is the default setting.
--restore
Specifies that data must be restored from the latest RPO snapshot.
9. This section describes:

mmafmctl Device convertToPrimary -j FilesetName
         [ --afmtarget Target { --inband | --secondary-snapname SnapshotName }]
         [ --check-metadata | --nocheck-metadata ] [--rpo RPO] [-s LocalWorkDirectory]

This is to be run on a GPFS fileset or SW/IW fileset which is intended to be converted to primary.

--afmtarget Target
Specifies the secondary that needs to be configured for this primary. This parameter is not needed for AFM filesets because the target is already defined.
--inband
Used for inband trucking. Inband trucking copies data from the primary site to an empty secondary site during conversion of GPFS filesets to AFM DR primary filesets. If you have already copied data to the secondary site, AFM checks mtime of files at the primary and secondary site. Here, granularity of mtime is in microseconds. If mtime values of both files match, data is not copied again and existing data on the secondary site is used. If mtime values of both files do not match, existing data on the secondary site is discarded and data from the primary site is written to the secondary site.
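The mtime comparison described above can be mimicked at microsecond granularity with GNU stat; this is an illustration of the check, not the AFM implementation, and the file names are made up.

```shell
# Compare two files' mtimes truncated to microseconds (GNU stat assumed).
touch /tmp/truck.src
cp -p /tmp/truck.src /tmp/truck.dst      # -p preserves the timestamp
m1=$(stat -c '%.6Y' /tmp/truck.src)
m2=$(stat -c '%.6Y' /tmp/truck.dst)
if [ "$m1" = "$m2" ]; then
    echo "mtimes match: existing data on the secondary could be reused"
else
    echo "mtimes differ: data would be copied again"
fi
```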
--check-metadata
This is the default option. Checks if the disallowed types (like immutable/append-only files) are present in the GPFS fileset on the primary site before the conversion. Conversion with this option fails if such files exist.
For SW/IW filesets, the presence of orphans and incomplete directories is also checked. SW/IW filesets must have established contact with home at least once for this option to succeed.
--nocheck-metadata
Use this option to proceed with conversion without checking for append-only or immutable files.
--secondary-snapname SnapshotName
Used while establishing a new primary for an existing secondary or acting primary during failback.
--rpo RPO
Specifies the RPO interval in minutes for this primary fileset. Disabled by default.
10. This section describes:
mmafmctl Device convertToSecondary -j FilesetName --primaryid  PrimaryId [ --force ]

This is to be run on a GPFS fileset on the secondary site. This converts a GPFS independent fileset to a secondary and sets the primary ID.

--primaryid PrimaryId
Specifies the unique identifier of the AFM-DR primary fileset which needs to be set at AFM-DR Secondary fileset to initiate a relationship. You can obtain this fileset identifier by executing the mmlsfileset command using the --afm and -L options.
For example,
#mmlsfileset <FileSystem> <AFM DR Fileset> -L --afm |grep 'Primary Id'
--force
If convertToSecondary fails or is interrupted, it does not create the afmctl file at the secondary. Rerun the command with the --force option.
11. This section describes:

mmafmctl Device changeSecondary -j FilesetName 
--new-target NewAfmTarget [ --target-only |--inband ] 
         [-s LocalWorkDirectory]

This is to be run on a primary fileset only.

A disaster at the secondary site can make the secondary unavailable.

Run this command on the primary when the secondary fails and the primary needs to be connected to a new secondary. On the new secondary site, a new GPFS independent fileset must be created. Data on the primary can be copied to the newly created GPFS fileset by other means, such as ftp or scp, or you can decide that the data will be trucked through the relationship.

--new-target NewAfmTarget
Specifies the new secondary.
--inband
Used for inband trucking. Copies data to a new empty secondary. If you have already copied data to the secondary site, mtime of files at the primary and secondary site is checked. Here, granularity of mtime is in microseconds. If mtime values of both files match, data is not copied again and existing data on the secondary site is used. If mtime values of both files do not match, existing data on the secondary site is discarded and data from the primary site is written to the secondary site.
--target-only

Used when you want to change the IP address or NFS server name for the same target path. The new NFS server must be in the same home cluster and must be of the same architecture (Power or x86) as the existing NFS server in the target path. This option can be used to move from NFS to a mapping target.

12. This section describes:
mmafmctl Device replacePrimary -j FilesetName

This is used on an acting primary only. The command deletes any old RPO snapshots on the acting primary and creates a new initial RPO snapshot, psnap0.

This RPO snapshot is used in the setup of the new primary.

13. This section describes:
mmafmctl Device failbackToPrimary -j FilesetName {--start | --stop} [--force] 

This is to be run on an old primary that came back after the disaster, or on a new primary that is to be configured after an old primary went down with a disaster. The new primary should have been converted from GPFS to primary using convertToPrimary option.

--start
Restores the primary to the contents of the last RPO snapshot taken on the primary before the disaster. This option puts the primary in read-only mode to avoid accidental corruption until the failback process is completed. For a new primary that is set up by using convertToPrimary, failback --start makes no change.
--stop
Used to complete the Failback process. This will put the fileset in read-write mode. The primary is now ready for starting applications.
--force
Use this option if --start or --stop does not complete successfully because of errors and failbackToPrimary cannot be started or stopped again.
14. This section describes:
mmafmctl Device {applyUpdates |getPrimaryId } -j FilesetName 

Both options are intended for the primary fileset.

applyUpdates
Run this on the primary after running the failback --start command. All the differences can be brought over in one go or through multiple iterations. To minimize application downtime, this command can be run multiple times to bring the primary's contents in sync with the acting primary. When the remaining differences are minimal, applications should take a downtime and this command should be run one last time.

applyUpdates can fail with an error when the acting primary is overloaded. In such cases, run the command again.

getPrimaryId
Gets the primary ID of a primary fileset.

Exit status

0
Successful completion.
nonzero
A failure has occurred.

Security

You must have root authority to run the mmafmctl command.

The node on which the command is issued must be able to execute remote shell commands on any other node in the cluster without the use of a password and without producing any extraneous messages. For more information, see Requirements for administering a GPFS file system.

Examples

  1. Running resync on SW:
    
    # mmafmctl fs1 resync -j sw1
    mmafmctl: Performing resync of fileset: sw1
    
    # mmafmctl fs1 getstate -j sw1
    Fileset Name Fileset Target                       Cache State Gateway Node Queue Length Queue numExec
    ------------ --------------                       ----------- ------------ ------------ -------------
    sw1          nfs://c26c3apv2/gpfs/homefs1/newdir1 Dirty       c26c2apv1    4067         10844
    
  2. Expiring a RO fileset:
    
    # mmafmctl fs1 expire -j ro1
    
    # mmafmctl fs1 getstate -j ro1
    Fileset Name Fileset Target              Cache State Gateway Node Queue Length Queue numExec
    ------------ --------------              ----------- ------------ ------------ -------------
    ro1          gpfs:///gpfs/remotefs1/dir1 Expired     c26c4apv1    0            4
    
  3. Unexpiring a RO fileset:
    
    # mmafmctl fs1 unexpire -j ro1
    
    # mmafmctl fs1 getstate -j ro1
    Fileset Name Fileset Target              Cache State Gateway Node Queue Length Queue numExec
    ------------ --------------              ----------- ------------ ------------ -------------
    ro1          gpfs:///gpfs/remotefs1/dir1 Active      c26c4apv1    0            4
    
  4. Run flushPending on SW fileset:
    
    // Populate the fileset with data
    
    # mmafmctl fs1 getstate -j sw1
    Fileset Name Fileset Target              Cache State Gateway Node Queue Length Queue numExec
    ------------ --------------              ----------- ------------ ------------ -------------
    sw1          gpfs:///gpfs/remotefs1/dir1 Dirty       c26c2apv1    5671         293
    
    Get the list of files newly created using policy:
    RULE EXTERNAL LIST 'L' RULE 'List' LIST 'L' WHERE PATH_NAME LIKE '%'
    
    # mmapplypolicy /gpfs/fs1/sw1/migrateDir.popFSDir.22655 -P p1 -f p1.res -L 1 -N mount -I defer
    
    Policy created this file, this should be hand-edited to retain only the names:
    11012030 65537 0 -- /gpfs/fs1/sw1/migrateDir.popFSDir.22655/file_with_posix_acl1
    11012032 65537 0 -- /gpfs/fs1/sw1/migrateDir.popFSDir.22655/populateFS.log
    11012033 65537 0 --
    /gpfs/fs1/sw1/migrateDir.popFSDir.22655/sparse_file_0_with_0_levels_indirection
    
    # cat p1.res.list | awk '{print $5}' > /lfile
    
    # mmafmctl fs1 flushPending -j sw1 --list-file=/lfile
    
  5. Failover of SW to a new home:
    
    # mmafmctl fs1 getstate -j sw1
    Fileset Name Fileset Target              Cache State Gateway Node Queue Length Queue numExec
    ------------ --------------              ----------- ------------ ------------ -------------
    sw1          gpfs:///gpfs/remotefs1/dir1 Dirty       c26c2apv1    785          5179
    
    At home -
    
    # mmcrfileset homefs1 newdir1 --inode-space=new
    Fileset newdir1 created with id 219 root inode 52953091.
    
    # mmlinkfileset homefs1 newdir1 -J /gpfs/homefs1/newdir1
    Fileset newdir1 linked at /gpfs/homefs1/newdir1
    
    # mmafmconfig enable /gpfs/homefs1/newdir1 
    
    At cache -
    
    # mmafmctl fs1 failover -j sw1 --new-target=c26c3apv1:/gpfs/homefs1/newdir1
    mmafmctl: Performing failover to nfs://c26c3apv1/gpfs/homefs1/newdir1
    Fileset sw1 changed.
    mmafmctl: Failover in progress. This may take while...
     Check fileset state or register for callback to know the completion status.
    
    
    Callback registered, logged into mmfs.log:
    Thu May 21 03:06:18.303 2015: [I] Calling User Exit Script callback7: event
    afmManualResyncComplete, Async command recovery.sh
    
    
    # mmafmctl fs1 getstate -j sw1
    Fileset Name Fileset Target                       Cache State Gateway Node Queue Length Queue numExec
    ------------ --------------                       ----------- ------------ ------------ -------------
    sw1          nfs://c26c3apv1/gpfs/homefs1/newdir1 Active      c26c2apv1    0            5250
    
  6. Changing target of SW fileset:
    
    Changing to another NFS server in the same home cluster using the --target-only option:
    
    # mmafmctl fs1 failover -j sw1 --new-target=c26c3apv2:/gpfs/homefs1/newdir1 --target-only
    mmafmctl: Performing failover to nfs://c26c3apv2/gpfs/homefs1/newdir1
    Fileset sw1 changed.
    
    
    # mmafmctl fs1 getstate -j sw1
    Fileset Name Fileset Target                       Cache State Gateway Node Queue Length Queue numExec
    ------------ --------------                       ----------- ------------ ------------ -------------
    sw1          nfs://c26c3apv2/gpfs/homefs1/newdir1 Active      c26c2apv1    0            5005
    
  7. Metadata population using prefetch:
    
    # mmafmctl fs1 getstate -j ro
    Fileset Name Fileset Target                    Cache State Gateway Node Queue Length Queue numExec
    ------------ --------------                    ----------- ------------ ------------ -------------
    ro           nfs://c26c3apv1/gpfs/homefs1/dir3 Active      c26c2apv2    0            7
    
    
    List Policy:
    RULE EXTERNAL LIST 'List' RULE 'List' LIST 'List' WHERE PATH_NAME LIKE '%'
     
    Run the policy at home:
    mmapplypolicy /gpfs/homefs1/dir3 -P px -f px.res -L 1 -N mount -I defer
    
    The policy creates this file, which must be hand-edited to retain only the file names.
    
    The edited file can then be used at the cache to populate metadata.
    
    
    
    # mmafmctl fs1 prefetch -j ro --metadata-only --list-file=px.res.list.List
    mmafmctl: Performing prefetching of fileset: ro
         Queued    (Total)      Failed               TotalData
                                                     (approx in Bytes) 
              0    (2)          0                    0
            100    (116)        5                    1368093971
            116    (116)        5                    1368093971
    
    prefetch successfully queued at the gateway
    
    The end of the prefetch can be monitored using this event:
    Thu May 21 06:49:34.748 2015: [I] Calling User Exit Script prepop: event afmPrepopEnd,
    Async command prepop.sh.
    
     
    The statistics of the last prefetch command can be viewed with:
    
    # mmafmctl fs1 prefetch -j ro
    Fileset Name Async Read (Pending) Async Read (Failed) Async Read (Already Cached) Async Read (Total) Async Read (Data in Bytes)
    ------------ -------------------- ------------------  --------------------------- ------------------ --------------------------
    ro           0                    1                   0                            7                 0
    
  8. Prefetch of data using --home-list-file option:
    
    # cat /lfile1
    /gpfs/homefs1/dir3/file1
    /gpfs/homefs1/dir3/dir1/file1
    
    # mmafmctl fs1 prefetch -j ro --home-list-file=/lfile1
    mmafmctl: Performing prefetching of fileset: ro
    Queued    (Total)      Failed          TotalData
                                           (approx in Bytes) 
     0         (2)         0               0
     100       (116)       5               1368093971
     116       (116)       5               1368093971
    
    prefetch successfully queued at the gateway
    
    
    
    # mmafmctl fs1 prefetch -j ro
    Fileset Name Async Read (Pending) Async Read (Failed) Async Read (Already Cached) Async Read (Total) Async Read (Data in Bytes)
    ------------ -------------------- ------------------  --------------------------- ------------------ --------------------------
    ro           0                    0                   0                           2                  122880
    
  9. Prefetch of data using --home-inode-file option:
    
    The inode file is created using the above policy at home and should be used as is, without
    hand-editing.
    
    List Policy:
    RULE EXTERNAL LIST 'List' RULE 'List' LIST 'List' WHERE PATH_NAME LIKE '%'
    
    Run the policy at home:
    # mmapplypolicy /gpfs/homefs1/dir3 -P px -f px.res -L 1 -N mount -I defer
    
    # cat /lfile2
    113289 65538 0 -- /gpfs/homefs1/dir3/file2
    113292 65538 0 -- /gpfs/homefs1/dir3/dir1/file2
    
    # mmafmctl fs1 prefetch -j ro2 --home-inode-file=/lfile2
    mmafmctl: Performing prefetching of fileset: ro2
         Queued    (Total)      Failed         TotalData
                                               (approx in Bytes) 
         0         (2)          0               0
         2         (2)          0               122880
    
    prefetch successfully queued at the gateway
    
    
    # mmafmctl fs1 prefetch -j ro
    Fileset Name Async Read (Pending) Async Read (Failed) Async Read (Already Cached) Async Read (Total) Async Read (Data in Bytes)
    ------------ -------------------- ------------------  --------------------------- ------------------ --------------------------
    ro           0                    0                   2                           2                  0
    
  10. Using --home-fs-path option for a target with NSD protocol:
    
    # mmafmctl fs1 getstate -j ro2 
    Fileset Name Fileset Target              Cache State Gateway Node Queue Length Queue numExec
    ------------ --------------              ----------- ------------ ------------ -------------
    ro2          gpfs:///gpfs/remotefs1/dir3 Active      c26c4apv1    0            7
    
    
    # cat /lfile2
    113289 65538 0 -- /gpfs/homefs1/dir3/file2
    113292 65538 0 -- /gpfs/homefs1/dir3/dir1/file2
    
    # mmafmctl fs1 prefetch -j ro2 --home-inode-file=/lfile2 --home-fs-path=/gpfs/homefs1/dir3
    mmafmctl: Performing prefetching of fileset: ro2
    Queued    (Total)      Failed          TotalData
                                           (approx in Bytes) 
    0         (2)          0               0
    2         (2)          0               122880
    
    prefetch successfully queued at the gateway
    
    
    
    
    # mmafmctl fs1 prefetch -j ro2
    Fileset Name Async Read (Pending) Async Read (Failed) Async Read (Already Cached) Async Read (Total) Async Read (Data in Bytes)
    ------------ -------------------- ------------------  --------------------------- ------------------ --------------------------
    ro2          0                    0                   0                           2                  122880
    
  11. Manual eviction using the --safe-limit and --filter parameters:
    
    # ls -lis /gpfs/fs1/ro2/file10M_1
    12605961 10240 -rw-r--r-- 1 root root 10485760 May 21 07:44 /gpfs/fs1/ro2/file10M_1
    
    # mmafmctl fs1 evict -j ro2 --safe-limit=1 --filter FILENAME=file10M_1
    
    # ls -lis /gpfs/fs1/ro2/file10M_1
    12605961 0 -rw-r--r-- 1 root root 10485760 May 21 07:44 /gpfs/fs1/ro2/file10M_1
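    As the ls -lis output shows, eviction frees the file's data blocks (the allocated-blocks column drops from 10240 to 0) while the reported file size is unchanged, so the evicted file behaves like a sparse file until it is read again. A local sketch of the same sparse-file effect, with no AFM involved (sparse_demo is a hypothetical file name):

    ```shell
    # Sketch: an evicted file looks sparse -- full reported size, zero
    # allocated data blocks. Reproduce the effect locally with truncate,
    # which extends the file with a hole rather than written data.
    truncate -s 10M sparse_demo
    # %s = size in bytes, %b = allocated 512-byte blocks
    stat -c '%s %b' sparse_demo
    ```

    The size reads back as 10485760 bytes while the allocated block count stays at (or near) zero, mirroring the before/after ls -lis output above.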
    
  12. IW Failback:
    
    # mmafmctl fs1 getstate -j iw1
    Fileset Name Fileset Target                    Cache State Gateway Node Queue Length Queue numExec
    ------------ --------------                    ----------- ------------ ------------ -------------
    iw1          nfs://c26c3apv1/gpfs/homefs1/dir3 Active      c25m4n03     0            8
     
    
    # touch file3 file4
    
    # mmafmctl fs1 getstate -j iw1
    Fileset Name Fileset Target                    Cache State Gateway Node Queue Length Queue numExec
    ------------ --------------                    ----------- ------------ ------------ -------------
    iw1          nfs://c26c3apv1/gpfs/homefs1/dir3 Dirty       c25m4n03     2            11
    
     
    Unlink the IW fileset to simulate a failure:
    # mmunlinkfileset fs1 iw1 -f
    Fileset iw1 unlinked.
    
    Write from home, assuming applications have failed over to home:
    Thu May 21 08:20:41 dir3# touch file5 file6
    Relink IW back on the cache cluster, assuming it came back up:
    # mmlinkfileset fs1 iw1 -J /gpfs/fs1/iw1
    Fileset iw1 linked at /gpfs/fs1/iw1
    
    Run failback on IW:
    # mmafmctl fs1 failback -j iw1 --start --failover-time='May 21 08:20:41'
    
    # mmafmctl fs1 getstate -j iw1
    Fileset Name Fileset Target                    Cache State    Gateway Node Queue Length Queue numExec
    ------------ --------------                    -----------    ------------ ------------ -------------
    iw1          nfs://c26c3apv1/gpfs/homefs1/dir3 FailbackInProg c25m4n03     0            0
    
    # mmafmctl fs1 failback -j iw1 --stop
    
    # mmafmctl fs1 getstate -j iw1
    Fileset Name Fileset Target                    Cache State Gateway Node Queue Length Queue numExec
    ------------ --------------                    ----------- ------------ ------------ -------------
    iw1          nfs://c26c3apv1/gpfs/homefs1/dir3 Active      c25m4n03     0            3
    
  13. Manual evict using the --list-file option:
    # ls -lshi /gpfs/fs1/evictCache
    total 6.0M
    27858308 1.0M -rw-r--r--. 1 root root 1.0M Feb  5 02:07 file1M
    27858307 2.0M -rw-r--r--. 1 root root 2.0M Feb  5 02:07 file2M
    27858306 3.0M -rw-r--r--. 1 root root 3.0M Feb  5 02:07 file3M
    
    # echo "RULE EXTERNAL LIST 'HomePREPDAEMON' RULE 'ListLargeFiles'
     LIST 'HomePREPDAEMON' WHERE PATH_NAME LIKE '%'" > /tmp/evictionPolicy.pol
    
    
    # mmapplypolicy /gpfs/fs1/evictCache -I defer -P /tmp/evictionPolicy.pol \
      -f /tmp/evictionList
    
    Edited list of files to be evicted:
    # cat /tmp/evictionList.list.HomePREPDAEMON
    27858306 605742886 0   -- /gpfs/fs1/evictCache/file3M
    
    # mmafmctl fs1 evict -j evictCache --list-file /tmp/evictionList.list.HomePREPDAEMON
    
    # ls -lshi /gpfs/fs1/evictCache
    total 3.0M
    27858308 1.0M -rw-r--r--. 1 root root 1.0M Feb  5 02:07 file1M
    27858307 2.0M -rw-r--r--. 1 root root 2.0M Feb  5 02:07 file2M
    27858306    0 -rw-r--r--. 1 root root 3.0M Feb  5 02:07 file3M
  14. Manual evict using the --file option:
    # ls -lshi /gpfs/fs1/evictCache
    total 3.0M
    27858308 1.0M -rw-r--r--. 1 root root 1.0M Feb  5 02:07 file1M
    27858307 2.0M -rw-r--r--. 1 root root 2.0M Feb  5 02:07 file2M
    27858306    0 -rw-r--r--. 1 root root 3.0M Feb  5 02:07 file3M
    
    # mmafmctl fs1 evict -j evictCache  --file /gpfs/fs1/evictCache/file1M
    
    # ls -lshi /gpfs/fs1/evictCache
    total 0
    27858308 0 -rw-r--r--. 1 root root 1.0M Feb  5 02:07 file1M
    27858307 0 -rw-r--r--. 1 root root 2.0M Feb  5 02:07 file2M
    27858306 0 -rw-r--r--. 1 root root 3.0M Feb  5 02:07 file3M

See also

Location

/usr/lpp/mmfs/bin