mmafmctl command

This command performs various operations on, and reports information about, AFM filesets. Read the AFM and AFM Disaster Recovery chapters of the IBM Spectrum Scale: Advanced Administration Guide in conjunction with this manual for a detailed description of the functions.

Synopsis

To use the AFM Disaster Recovery functions correctly, it is strongly recommended to use all commands listed in this chapter in accordance with the steps described in the AFM Disaster Recovery chapter of the IBM Spectrum Scale: Advanced Administration Guide.

In this manual, AFM read-only mode is referred to as RO, single-writer mode as SW, independent-writer mode as IW, and local-update mode as LU.

mmafmctl Device {resync | expire | unexpire} -j FilesetName 

or

mmafmctl Device {getstate | resumeRequeued} [-j FilesetName]  

or

mmafmctl Device flushPending [-j FilesetName [--list-file ListFile]]
             [-s LocalWorkDirectory]  

or

mmafmctl Device failover -j FilesetName
             --new-target NewAfmTarget [--target-only] [-s LocalWorkDirectory]  

or

mmafmctl Device prefetch -j FilesetName [--metadata-only]
             [{--list-file ListFile} | 
              {--home-list-file HomeListFile} | 
              {--home-inode-file PolicyListFile}]
             [--home-fs-path HomeFileSystemPath]
             [-s LocalWorkDirectory]             

or

mmafmctl Device evict -j FilesetName 
            [--safe-limit SafeLimit] [--order {LRU | SIZE}]
            [--log-file LogFile] [--filter Attribute=Value ...]
             

or

mmafmctl Device failback -j FilesetName {{--start --failover-time Time} | --stop}
        [-s LocalWorkDirectory]

or

mmafmctl Device failoverToSecondary -j FilesetName [--norestore | --restore]

or

mmafmctl Device convertToPrimary -j FilesetName
         [ --afmtarget Target { --inband | --outband | --secondary-snapname SnapshotName }]
         [ --check-metadata | --nocheck-metadata ] [--rpo RPO] [-s LocalWorkDirectory]

or

mmafmctl Device convertToSecondary -j FilesetName --primaryid PrimaryId [--force]

or

mmafmctl Device changeSecondary -j FilesetName 
--new-target NewAfmTarget [--target-only | --inband | --outband] 
         [-s LocalWorkDirectory]

or

mmafmctl Device replacePrimary -j FilesetName

or

mmafmctl Device failbackToPrimary -j FilesetName {--start | --stop [ --force ] }

or

mmafmctl Device {applyUpdates | getPrimaryId} -j FilesetName 

Availability

Available with IBM Spectrum Scale™ Standard Edition or higher. Available on AIX® and Linux.

Description

The usage of the options of this command for different operations on AFM (RO/SW/IW/LU) filesets and AFM primary/secondary filesets is explained with examples.

The file system must be mounted on all gateway nodes for mmafmctl functions to work.

Parameters

Device
Specifies the device name of the file system.
-j FilesetName
Specifies the fileset name.
-s LocalWorkDirectory
Specifies the temporary working directory.
1. This section describes:
mmafmctl Device {resync | expire | unexpire} -j FilesetName 
resync
This option is available only for SW cache. If inadvertent changes are made at home of an SW fileset, such as deletion of a file or changes to file data, the administrator can correct the home by using this option to send all contents from the cache to home. A limitation of this option is that files renamed at home may not be fixed by resync. Using resync requires the cache to be in either the NeedsResync or Active state.
expire | unexpire
This option is available only for RO cache. When an RO cache is disconnected, the cached contents remain accessible to the user. However, the administrator can define a timeout from home beyond which access to the cached contents becomes stale. When that timeout elapses after disconnection, the cached contents are no longer accessible; this event is called expiration, and the cache is said to be expired. The expired state can also be forced manually using the expire parameter.

When the home comes back or reconnects, the cache contents automatically become accessible again, and the cache is said to un-expire. This can be forced manually using the unexpire parameter.

Manual expiration and un-expiration can be forced on a cache even when the home is in a connected state. To expire a fileset manually, afmExpirationTimeout must be set on the fileset. If a cache is expired manually, it must also be un-expired manually.

2. This section describes:
mmafmctl Device {getstate | resumeRequeued} [-j FilesetName]  
getstate
This option is applicable for all AFM (RO/SW/IW/LU) and AFM primary filesets. It displays the status of the fileset in the following fields:
Fileset Name
The name of the fileset.
Fileset Target
The host server and the exported path on it.
Gateway Node
The gateway node that acts as the metadata server (MDS) of the fileset and handles requests for it.
Queue Length
Current length of the queue on the MDS.
Queue numExec
Number of operations played at home since the fileset was last Active.
Cache State
  • Cache states applicable for all AFM RO/SW/IW/LU filesets:

    Active, Inactive, Dirty, Disconnected, Unmounted

  • Cache states applicable for RO filesets:

    Expired

  • Cache states applicable for SW and IW filesets:

    Recovery, FlushOnly, QueueOnly, Dropped, NeedsResync, FailoverInProgress

  • Cache states applicable for IW filesets:

    FailbackInProgress, FailbackCompleted, NeedsFailback

  • Cache states applicable for AFM primary filesets:

    PrimInitInProg, PrimInitFail, Active, Inactive, Dirty, Disconnected, Unmounted, FailbackInProg, Recovery, FlushOnly, QueueOnly, Dropped, NeedsResync

All cache states are explained in the AFM and AFM Disaster Recovery chapters of the IBM Spectrum Scale: Advanced Administration Guide. Refer to them for more information.
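Because getstate prints a whitespace-separated table, its fields are easy to extract with standard tools. The sketch below works on a hypothetical output line in the documented field order; against a live cluster, the same awk filter could be applied to the last line of `mmafmctl Device getstate -j FilesetName`.

```shell
# Hypothetical getstate output line, in the documented field order:
# Fileset Name, Fileset Target, Cache State, Gateway Node, Queue Length, Queue numExec
sample='sw1 nfs://c26c3apv2/gpfs/homefs1/newdir1 Dirty c26c2apv1 4067 10844'

# Extract Cache State (field 3) and Queue Length (field 5).
state=$(echo "$sample" | awk '{print $3}')
qlen=$(echo "$sample" | awk '{print $5}')
echo "state=$state qlen=$qlen"
```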

resumeRequeued
This option is applicable for SW/IW and primary filesets. If operations in the queue were requeued due to errors at home, the administrator should correct those errors and then run this option to retry the requeued operations.
3. This section describes:
mmafmctl Device flushPending [-j FilesetName [--list-file ListFile]]
             [-s LocalWorkDirectory]  
flushPending
Flushes all point-in-time pending messages in the normal queue on the fileset to home. Requeued messages and messages in the priority queue for the fileset are not flushed by this command.

When --list-file ListFile is specified, the messages pending on the files listed in the list file are flushed to home.

4. This section describes:
mmafmctl Device failover -j FilesetName
             --new-target NewAfmTarget [--target-only] [-s LocalWorkDirectory]  

This option is applicable only for SW/IW filesets. It pushes all data from the cache to home. Use it only if home is completely lost due to a disaster and a new home is being set up. Failover often takes a long time to complete; check its status using the afmManualResyncComplete callback or the mmafmctl getstate command.

--new-target NewAfmTarget
Specifies a new home server and path, replacing the home server and path originally set by the afmTarget parameter of the mmcrfileset command. Specified in either of the following formats:
Protocol://[Host|Map]/Path
or
{Host|Map}:Path
where:
Protocol://
Specifies the transport protocol. Valid values are nfs:// or gpfs://.
Host|Map
Host
Specifies the server domain name system (DNS) name or IP address.
Map
Specifies the export map name.
Notes:
  1. When specifying nfs:// as the value for Protocol://, you must provide a value for Host or Map.
  2. When specifying gpfs:// as the value for Protocol://, do not provide a value for Host. However, provide a value for Map if it refers to an export map entry.
Path
Specifies the export path.
It is possible to change the protocol along with the target using failover. For example, a cache using an NFS target bear110:/gpfs/gpfsA/home can be switched to a GPFS™ target whose remote file system is mounted at /gpfs/fs1, and vice-versa, as follows:
mmafmctl fs0 failover -j afm-mc1 --new-target gpfs:///gpfs/fs1
mmafmctl fs0 failover -j afm-mc1 --new-target nfs://bear110/gpfs/gpfsA/home
Note that in the first command, /// is needed because Host is not provided.
--target-only
Use this option to change only the mount path or IP address in the target path. The new NFS server must be in the same home cluster and of the same architecture as the existing NFS server in the target path. Do not use this option to change the target location or protocol.
5. This section describes:
mmafmctl Device prefetch -j FilesetName [--metadata-only]
             [{--list-file ListFile} | 
              {--home-list-file HomeListFile} | 
              {--home-inode-file PolicyListFile}]
             [--home-fs-path HomeFileSystemPath]
             [-s LocalWorkDirectory]

This option fetches file contents from home before an application requests them, which reduces network delay while the application runs. You can also use it to move files over the WAN when WAN usage is low; for example, files that would otherwise be fetched during high WAN usage. Thus, this option enables better WAN management.

You can use the prefetch option to -
  • populate data
  • populate metadata
  • view prefetch statistics

If you run prefetch without providing any options, it displays statistics of the last prefetch command run on the fileset.

Prefetch completion can be monitored using the afmPrepopEnd event.

--metadata-only
Prefetches only the metadata, not the actual data. This is useful in migration scenarios. The option requires the list of files whose metadata is wanted, so it must be combined with a list-file option.
--list-file ListFile
Specifies a file that contains the list of files to be prefetched, one file per line. All files must have fully qualified path names.

If any of the file names to be prefetched contain special characters, use a policy to generate the list file, and then remove all entries other than the file names.

An indicative list file contains one of the following:
  • files with fully qualified names from the cache
  • files with fully qualified names from home
  • a list of files from home generated using policy (do not edit)
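For illustration, a policy-generated list (compare Example 4) prefixes each path with inode and generation numbers. A short awk filter, shown here under the assumption that no path names contain whitespace, reduces it to the bare file names that --list-file expects:

```shell
# Sample lines in the format produced by an mmapplypolicy list rule:
#   <inode> <gen> <snapid> -- <path>
cat > /tmp/p1.res.list <<'EOF'
11012030 65537 0 -- /gpfs/fs1/sw1/file1
11012032 65537 0 -- /gpfs/fs1/sw1/file2
EOF

# Keep only field 5 (the path) for use with --list-file.
# Note: this breaks if a path name contains whitespace.
awk '{print $5}' /tmp/p1.res.list > /tmp/lfile
cat /tmp/lfile
```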
--home-list-file HomeListFile
Contains a list of files from the home cluster that need to be prefetched, one file per line. All files must have fully qualified path names. If the file names contain special characters, use a policy to generate the list file, and then edit it to remove all entries other than the file names.
This option is deprecated. Use --list-file instead.
--home-inode-file PolicyListFile
Contains a list of files from the home cluster that need to be prefetched into the cache. Do not edit the file; it is generated using policy.
This option is deprecated. Use --list-file instead.
--home-fs-path HomeFileSystemPath

Specifies the full path to the fileset at the home cluster and can be used in conjunction with ListFile.

You must use this option when the mount point of the afmTarget filesets on the gateway nodes does not match the mount point on the GPFS home cluster.

For example: mmafmctl gpfs1 prefetch -j cache1 --list-file /tmp/list.allfiles --home-fs-path /gpfs/remotefs1

In this example, the file system is mounted on the:
  • home cluster at /gpfs/homefs1
  • gateway nodes at /gpfs/remotefs1

Prefetch is an asynchronous process, and the fileset can be used while prefetch is in progress. Monitor prefetch using the afmPrepopEnd event. AFM prefetches data using the mmafmctl prefetch command, which specifies a list of files to prefetch. Prefetch always pulls the complete file contents from home, and AFM automatically marks a file as cached when it has been completely prefetched.

6. This section describes:
mmafmctl Device evict -j FilesetName 
            [--safe-limit SafeLimit] [--order {LRU | SIZE}]
            [--log-file LogFile] [--filter Attribute=Value ...]
             

This option is applicable for RO/SW/IW/LU filesets. When cache space exceeds the allocated quota, data blocks from non-dirty files are automatically deallocated by the eviction process. This option can be used to deallocate a specific file based on chosen criteria. All options can be combined with each other.

--safe-limit SafeLimit
A mandatory parameter for manual eviction, including when the --order or --filter attributes are used. Specifies the target quota limit, in bytes, which is used as the low water mark for eviction; the value must be less than the soft limit. This parameter can be used alone or combined with one of the following parameters (the --order or --filter attributes).
--order LRU | SIZE
Specifies the order in which files are to be chosen for eviction:
LRU
Least recently used files are to be evicted first.
SIZE
Larger-sized files are to be evicted first.
--log-file LogFile
Specifies the file where the eviction log is to be stored. The default is that no logs are generated.
--filter Attribute=Value
Specifies attributes that enable you to control how data is evicted from the cache. Valid attributes are:
FILENAME=FileName
Specifies the name of a file to be evicted from the cache. This uses an SQL-type search query. If the same file name exists in more than one directory, it will evict all the files with that name. The complete path to the file should not be given here.
MINFILESIZE=Size
Sets the minimum size of a file to evict from the cache. This value is compared to the number of blocks allocated to a file (KB_ALLOCATED), which may differ slightly from the file size.
MAXFILESIZE=Size
Sets the maximum size of a file to evict from the cache. This value is compared to the number of blocks allocated to a file (KB_ALLOCATED), which may differ slightly from the file size.
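Because MINFILESIZE and MAXFILESIZE compare against allocated blocks rather than the apparent file size, the two values can differ, most visibly for sparse files. A small local demonstration using ordinary Linux tools (no AFM involved):

```shell
# Create a 1 MiB sparse file: large apparent size, no allocated data blocks.
truncate -s 1M /tmp/sparse_demo

size=$(stat -c %s /tmp/sparse_demo)                    # apparent size in bytes
alloc_kb=$(du -k /tmp/sparse_demo | awk '{print $1}')  # allocated KiB, akin to KB_ALLOCATED
echo "size=$size alloc_kb=$alloc_kb"
rm -f /tmp/sparse_demo
```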
Possible combinations of --safe-limit, --order, and --filter are:
  • only Safe limit
  • Safe limit + LRU
  • Safe limit + SIZE
  • Safe limit + FILENAME
  • Safe limit + MINFILESIZE
  • Safe limit + MAXFILESIZE
  • Safe limit + LRU + FILENAME
  • Safe limit + LRU + MINFILESIZE
  • Safe limit + LRU + MAXFILESIZE
  • Safe limit + SIZE + FILENAME
  • Safe limit + SIZE + MINFILESIZE
  • Safe limit + SIZE + MAXFILESIZE
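Because --safe-limit takes a raw byte count, a target such as 1 GiB must be converted first; shell arithmetic is enough. The evict invocation is shown only as a comment, since it requires a live fileset, and the device and fileset names in it are hypothetical:

```shell
# Compute 1 GiB in bytes for use as a --safe-limit value.
safe_limit=$(( 1024 * 1024 * 1024 ))
echo "$safe_limit"    # 1073741824

# Hypothetical invocation combining a safe limit with LRU ordering:
# mmafmctl fs1 evict -j ro2 --safe-limit=$safe_limit --order LRU
```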
7. This section describes:
mmafmctl Device failback -j FilesetName {{--start --failover-time Time} | --stop}
        [-s LocalWorkDirectory]

failback is applicable only for IW filesets.

failback --start --failover-time Time
Specifies the point in time at the home cluster from which the independent-writer cache taking over as writer should sync up. Time can be specified in date command format, including time zones. The cluster's time zone and current year are used by default.
failback --stop
Run this option after the failback process completes and the fileset moves to the FailbackCompleted state. It moves the fileset to the Active state.
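Since Time is given in date command format with the cluster's time zone and year assumed, GNU date (assumed available) can be used to build or check such a timestamp before passing it to --failover-time. The failback invocation is shown only as a comment, with hypothetical device and fileset names:

```shell
# Build a timestamp in the format used in Example 12 ('May 21 08:20:41')
# from an unambiguous ISO form; LC_ALL=C keeps English month names.
failover_time=$(LC_ALL=C date -u -d '2015-05-21 08:20:41' '+%b %d %T')
echo "$failover_time"

# Hypothetical use:
# mmafmctl fs1 failback -j iw1 --start --failover-time="$failover_time"
```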
8. This section describes:
mmafmctl Device failoverToSecondary -j FilesetName [--norestore | --restore]

This is to be run on a secondary fileset.

When the primary experiences a disaster, all applications must be moved to the secondary to ensure business continuity. The secondary must first be converted to an acting primary using this option.

During the failover process, you can either restore the latest snapshot data on the secondary or leave the data as is by using the --norestore option. Once this is complete, the secondary is ready to host applications.

--norestore
Specifies that restoring from the latest peer snapshot is not required.
--restore
Specifies that the restoring of data is to be done from the latest peer snapshot. This is the default.
9. This section describes:
mmafmctl Device convertToPrimary -j FilesetName
         [ --afmtarget Target { --inband | --outband | --secondary-snapname SnapshotName }]
         [ --check-metadata | --nocheck-metadata ] [--rpo RPO] [-s LocalWorkDirectory]

This is to be run on a GPFS fileset or an SW/IW fileset that is intended to be converted to a primary.

--afmtarget Target
Specifies the secondary that is to be configured for this primary. This is not needed for AFM filesets because the target is already defined.
--inband
Used for inband trucking. Inband trucking copies the data while the primary/secondary relationship is being set up from GPFS filesets, where the primary site has contents and the secondary site is empty.
--outband
Used for outband trucking. Outband trucking copies the data manually by other means, such as ftp, scp, or rsync. The copy must be completed before the relationship is established.
--check-metadata
Checks whether disallowed types (such as immutable or append-only files) are present in the GPFS fileset on the primary site before the conversion. Conversion with this option fails if such files exist. For SW/IW filesets, the presence of orphans and incomplete directories is also checked. SW/IW filesets must have established contact with home at least once for this option to succeed.
--nocheck-metadata
Used to proceed with the conversion without checking for append-only or immutable files.
--secondary-snapname SnapshotName
Used while establishing a new primary for an existing secondary or acting primary during failback.
--rpo RPO
Specifies the RPO interval in minutes for this primary fileset.
10. This section describes:
mmafmctl Device convertToSecondary -j FilesetName --primaryid PrimaryId [--force]

This is to be run on a GPFS fileset on the secondary site. This converts a GPFS independent fileset to a secondary and sets the primary ID.

--primaryid PrimaryId
Specifies the ID of the primary with which the secondary will be associated.
--force
If convertToSecondary failed or was interrupted, it does not create the afmctl file at the secondary. In that case, rerun the command with the --force option.
11. This section describes:
mmafmctl Device changeSecondary -j FilesetName 
--new-target NewAfmTarget [--target-only | --inband | --outband] 
         [-s LocalWorkDirectory]

This is to be run on a primary fileset only.

Run this command on the primary when its secondary becomes unavailable because of a disaster and the primary needs to be connected to a new secondary. On the new secondary site, a new GPFS independent fileset must be created. Data on the primary can be copied to this newly created GPFS fileset by other means, such as ftp or scp. Alternatively, you can decide to truck the data using the relationship.

--new-target NewAfmTarget
Specifies the new secondary.
--inband | --outband
Specifies the method used to truck the data.
--target-only
Used when you want to change the IP address or NFS server name for the same target path. The new NFS server must be in the same home cluster and of the same architecture (Power or x86) as the existing NFS server in the target path. This option can be used to move from NFS to a mapping target.

12. This section describes:
mmafmctl Device replacePrimary -j FilesetName

This is to be run on an acting primary only. It creates a new latest snapshot of the acting primary, deleting any old peer snapshots and creating a new initial peer snapshot, psnap0.

This snapshot will be used in the setup of the new primary.

13. This section describes:
mmafmctl Device failbackToPrimary -j FilesetName {--start | --stop [ --force ] }

This is to be run on an old primary that comes back after a disaster, or on a new primary that is configured after the old primary was lost in a disaster. The new primary must first have been converted from a GPFS fileset to a primary using the convertToPrimary option.

--start
Restores the primary to its contents from the last RPO taken on the primary before the disaster. This option puts the primary in read-only mode to avoid accidental corruption until the failback process completes. For a new primary set up using convertToPrimary, failback --start makes no change.
--stop
Completes the failback process and puts the fileset in read-write mode. The primary is then ready to host applications.
--force
Used if the --stop option does not complete due to errors and failback cannot otherwise be stopped.
14. This section describes:
mmafmctl Device {applyUpdates | getPrimaryId} -j FilesetName 

Both options are intended for the primary fileset.

applyUpdates
Run this on the primary after running the failback --start command. All differences can be brought over in one run or through multiple runs. To minimize application downtime, this command can be run multiple times to bring the primary's contents in sync with the acting primary. When the difference is as small as possible, applications should take a downtime and this command should be run one last time.

applyUpdates might fail with an error when the acting primary is overloaded. In such cases, run the command again.

getPrimaryId
Used to get the primary ID of a primary fileset.

Exit status

0
Successful completion.
nonzero
A failure has occurred.

Security

You must have root authority to run the mmafmctl command.

The node on which the command is issued must be able to execute remote shell commands on any other node in the cluster without the use of a password and without producing any extraneous messages. See the following IBM Spectrum Scale: Administration and Programming Reference topic: Requirements for administering a GPFS file system.

Examples

  1. Running resync on an SW fileset:
    # mmafmctl fs1 resync -j sw1
    mmafmctl: Performing resync of fileset: sw1
    
    # mmafmctl fs1 getstate -j sw1
    Fileset Name Fileset Target                       Cache State Gateway Node Queue Length Queue numExec
    ------------ --------------                       ----------- ------------ ------------ -------------
    sw1          nfs://c26c3apv2/gpfs/homefs1/newdir1 Dirty       c26c2apv1    4067         10844
  2. Expiring an RO fileset:
    # mmafmctl fs1 expire -j ro1
    
    # mmafmctl fs1 getstate -j ro1
    Fileset Name Fileset Target              Cache State Gateway Node Queue Length Queue numExec
    ------------ --------------              ----------- ------------ ------------ -------------
    ro1          gpfs:///gpfs/remotefs1/dir1 Expired     c26c4apv1    0            4
  3. Unexpiring an RO fileset:
    # mmafmctl fs1 unexpire -j ro1
    
    # mmafmctl fs1 getstate -j ro1
    Fileset Name Fileset Target              Cache State Gateway Node Queue Length Queue numExec
    ------------ --------------              ----------- ------------ ------------ -------------
    ro1          gpfs:///gpfs/remotefs1/dir1 Active      c26c4apv1    0            4
  4. Running flushPending on an SW fileset:
    // Populate the fileset with data
    
    # mmafmctl fs1 getstate -j sw1
    Fileset Name Fileset Target              Cache State Gateway Node Queue Length Queue numExec
    ------------ --------------              ----------- ------------ ------------ -------------
    sw1          gpfs:///gpfs/remotefs1/dir1 Dirty       c26c2apv1    5671         293
    
    Get the list of files newly created using policy:
    RULE EXTERNAL LIST 'L' RULE 'List' LIST 'L' WHERE PATH_NAME LIKE '%'
    
    # mmapplypolicy /gpfs/fs1/sw1/migrateDir.popFSDir.22655 -P p1 -f p1.res -L 1 -N mount -I defer
    
    Policy created this file; hand-edit it to retain only the names:
    11012030 65537 0 -- /gpfs/fs1/sw1/migrateDir.popFSDir.22655/file_with_posix_acl1
    11012032 65537 0 -- /gpfs/fs1/sw1/migrateDir.popFSDir.22655/populateFS.log
    11012033 65537 0 --
    /gpfs/fs1/sw1/migrateDir.popFSDir.22655/sparse_file_0_with_0_levels_indirection
    
    # cat p1.res.list | awk '{print $5}' > /lfile
    
    # mmafmctl fs1 flushPending -j sw1 --list-file=/lfile
  5. Failover of SW to a new home:
    # mmafmctl fs1 getstate -j sw1
    Fileset Name Fileset Target              Cache State Gateway Node Queue Length Queue numExec
    ------------ --------------              ----------- ------------ ------------ -------------
    sw1          gpfs:///gpfs/remotefs1/dir1 Dirty       c26c2apv1    785          5179
    
    
    # mmcrfileset homefs1 newdir1 --inode-space=new
    Fileset newdir1 created with id 219 root inode 52953091.
    
    # mmlinkfileset homefs1 newdir1 -J /gpfs/homefs1/newdir1
    Fileset newdir1 linked at /gpfs/homefs1/newdir1
    
    # mmafmconfig /gpfs/homefs1/newdir1 enable
    
    # mmafmctl fs1 failover -j sw1 --new-target=c26c3apv1:/gpfs/homefs1/newdir1
    mmafmctl: Performing failover to nfs://c26c3apv1/gpfs/homefs1/newdir1
    Fileset sw1 changed.
    mmafmctl: Failover in progress. This may take while...
     Check fileset state or register for callback to know the completion status.
    
    
    Callback registered, logged into mmfs.log:
    Thu May 21 03:06:18.303 2015: [I] Calling User Exit Script callback7: event
    afmManualResyncComplete, Async command recovery.sh
    
    
    # mmafmctl fs1 getstate -j sw1
    Fileset Name Fileset Target                       Cache State Gateway Node Queue Length Queue numExec
    ------------ --------------                       ----------- ------------ ------------ -------------
    sw1          nfs://c26c3apv1/gpfs/homefs1/newdir1 Active      c26c2apv1    0            5250
  6. Changing target of SW fileset:
    Changing to another NFS server in the same home cluster using --target-only option:
    
    # mmafmctl fs1 failover -j sw1 --new-target=c26c3apv2:/gpfs/homefs1/newdir1 --target-only
    mmafmctl: Performing failover to nfs://c26c3apv2/gpfs/homefs1/newdir1
    Fileset sw1 changed.
    
    
    # mmafmctl fs1 getstate -j sw1
    Fileset Name Fileset Target                       Cache State Gateway Node Queue Length Queue numExec
    ------------ --------------                       ----------- ------------ ------------ -------------
    sw1          nfs://c26c3apv2/gpfs/homefs1/newdir1 Active      c26c2apv1    0            5005
  7. Metadata population using prefetch:
    # mmafmctl fs1 getstate -j ro
    Fileset Name Fileset Target                    Cache State Gateway Node Queue Length Queue numExec
    ------------ --------------                    ----------- ------------ ------------ -------------
    ro           nfs://c26c3apv1/gpfs/homefs1/dir3 Active      c26c2apv2    0            7
    
    
    List Policy:
    RULE EXTERNAL LIST 'List' RULE 'List' LIST 'List' WHERE PATH_NAME LIKE '%'
     
    Run the policy at home:
    mmapplypolicy /gpfs/homefs1/dir3 -P px -f px.res -L 1 -N mount -I defer
    
    Policy created this file; hand-edit it to retain only the file names.
    
    This file can be used at the cache to populate metadata.
    
    # mmafmctl fs1 prefetch -j ro --metadata-only --list-file=px.res.list.List
    mmafmctl: Performing prefetching of fileset: ro
    
    
    Prefetch end can be monitored using this event:
    Thu May 21 06:49:34.748 2015: [I] Calling User Exit Script prepop: event afmPrepopEnd,
    Async command prepop.sh.
    
     
    The statistics of the last prefetch command are viewed by:
    mmafmctl fs1 prefetch -j ro
    Fileset Name Async Read (Pending) Async Read (Failed) Async Read (Already Cached) Async Read (Total) Async Read (Data in Bytes)
    ------------ -------------------- ------------------  --------------------------- ------------------ --------------------------
    ro           0                    1                   0                            7                 0
  8. Prefetch of data using --home-list-file option:
    # cat /lfile1
    /gpfs/homefs1/dir3/file1
    /gpfs/homefs1/dir3/dir1/file1
    # mmafmctl fs1 prefetch -j ro --home-list-file=/lfile1
    mmafmctl: Performing prefetching of fileset: ro
    
    # mmafmctl fs1 prefetch -j ro
    Fileset Name Async Read (Pending) Async Read (Failed) Async Read (Already Cached) Async Read (Total) Async Read (Data in Bytes)
    ------------ -------------------- ------------------  --------------------------- ------------------ --------------------------
    ro           0                    0                   0                           2                  122880
  9. Prefetch of data using --home-inode-file option:
    The inode file is created at home using the above policy and should be used as is,
    without hand-editing.
    
    List Policy:
    RULE EXTERNAL LIST 'List' RULE 'List' LIST 'List' WHERE PATH_NAME LIKE '%'
    
    Run the policy at home:
    # mmapplypolicy /gpfs/homefs1/dir3 -P px -f px.res -L 1 -N mount -I defer
    
    # cat /lfile2
    113289 65538 0 -- /gpfs/homefs1/dir3/file2
    113292 65538 0 -- /gpfs/homefs1/dir3/dir1/file2
    # mmafmctl fs1 prefetch -j ro --home-inode-file=/lfile2
    mmafmctl: Performing prefetching of fileset: ro
     mmafmctl fs1 prefetch -j ro
    Fileset Name Async Read (Pending) Async Read (Failed) Async Read (Already Cached) Async Read (Total) Async Read (Data in Bytes)
    ------------ -------------------- ------------------  --------------------------- ------------------ --------------------------
    ro           0                    0                   2                           2                  0
  10. Using --home-fs-path option for a target with NSD protocol:
    # mmafmctl fs1 getstate -j ro2 
    Fileset Name Fileset Target              Cache State Gateway Node Queue Length Queue numExec
    ------------ --------------              ----------- ------------ ------------ -------------
    ro2          gpfs:///gpfs/remotefs1/dir3 Active      c26c4apv1    0            7
    
    
    # cat /lfile2
    113289 65538 0 -- /gpfs/homefs1/dir3/file2
    113292 65538 0 -- /gpfs/homefs1/dir3/dir1/file2
    # mmafmctl fs1 prefetch -j ro2 --home-inode-file=/lfile2 --home-fs-path=/gpfs/homefs1/dir3
    mmafmctl: Performing prefetching of fileset: ro2
    
    # mmafmctl fs1 prefetch -j ro2
    Fileset Name Async Read (Pending) Async Read (Failed) Async Read (Already Cached) Async Read (Total) Async Read (Data in Bytes)
    ------------ -------------------- ------------------  --------------------------- ------------------ --------------------------
    ro2          0                    0                   0                           2                  122880
  11. Manually evicting using safe-limit and filename parameters:
    # ls -lis /gpfs/fs1/ro2/file10M_1
    12605961 10240 -rw-r--r-- 1 root root 10485760 May 21 07:44 /gpfs/fs1/ro2/file10M_1
    
    # mmafmctl fs1 evict -j ro2 --safe-limit=1 --filter FILENAME=file10M_1
    
    # ls -lis /gpfs/fs1/ro2/file10M_1
    12605961 0 -rw-r--r-- 1 root root 10485760 May 21 07:44 /gpfs/fs1/ro2/file10M_1
  12. IW Failback:
    # mmafmctl fs1 getstate -j iw1
    Fileset Name Fileset Target                    Cache State Gateway Node Queue Length Queue numExec
    ------------ --------------                    ----------- ------------ ------------ -------------
    iw1          nfs://c26c3apv1/gpfs/homefs1/dir3 Active      c25m4n03     0            8
     
    
    # touch file3 file4
    
    # mmafmctl fs1 getstate -j iw1
    Fileset Name Fileset Target                    Cache State Gateway Node Queue Length Queue numExec
    ------------ --------------                    ----------- ------------ ------------ -------------
    iw1          nfs://c26c3apv1/gpfs/homefs1/dir3 Dirty       c25m4n03     2            11
    
     
    Unlink the IW fileset to simulate a failure:
    # mmunlinkfileset fs1 iw1 -f
    Fileset iw1 unlinked.
    
    Write from IW home at Thu May 21 08:20:41, assuming applications failed over to home:
    # touch file5 file6
    Relink the IW fileset on the cache cluster, assuming it came back up:
    # mmlinkfileset fs1 iw1 -J /gpfs/fs1/iw1
    Fileset iw1 linked at /gpfs/fs1/iw1
    
    Run failback on IW:
    # mmafmctl fs1 failback -j iw1 --start --failover-time='May 21 08:20:41'
    
    # mmafmctl fs1 getstate -j iw1
    Fileset Name Fileset Target                    Cache State    Gateway Node Queue Length Queue numExec
    ------------ --------------                    -----------    ------------ ------------ -------------
    iw1          nfs://c26c3apv1/gpfs/homefs1/dir3 FailbackInProg c25m4n03     0            0
    
    # mmafmctl fs1 failback -j iw1 --stop
    
    # mmafmctl fs1 getstate -j iw1
    Fileset Name Fileset Target                    Cache State Gateway Node Queue Length Queue numExec
    ------------ --------------                    ----------- ------------ ------------ -------------
    iw1          nfs://c26c3apv1/gpfs/homefs1/dir3 Active      c25m4n03     0            3

See also

See also the following IBM Spectrum Scale: Advanced Administration Guide topics:
  • The chapter about AFM, for detailed descriptions of the AFM functions
  • The chapter about AFM disaster recovery, for detailed descriptions of the AFM disaster recovery functions.

Location

/usr/lpp/mmfs/bin