Data migration to an AFM file system by using GPFS/NSD protocol
AFM supports migration of data from an old IBM GPFS file system to an AFM-enabled file system on a new IBM Storage Scale cluster by using the GPFS/NSD protocol.
This topic outlines the process of migrating data from an old IBM Storage Scale GPFS file system to an AFM-enabled GPFS file system that belongs to the latest GPFS cluster by using the NSD backend (remote cluster file system mount). Migration is useful when you upgrade hardware or buy a new system and the data from the old hardware must be moved to the new hardware. The key goals of migration are to minimize application downtime and to move the data along with its attributes.
Data from the source can be migrated by using the GPFS (NSD multi-cluster) protocol, that is, a remote cluster mount is configured between the old system and the new system.
Only AFM read-only (RO) mode and AFM local-update (LU) mode are supported for the migration.
Prerequisites
- Ensure that the data source, that is, the old GPFS file system, is remotely mounted on the new IBM Storage Scale cluster.
- Ensure that the target or the new cluster is running IBM Storage Scale 5.0.4.3 or later.
- At the cache site, create a GPFS file system with AFM enabled, and mount it on all the nodes.
- Assign the gateway node role to some of the nodes in the cluster. For example, assign one or more nodes as gateways by running the following command:
/usr/lpp/mmfs/bin/mmchnode --gateway -N <node1>[,<node2>]
- Ensure that the gateway node is a dedicated node that does not have any other designation or role, such as protocol node or manager node.
- Create an AFM read-only (RO) mode fileset at the cache whose target points to the remotely mounted path of the old GPFS file system. The home export path must be mounted and accessible on all the cache gateway nodes.
- Configure the user ID namespace identically between the source site and the target (cache) site.
- Provision the quota at the cache level as per your requirements (see the sketch after this list).
- Disable the eviction at the cache level.
- Disable the display of home snapshots for the AFM file system.
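The quota provisioning that is mentioned in these prerequisites can be done with mmsetquota. The following is a minimal sketch only; the file system name fs1, the fileset name ro1 (the AFM fileset that is created later in the procedure), and the limits are examples and must be adapted to your environment:
# Quota sketch; file system name, fileset name, and limits are examples
/usr/lpp/mmfs/bin/mmchfs fs1 -Q yes
/usr/lpp/mmfs/bin/mmsetquota fs1:ro1 --block 10T:12T --files 1000000:1200000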
Parameters
- Disable auto eviction on the RO-mode fileset.
/usr/lpp/mmfs/bin/mmchfileset Device fileset -p afmEnableAutoEviction=no
- Enable authorization support on the file system by setting it to posix, nfs4, or all. For AFM, setting the authorization support to all is recommended.
/usr/lpp/mmfs/bin/mmchfs fs1 -k all
- Disable the display of home snapshots for the AFM fileset.
/usr/lpp/mmfs/bin/mmchconfig afmShowHomeSnapshot=no -i
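To confirm these settings, you can query the file system and configuration values. The following is a minimal verification sketch; the file system name fs1 matches the examples in this topic:
# Verification sketch; the file system name is an example
/usr/lpp/mmfs/bin/mmlsfs fs1 -k
/usr/lpp/mmfs/bin/mmlsconfig afmShowHomeSnapshot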
Procedure
- Verify that the source cluster is up and running and that the remotely mounted file system path is available on all nodes.
- If the home (old system) site is running IBM Storage Scale 4.1 or later, issue the following command:
/usr/lpp/mmfs/bin/mmafmconfig enable /gpfs/fs1/export1
- If the source node or cluster is running IBM® GPFS 3.4 or 3.5, issue the following command:
/usr/lpp/mmfs/bin/mmafmhomeconfig enable /gpfs/fs1/export1
- Ensure that the target cluster is up and running and that the gateway role is already assigned to some of the nodes.
- Configure a remote mount (multi-cluster) file system on the cache site (new system), as shown in the sketch that follows this step. The remote file system must be mounted on all the nodes of the new system.
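A minimal sketch of the remote mount configuration follows. The cluster names, contact nodes, and key file paths are examples, as are the device names fs1 (old cluster) and rfs1 (remote device on the new cluster); cluster authentication with mmauth is assumed to be set up according to your security requirements.
# On the old (owning) cluster: authorize the new cluster to mount fs1 (names and key paths are examples)
/usr/lpp/mmfs/bin/mmauth genkey new
/usr/lpp/mmfs/bin/mmauth add newcluster.example.com -k /tmp/newcluster_id_rsa.pub
/usr/lpp/mmfs/bin/mmauth grant newcluster.example.com -f fs1

# On the new (accessing) cluster: define the remote cluster and remote file system, then mount it on all nodes
/usr/lpp/mmfs/bin/mmauth genkey new
/usr/lpp/mmfs/bin/mmremotecluster add oldcluster.example.com -n oldnode1,oldnode2 -k /tmp/oldcluster_id_rsa.pub
/usr/lpp/mmfs/bin/mmremotefs add rfs1 -f fs1 -C oldcluster.example.com -T /gpfs/rfs1
/usr/lpp/mmfs/bin/mmmount rfs1 -a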
- Ensure that the file systems are up and mounted on all nodes, where rfs1 is the remotely mounted old file system that is available on the new system. This remote file system is used as the afmTarget to pull the data. A sample output is as follows:
mmlsfs fs1 -T
flag                value                    description
------------------- ------------------------ -----------------------------------
 -T                 /gpfs/fs1                Default mount point

mmlsmount fs1 -L
File system fs1 is mounted on 3 nodes:
  192.168.10.100   node1
  192.168.10.101   node2
  192.168.10.102   node3

mmlsmount rfs1 -L
File system rfs1 is mounted on 3 nodes:
  192.168.10.100   node1
  192.168.10.101   node2
  192.168.10.102   node3
- Create a read-only AFM fileset on the cache site that points to the export from the home site, and link it.
mmcrfileset fs1 ro1 -p afmMode=ro,afmTarget=gpfs:///gpfs/rfs1/export1,afmAutoEviction=no --inode-space new
mmlinkfileset fs1 ro1 -J /gpfs/fs1/ro1
- Check the fileset.
mmlsfileset fs1 ro1 -X
A sample output is as follows:
Filesets in file system 'fs1':

Attributes for fileset ro1:
============================
Status                                  Linked
Path                                    /gpfs/fs1/ro1
Id                                      11
Root inode                              6291459
Parent Id                               0
Created                                 Fri Nov 8 02:44:31 2024
Comment
Inode space                             6
Maximum number of inodes                100352
Allocated inodes                        100352
Permission change flag                  chmodAndSetacl
IAM mode                                off
afm-associated                          Yes
Permission inherit flag                 inheritAclOnly
Target                                  gpfs:///gpfs/rfs1/export1
Mode                                    read-only
File Lookup Refresh Interval            30 (default)
File Open Refresh Interval              30 (default)
Dir Lookup Refresh Interval             60 (default)
Dir Open Refresh Interval               60 (default)
Async Delay                             disable
Last pSnapId                            0
Display Home Snapshots                  no (default)
Number of Gateway Flush Threads         4
Prefetch Threshold                      0 (default)
Eviction Enabled                        no
IO Flags                                0x0
IO Flags2                               0x0
- Check whether the file system is up and mounted on all nodes. A sample output is as follows:
mmlsfs fs1 -T
flag                value                    description
------------------- ------------------------ -----------------------------------
 -T                 /gpfs/fs1                Default mount point

mmlsmount fs1 -L
File system fs1 is mounted on 3 nodes:
  192.168.10.100   node1
  192.168.10.101   node2
  192.168.10.102   node3

mmlsmount rfs1 -L
File system rfs1 is mounted on 3 nodes:
  192.168.10.100   node1
  192.168.10.101   node2
  192.168.10.102   node3
- (Optional) Create and link dependent filesets in the AFM RO-mode fileset. The creation of dependent filesets is optional for the following reasons:
- Home data is stored in a dependent fileset, and you want to map the migrated data into the same structure in the cache AFM fileset.
- If a dependent fileset is not created on the cache site, AFM creates directories in place of the dependent fileset linked path and stores all data in the directory that is mapped to the source or home path. Therefore, the creation of a dependent fileset in the AFM RO-mode fileset is optional.
- To create a dependent fileset, complete the following steps:
- Stop the AFM RO-mode fileset.
mmafmctl fs1 stop -j ro1
- Create dependent filesets.
mmcrfileset fs1 dep1 --inode-space ro1
- Link the filesets in the AFM RO-mode fileset.
mmlinkfileset fs1 dep1 -J /gpfs/fs1/ro1/dep1
- Start the AFM RO-mode fileset.
mmafmctl fs1 start -j ro1
- Check whether the fileset is active.
ls -altrish /gpfs/fs1/ro1
/usr/lpp/mmfs/bin/mmafmctl fs1 getstate -j ro1
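After the fileset is active, AFM pulls data from the old file system on access: traversing the RO fileset fetches metadata, and reading a file fetches its contents. Data can also be pulled in bulk with the mmafmctl prefetch option. The following is a sketch only; the list file path is an example, and the available prefetch options depend on your IBM Storage Scale level, so check the mmafmctl documentation for your release.
# On-demand pull example: traversal fetches metadata; reading files fetches their contents
find /gpfs/fs1/ro1 > /dev/null
# Bulk pull example (the list file path is an example): prefetch the files that are named in the list
/usr/lpp/mmfs/bin/mmafmctl fs1 prefetch -j ro1 --list-file /tmp/migrate.list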