Migration from the legacy hardware by using AFM
This use case is about migration from legacy hardware by using AFM. The diagrams in this topic illustrate the use case.

Use case requirements
File system migration is the process of moving data from a legacy storage appliance to a GPFS cluster through the NFS protocol. Migration is useful when you upgrade hardware or buy a new system and the data from the old hardware must be moved to the new hardware. The key goals of migration are to minimize application downtime and to move the data together with its attributes.
The target or the new hardware must be running GPFS 4.1 or later. The data source must be an NFS v3 export and can either be a GPFS or a non-GPFS source. Any source running a version earlier than GPFS 3.4 is equivalent to a non-GPFS data source. The UID namespace between the source and the target must be maintained.
The migration can be incremental or progressive, depending on how it is performed. The old storage appliance can be disconnected once the migration is complete.
The migration does not pull file system–specific parameters such as quotas, snapshots, file system–level tuning parameters, policies, fileset definitions, encryption keys, and DMAPI parameters. On a GPFS data source, AFM moves all the user-extended attributes and ACLs, and file sparseness is maintained. On a non-GPFS data source, the POSIX permissions and ACLs are migrated, but NFS v4 or CIFS ACLs, non-POSIX file system attributes, and sparseness are not.
AFM migrates the data as root, bypassing the permission checks. Preallocated files at the home source are not maintained; that is, only the actual data blocks are migrated, depending on the file size. If not all objects in the fileset are to be migrated and the fileset to be migrated contains orphan inodes, the administrator must clean up these inodes before starting the migration.
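For reference, the following is a minimal sketch of what the NFS v3 export on the legacy source might look like, assuming a Linux-style appliance with an /etc/exports file; the export path, gateway host, and fsid here are hypothetical. AFM gateway nodes generally need read/write access with root squashing disabled so that AFM can read every file as root.

# /etc/exports on the legacy source (hypothetical path and host):
# give the AFM gateway node rw access without root squashing
/export/data   gw1.example.com(rw,no_root_squash,sync,fsid=101)

# on the source: publish the export; on the target: verify that it is visible
exportfs -ra
showmount -e source-nas.example.com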
Incremental migration
- The NFS shares with the data that must be migrated must be identified on the source cluster or legacy NAS storage. All identified shares must be exported through NFS.
- If the source is running GPFS 4.1 or later, run the mmafmconfig enable command on each of the NFS shares. If the version is GPFS 3.5 or 3.4, run the mmafmhomeconfig enable command (see the setup sketch after this procedure).
Figure 2. Use case diagram 2 - migration from the legacy hardware
- An AFM RO fileset must be created at the GPFS AFM cluster for each NFS share (see the fileset-creation sketch after this procedure). The afmEnableAutoEviction parameter must be disabled for all the RO filesets to avoid inadvertent eviction. Preallocate at least as many inodes as exist at the source, because that many files already exist there. The maximum number of inodes must also be set. Parallel data transfers can be configured between the source NFS servers and the target. See Parallel data transfers feature.
- For each of the RO filesets, a list of all the files at the source must be created. If the source is GPFS, a simple policy LIST rule can be used to create the list (a sample rule is shown after this procedure). This policy can be tuned to eliminate file types that are not supported by AFM. If the source is not GPFS, such a list can be created by using find, ls -R, or any similar tool. The list must then be manually edited to eliminate the unsupported types.
- The mmafmctl prefetch command with the --metadata-only option must be run on each of the RO filesets to populate the directory tree by using the list file generated for the respective filesets. This step is optional. A user can directly prefetch files without populating the directory tree.
- Run prefetch for each RO fileset by using the respective list files that were previously created (see the prefetch sketch after this procedure). Prefetch must complete on all filesets before you proceed. Callbacks can be added to indicate when a scheduled prefetch task is completed by using the afmPrepopEnd event. For details about adding a callback, see the mmaddcallback command description. After the prefetch task is completed, the migration of the given data is complete.
- Stop applications at the data source.
- To sync up the latest application changes, steps 4 through 6 must be repeated. The mmafmctl prefetch --metadata-only command syncs up new files and directories that were created by the application after the previous migration.
- AFM filesets can be converted to SW or IW filesets with a new target, or to regular GPFS filesets with caching disabled (see the conversion sketch after this procedure).
- Migration is complete. You might choose to run md5sum or any other third-party utility to check the consistency of the migrated files (see the checksum sketch after this procedure).
- Applications can be started on the target system. The old system is ready to be decommissioned.
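The following is a minimal command sketch of steps 1 through 3 of the incremental procedure, assuming a GPFS 4.1 source that exports /gpfs/src/export from host source-nas, a target file system fs1, and a fileset named migrate1; all of these names, and the inode counts, are hypothetical placeholders.

# On the source cluster (GPFS 4.1 or later): enable AFM support on the export
mmafmconfig enable /gpfs/src/export

# On the target AFM cluster: create one RO fileset per NFS share, preallocating
# at least as many inodes as exist at the source
mmcrfileset fs1 migrate1 -p afmMode=ro,afmTarget=nfs://source-nas/gpfs/src/export \
    --inode-space new --inode-limit 2000000:2000000
mmlinkfileset fs1 migrate1 -J /gpfs/fs1/migrate1

# Disable automatic eviction on the RO fileset
mmchfileset fs1 migrate1 -p afmEnableAutoEviction=no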
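To build the per-fileset file lists (step 4) at a GPFS source, a policy along these lines can be applied; the rule name, list name, and output prefix are illustrative, and the WHERE clause is where file types that AFM does not support would be filtered out.

# Write a simple LIST rule (GPFS policy language) to a file
cat > /tmp/listall.pol <<'EOF'
RULE EXTERNAL LIST 'migrate-list' EXEC ''
RULE 'listall' LIST 'migrate-list' WHERE PATH_NAME LIKE '%'
EOF

# Run the policy against the source path; with -I defer, the matched files are
# only written to /tmp/migrate.list.migrate-list, not acted on
mmapplypolicy /gpfs/src/export -P /tmp/listall.pol -f /tmp/migrate -I defer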
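Steps 5 and 6 might then be run per fileset as follows; the list-file path continues the example above, and the notification script is hypothetical and must exist on the nodes where the callback fires.

# Optional: populate only the directory tree and metadata first
mmafmctl fs1 prefetch -j migrate1 --metadata-only --list-file /tmp/migrate.list.migrate-list

# Pull the file data itself
mmafmctl fs1 prefetch -j migrate1 --list-file /tmp/migrate.list.migrate-list

# Optional: be notified when a prefetch run completes
mmaddcallback prefetchDone --command /usr/local/bin/notify.sh \
    --event afmPrepopEnd --parms "%eventName %fsName %filesetName"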
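For step 8, the conversion can be sketched with mmchfileset after unlinking the fileset; the new target URL is a placeholder, and the exact conversions that are supported depend on your code level, so treat this as an outline rather than the definitive procedure.

# Convert the RO fileset to single-writer with a new target ...
mmunlinkfileset fs1 migrate1
mmchfileset fs1 migrate1 -p afmMode=sw,afmTarget=nfs://newhome/gpfs/tgt/export

# ... or disable caching entirely, leaving a regular GPFS fileset
mmchfileset fs1 migrate1 -p afmTarget=disable
mmlinkfileset fs1 migrate1 -J /gpfs/fs1/migrate1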
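The consistency check in step 9 could be scripted roughly as follows; the mount point and sample size are hypothetical, and the source export must still be reachable (read-only) for the comparison.

# Checksum a sample of files on the source mount, then verify on the target
cd /mnt/source-nas && find . -type f | head -n 1000 | xargs md5sum > /tmp/src.md5
cd /gpfs/fs1/migrate1 && md5sum -c /tmp/src.md5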
Progressive migration
- The NFS shares with the data that must be migrated must be identified on the source cluster or legacy NAS storage. All identified shares must be exported through NFS.
- If the source is running GPFS 4.1 or later, run mmafmconfig enable on each of the NFS shares. If the version is GPFS 3.5 or 3.4, run mmafmhomeconfig enable.
- An AFM LU fileset must be created at the GPFS AFM cluster for each NFS share (see the sketch after this procedure). To avoid inadvertent eviction, the afmEnableAutoEviction parameter must be disabled for all the LU filesets. Preallocate at least as many inodes as exist at the source, because that many files already exist there. The maximum number of inodes must be set accordingly. Parallel I/O can be configured between the source NFS servers and the target.
- Applications must be shut down at the source.
- For each LU fileset, a list of all the files at the source must be created. If the source is GPFS, a simple policy LIST rule can be used to create the list. This policy can be tuned to eliminate file types that are not supported by AFM. If the source is not GPFS, such a list can be created by using find, ls -R, or a similar tool. The list must be edited manually to eliminate the unsupported types.
- The mmafmctl prefetch command with the --metadata-only option must be run on each of the LU filesets to populate the directory tree by using the respective list file created in the preceding step.
Figure 3. Use case diagram 3 - migration from the legacy hardware
- Applications are started at the target GPFS cluster.
- While applications are running at the target, the mmafmctl prefetch command can be run on each of the LU filesets by using the respective list files created in step 5. Prefetch must complete on all filesets before you proceed. Callbacks can be added to indicate when a scheduled prefetch task is completed by using the afmPrepopEnd event. For details on how to add a callback, see the mmaddcallback command description. After the prefetch task completes, migration of the given data is complete.
- The migration is complete when prefetch is successful on all the filesets. The old system is ready to be decommissioned.
- The GPFS AFM filesets can be converted to regular GPFS filesets with caching disabled, or can continue as LU filesets. If the filesets continue in LU mode, disable the refresh intervals, such as the file lookup refresh interval, the file open refresh interval, the directory lookup refresh interval, and the directory open refresh interval, as shown in the sketch that follows. Disabling these intervals avoids refreshing the LU filesets and communicating with the home (the old hardware), because the applications have moved and there is no need to continuously synchronize with the home cluster.
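For the progressive variant, the LU fileset creation and the later disabling of the revalidation intervals might look as follows; the fileset and host names are again placeholders, and it is assumed that the refresh-interval parameters (afmFileLookupRefreshInterval, afmFileOpenRefreshInterval, afmDirLookupRefreshInterval, afmDirOpenRefreshInterval) accept the disable value at your code level.

# Create one LU fileset per NFS share, with automatic eviction disabled
mmcrfileset fs1 migrateLU -p afmMode=lu,afmTarget=nfs://source-nas/gpfs/src/export \
    --inode-space new --inode-limit 2000000:2000000
mmlinkfileset fs1 migrateLU -J /gpfs/fs1/migrateLU
mmchfileset fs1 migrateLU -p afmEnableAutoEviction=no

# After migration, if the fileset stays in LU mode, stop it from revalidating
# against the decommissioned home
mmchfileset fs1 migrateLU -p afmFileLookupRefreshInterval=disable
mmchfileset fs1 migrateLU -p afmFileOpenRefreshInterval=disable
mmchfileset fs1 migrateLU -p afmDirLookupRefreshInterval=disable
mmchfileset fs1 migrateLU -p afmDirOpenRefreshInterval=disable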
If data is managed by IBM Spectrum Protect at the data source, migration drives data recall from tape onto the source and then to the target system. If the .ptrash directory is present in the filesets after conversion to normal GPFS filesets, the directory can be removed after you check that it does not contain data that is used by any other users.
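A leftover .ptrash directory can be inspected and then removed along these lines, assuming the example junction path used earlier:

# List anything still in .ptrash before deleting it
find /gpfs/fs1/migrate1/.ptrash -ls
rm -rf /gpfs/fs1/migrate1/.ptrash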