Data migration to an AFM fileset by using the NFS protocol

AFM supports data migration from third-party or legacy appliances, or from IBM GPFS, to the latest IBM Storage Scale AFM fileset by using the NFS protocol.

AFM migrates data from an IBM Storage Scale file system or from any legacy (non-GPFS) storage appliance to an AFM fileset that belongs to an IBM Storage Scale cluster by using the NFS protocol. Migration is useful during a hardware upgrade or when a new system is purchased and the data on the old hardware must be moved to the new hardware. This migration minimizes application downtime and migrates data together with its attributes.

Only AFM read-only (RO) mode and AFM local-update (LU) mode filesets are supported for the migration.

Prerequisites

  • The data source (the old hardware) can be either an IBM Storage Scale cluster or a non-IBM Storage Scale setup.
  • Ensure that the source cluster can export the source path by using NFSv3.
  • Ensure that the target or the new cluster is running IBM Storage Scale 5.0.4.3 or later.
  • At the cache site, create an IBM Storage Scale file system and mount it on all the nodes.
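    For example, a minimal sketch of creating and mounting the file system (fs1, the NSD stanza file, and the mount point are placeholders):
    /usr/lpp/mmfs/bin/mmcrfs fs1 -F nsdStanza.txt -T /gpfs/fs1
    /usr/lpp/mmfs/bin/mmmount fs1 -a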
  • Assign the gateway node role to some of the nodes in the cluster.
    /usr/lpp/mmfs/bin/mmchnode --gateway -N <node1>[,<node2>]
  • Ensure that each gateway node is a dedicated node to which no other designation or role, such as protocol node or manager node, is assigned.
  • Create an AFM read-only (RO) mode fileset on the cache, where afmTarget points to the home NFS export path. The home export path must be accessible from all the cache gateway nodes.
  • Configure the user ID namespace identically between the source site and the target site.
  • Provision the quota at the cache fileset level as per requirements.
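    For example, block and file quotas can be set on the cache fileset with mmsetquota (the fileset name ro1 and the limits are illustrative):
    /usr/lpp/mmfs/bin/mmsetquota fs1:ro1 --block 10T:10T --files 1000000:1000000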
  • Disable eviction at the cache fileset level.
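    Eviction can be disabled at creation time with afmAutoEviction=no, as shown in the Procedure section. For an existing fileset, a possible sketch, assuming that your release allows changing this attribute with mmchfileset:
    /usr/lpp/mmfs/bin/mmchfileset fs1 ro1 -p afmAutoEviction=no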
  • Disable display of home snapshots for AFM filesets.

Parameters

  • Set the afmNFSVersion parameter to 3 at the cache site.
    /usr/lpp/mmfs/bin/mmchconfig afmNFSVersion=3 -i
  • If the home (old system) is non-GPFS and AFM is required to pull NFSv4 ACLs from the non-GPFS file system to the cache, enable the afmSyncNFSv4ACL parameter at the cluster level:
    /usr/lpp/mmfs/bin/mmchconfig afmSyncNFSv4ACL=yes -i
  • Set the authorization (ACL) support on the file system to POSIX, NFSv4, or all.
    /usr/lpp/mmfs/bin/mmchfs fs1 -k all
  • Provision the required number of inodes during the AFM fileset creation.
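    For example, the inode limit can be provisioned with the --inode-limit option of mmcrfileset (the values are illustrative; the full command is shown in the Procedure section):
    /usr/lpp/mmfs/bin/mmcrfileset fs1 ro1 -p afmMode=ro,afmTarget=home1:/gpfs/fs1/export1 --inode-space new --inode-limit 100352:100352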
  • Disable the display of home snapshots on the AFM fileset.
    /usr/lpp/mmfs/bin/mmchconfig afmShowHomeSnapshot=no -i

Planning

Before the data migration, complete the following steps:
  • Prepare the old hardware (system) to export the data source. This site is called the home site (old system).
  • Prepare the new hardware (system) that runs IBM Storage Scale AFM. This site is called the cache site (new system); data is migrated from the old system to the new system.
  • If required, you can also migrate data from one file system to another file system that belongs to the same IBM Storage Scale cluster.
  • Set up the new system and configure an AFM RO-mode fileset relationship between the old system and the new system.
  • Migrate data from the old system to the new system recursively by using the latest prefetch options (see the sketch after this list).
  • Convert the AFM RO-mode fileset to an AFM LU-mode fileset (see the sketch after this list).
  • Move the application from the old system to the new system (the AFM LU-mode fileset). Take downtime for the application cutover. During this phase, it is recommended that the data on the old system is not modified.
  • Prefetch the remaining data. If data is not yet available at the new system, AFM pulls it on demand for the application during the final prefetch from the old system.
  • Prepare downtime for the application. Disconnect the old system and disable the AFM relationship. This step is optional; the AFM relationship can remain in the stopped state until a planned downtime.
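The following commands sketch the prefetch and mode-conversion steps that are outlined above. The names (fs1, ro1, /gpfs/fs1/ro1) match the Procedure section and are placeholders; verify the exact mode-conversion steps against the documentation for your release.
    # Recursively prefetch data from the home export into the cache fileset
    /usr/lpp/mmfs/bin/mmafmctl fs1 prefetch -j ro1 --directory /gpfs/fs1/ro1
    # Convert the fileset from RO mode to LU mode; the fileset must be unlinked first
    /usr/lpp/mmfs/bin/mmunlinkfileset fs1 ro1
    /usr/lpp/mmfs/bin/mmchfileset fs1 ro1 -p afmMode=lu
    /usr/lpp/mmfs/bin/mmlinkfileset fs1 ro1 -J /gpfs/fs1/ro1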

Procedure

For home (old system)
  1. Verify that the source cluster is up and running and that the path to be exported is available.
  2. Export the directory path that needs to be migrated.
For non-GPFS home site
If the home (old system) is a non-GPFS site, configure the NFS export of the data source path, for example, /home/userData, by adding the following line to the /etc/exports file and restarting the NFS services. Each export entry must have a unique file system ID (fsid).
  1. Update the /etc/exports file and add the following line:
    /home/userData GatewayIP/*(rw,nohide,insecure,no_subtree_check,sync,no_root_squash,fsid=101)
  2. Restart the NFS server.
    exportfs -ra
    or
    systemctl restart nfs-server
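After the NFS services restart, the export can be verified from a cache gateway node; a quick check (home1 is a placeholder for the home NFS server):
    showmount -e home1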
For GPFS home site
    1. If the home (old system) is a GPFS site, complete the following steps:
      1. Export a fileset that contains the source data by using the NFS protocol. For more information, see the steps for the non-GPFS home site.
      2. Update the /etc/exports file and add the following line:
        /gpfs/fs1/export1 GatewayIP/*(rw,nohide,insecure,no_subtree_check,sync,no_root_squash,fsid=101)
    2. If the home (old system) site is running IBM Storage Scale 4.1 or later, issue the following command:
      /usr/lpp/mmfs/bin/mmafmconfig enable /gpfs/fs1/export1
    3. If the source node or the cluster is running IBM® GPFS 3.4 or 3.5, issue the following command:
      /usr/lpp/mmfs/bin/mmafmhomeconfig enable /gpfs/fs1/export1
    4. Ensure that the NFS exports from the old system are readable at the AFM cache cluster so that the AFM gateway nodes can mount the NFS exports by using NFSv3 and read data from them during the migration (see the mount check after this list).
    5. Restart the NFS server.
      exportfs -ra
      or
      systemctl restart nfs-server
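To confirm that a gateway node can reach the export over NFSv3, a manual test mount can be run from the gateway node (home1, the export path, and /mnt/nfstest are placeholders):
    mount -t nfs -o vers=3 home1:/gpfs/fs1/export1 /mnt/nfstest
    ls /mnt/nfstest
    umount /mnt/nfstest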
On the cache (new system)
  1. Ensure that the cluster is up and running and that gateway nodes are already provisioned in the cluster.
  2. Ensure that the file system is mounted on all the nodes.
    mmlsfs fs1 -T
    A sample output is as follows:
    flag                value                    description
    ------------------- ------------------------ -----------------------------------
     -T                 /gpfs/fs1                Default mount point
    mmlsmount fs1 -L
    File system fs1 is mounted on 3 nodes:
    192.168.10.100 node1
    192.168.10.101 node2
    192.168.10.102 node3
  3. Create an RO-mode fileset.
    mmcrfileset fs1 ro1 -p afmMode=ro,afmTarget=home1:/gpfs/fs1/export1,afmAutoEviction=no --inode-space new --inode-limit 100352:100352
  4. Link the fileset on the cache (the new system) by pointing to the export from the home site (old system).
    mmlinkfileset fs1 ro1 -J /gpfs/fs1/ro1
  5. Check the fileset.
    mmlsfileset fs1 ro1 -X
    A sample output is as follows:
    Filesets in file system 'fs1':
    Attributes for fileset ro1:
    ============================
    Status Linked
    Path /gpfs/fs1/ro1
    Id 11
    Root inode 6291459
    Parent Id 0
    Created Fri Nov 8 02:44:31 2024
    Comment
    Inode space 6
    Maximum number of inodes 100352
    Allocated inodes 100352
    Permission change flag chmodAndSetacl
    IAM mode off
    afm-associated Yes
    Permission inherit flag inheritAclOnly
    Target nfs://home1/gpfs/fs1/export1
    Mode read-only
    File Lookup Refresh Interval 30 (default)
    File Open Refresh Interval 30 (default)
    Dir Lookup Refresh Interval 60 (default)
    Dir Open Refresh Interval 60 (default)
    Async Delay disable
    Last pSnapId 0
    Display Home Snapshots no (default)
    Number of Gateway Flush Threads 4
    Prefetch Threshold 0 (default)
    Eviction Enabled no
    IO Flags 0x0
    IO Flags2 0x0
  6. (Optional) Create and link dependent filesets in the AFM RO-mode fileset. The creation of dependent filesets is optional for the following reasons:
    • If the home data is stored in a dependent fileset and you want to map the migrated data into the same structure in the cache AFM fileset, create a matching dependent fileset.
    • If a dependent fileset is not created on the cache site, AFM creates directories in place of the dependent fileset linked path and stores all data in the directory that is mapped to the source or home path. Therefore, the creation of a dependent fileset in the AFM RO-mode fileset is optional.
  7. To create a dependent fileset, complete the following steps:
    1. Stop the AFM RO-mode fileset.
      mmafmctl fs1 stop -j ro1
    2. Create dependent filesets.
      mmcrfileset fs1 dep1 --inode-space ro1
    3. Link the filesets in the AFM RO-mode fileset.
      mmlinkfileset fs1 dep1 -J /gpfs/fs1/ro1/dep1
    4. Start the AFM RO-mode fileset.
      mmafmctl fs1 start -j ro1
    5. Check whether the fileset is active.
      ls -altrish /gpfs/fs1/ro1
      /usr/lpp/mmfs/bin/mmafmctl fs1 getstate -j ro1
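After the fileset is active, the migration can be driven with AFM prefetch. A minimal sketch, assuming the names from the previous steps (fs1, ro1) and an illustrative list file that contains fully qualified paths:
    # Recursively prefetch a directory from home into the cache fileset
    /usr/lpp/mmfs/bin/mmafmctl fs1 prefetch -j ro1 --directory /gpfs/fs1/ro1
    # Alternatively, prefetch only the files that are named in a list file
    /usr/lpp/mmfs/bin/mmafmctl fs1 prefetch -j ro1 --list-file /tmp/migrate.list
    # Monitor the fileset state while the prefetch runs
    /usr/lpp/mmfs/bin/mmafmctl fs1 getstate -j ro1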