Data migration to an AFM fileset by using GPFS/NSD protocol

AFM can migrate data from an old IBM GPFS file system to the latest IBM Storage Scale AFM fileset by using the GPFS/NSD backend. This topic outlines the process of migrating data from an old IBM Storage Scale (GPFS) file system to an AFM fileset that belongs to the latest GPFS cluster by using the NSD backend (remote cluster file system mount). The migration is useful when you upgrade hardware or buy a new system, where the data from the old hardware must be moved to the new hardware. Minimizing the application downtime and moving the data with its attributes are the key goals of the migration.

Data from the source can be migrated by using the GPFS (NSD multi-cluster) protocol. For the migration, only AFM read-only (RO) mode and AFM local-update (LU) mode filesets are supported.

Prerequisites

  • Ensure that the data source, that is, the old GPFS file system, is remote mounted on the newer IBM Storage Scale cluster.
  • Ensure that the target or the new cluster is running IBM Storage Scale 5.0.4.3 or later.
  • On the cache site, create a GPFS file system and mount it on all the nodes.
  • Assign the gateway node role to some of the nodes in the cluster.
    /usr/lpp/mmfs/bin/mmchnode --gateway -N <node1>[,<node2>]
  • Ensure that a gateway node is an individual node that does not have any other designation or role, such as protocol or manager node.
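The prerequisite checks above can be verified with a short command sequence; a minimal sketch, assuming that the remote file system is named rfs1 and the gateway nodes are node1 and node2 (all names are examples):

    # Verify that the old file system is remote mounted on the new cluster
    /usr/lpp/mmfs/bin/mmremotefs show all
    /usr/lpp/mmfs/bin/mmlsmount rfs1 -L
    # Confirm that the new cluster is running IBM Storage Scale 5.0.4.3 or later
    /usr/lpp/mmfs/bin/mmdiag --version
    # Designate the gateway role and confirm it in the cluster listing
    /usr/lpp/mmfs/bin/mmchnode --gateway -N node1,node2
    /usr/lpp/mmfs/bin/mmlscluster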

Parameters

  1. Disable auto eviction on the RO mode fileset.
    /usr/lpp/mmfs/bin/mmchfileset Device fileset -p afmEnableAutoEviction=no
  2. Enable the authorization support on the file system to POSIX, NFS, or all. For AFM, setting the authorization support to all is recommended.
    /usr/lpp/mmfs/bin/mmchfs fs1 -k all
  3. Disable the display of home snapshots at the AFM fileset.
    /usr/lpp/mmfs/bin/mmchconfig afmShowHomeSnapshot=no -i
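The effective values of these parameters can be checked after they are set; a brief sketch, assuming file system fs1:

    # The -k flag must report "all"
    /usr/lpp/mmfs/bin/mmlsfs fs1 -k
    # afmShowHomeSnapshot must report "no"
    /usr/lpp/mmfs/bin/mmlsconfig afmShowHomeSnapshot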

Planning

    • Prepare the old file system (old system) to make it available (remote mounted) for the AFM fileset on the new system. This site is called the home site (old system).
    • Prepare a new hardware (system) that runs IBM Storage Scale AFM. This is called the cache site (new system), and data is migrated from an old system to a new system.
    • The same steps can be used to migrate data from an old file system to the latest file system that belongs to the same IBM Storage Scale cluster.
    • Set up the new system and configure an AFM RO-mode fileset relationship between the old system and the new system.
    • Migrate data from the old system to the new system recursively by using the latest prefetch options.
    • Convert the AFM RO-mode fileset to an AFM LU-mode fileset.
    • Move the application from the old system to the new system (AFM LU-mode fileset). Take downtime for the application cutover. During this phase, the old system must not modify the data.
    • Prefetch the remaining data. If the data is not available at the new system, AFM pulls the data on demand for the application during the final prefetch from the old system.
    • Plan downtime for the application. Disconnect the old system and disable the AFM relationship. This step is optional, and the AFM relationship can remain in the stopped state until a planned downtime.
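The planning steps above map onto the following command outline. The file system, fileset, and path names are examples, and the exact conversion and disable options can vary by release; treat this as a sketch, not the definitive sequence:

    # 1. Create and link an AFM RO-mode fileset that points to the old system
    /usr/lpp/mmfs/bin/mmcrfileset fs1 ro1 -p afmMode=ro,afmTarget=gpfs:///gpfs/rfs1/export1 --inode-space new
    /usr/lpp/mmfs/bin/mmlinkfileset fs1 ro1 -J /gpfs/fs1/ro1
    # 2. Bulk-migrate the data by using prefetch (metadata first, then data)
    /usr/lpp/mmfs/bin/mmafmctl fs1 prefetch -j ro1 --directory /gpfs/fs1/ro1 --metadata-only
    /usr/lpp/mmfs/bin/mmafmctl fs1 prefetch -j ro1 --directory /gpfs/fs1/ro1
    # 3. Convert the fileset from RO mode to LU mode (stop the fileset for the mode change)
    /usr/lpp/mmfs/bin/mmafmctl fs1 stop -j ro1
    /usr/lpp/mmfs/bin/mmchfileset fs1 ro1 -p afmMode=local-updates
    /usr/lpp/mmfs/bin/mmafmctl fs1 start -j ro1
    # 4. After the final prefetch and the application cutover, disable the AFM relationship
    /usr/lpp/mmfs/bin/mmchfileset fs1 ro1 -p afmTarget=disable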

Procedure

On home (old system)
  1. Verify that the source cluster is up and running, and that the remote mounted file system path is available on all nodes.
  2. If the home (old system) site is running IBM Storage Scale 4.1 version or later, issue the following command:
    /usr/lpp/mmfs/bin/mmafmconfig enable /gpfs/fs1/export1
  3. If the source node or cluster is running IBM® GPFS 3.4 or 3.5, issue the following command:
    /usr/lpp/mmfs/bin/mmafmhomeconfig enable /gpfs/fs1/export1
Cache site (target) setup
  1. Ensure that the target cluster is up and running. The gateway role is already provisioned to a few nodes.
  2. Configure a remote mount/multi-cluster file system on the cache site (new system). The remote file system must be mounted on all the nodes on the new system.
  3. Ensure that the file systems are up and mounted on all nodes.
    mmlsfs fs1 -T

    Here, rfs1 is the remote mounted old file system that is available on the new system. This remote file system is used as the afmTarget to pull the data.

    A sample output is as follows:

    flag                value                    description
    ------------------- ------------------------ -----------------------------------
     -T                 /gpfs/fs1                Default mount point
    mmlsmount fs1 -L
    File system fs1 is mounted on 3 nodes:
    192.168.10.100 node1
    192.168.10.101 node2
    192.168.10.102 node3
    mmlsmount rfs1 -L
    File system rfs1 is mounted on 3 nodes:
    192.168.10.100 node1
    192.168.10.101 node2
    192.168.10.102 node3
  4. Create a read-only (RO) AFM fileset on the cache site by pointing to the export from the home site, and link it.
    mmcrfileset fs1 ro1 -p afmMode=ro,afmTarget=gpfs:///gpfs/rfs1/export1,afmAutoEviction=no --inode-space new
    mmlinkfileset fs1 ro1 -J /gpfs/fs1/ro1
  5. Check the fileset.
    mmlsfileset fs1 ro1 -X

    A sample output is as follows:

    Filesets in file system 'fs1':
    Attributes for fileset ro1:
    ============================
    Status Linked
    Path /gpfs/fs1/ro1
    Id 11
    Root inode 6291459
    Parent Id 0
    Created Fri Nov 8 02:44:31 2024
    Comment
    Inode space 6
    Maximum number of inodes 100352
    Allocated inodes 100352
    Permission change flag chmodAndSetacl
    IAM mode off
    afm-associated Yes
    Permission inherit flag inheritAclOnly
    Target gpfs:///gpfs/rfs1/export1
    Mode read-only
    File Lookup Refresh Interval 30 (default)
    File Open Refresh Interval 30 (default)
    Dir Lookup Refresh Interval 60 (default)
    Dir Open Refresh Interval 60 (default)
    Async Delay disable
    Last pSnapId 0
    Display Home Snapshots no (default)
    Number of Gateway Flush Threads 4
    Prefetch Threshold 0 (default)
    Eviction Enabled no
    IO Flags 0x0
    IO Flags2 0x0
  6. (Optional) Create and link dependent filesets in the AFM RO-mode fileset. Creating dependent filesets is optional:
    • If the home data is stored in a dependent fileset and you want to map the migrated data into the same structure in the cache AFM fileset, create a matching dependent fileset.
    • If a dependent fileset is not created on the cache site, AFM creates directories in place of the dependent fileset linked path and stores all data in the directory that is mapped to the source or home path.
  7. To create a dependent fileset, complete the following steps:
    1. Stop the AFM RO-mode fileset.
      mmafmctl fs1 stop -j ro1
    2. Create dependent filesets.
      mmcrfileset fs1 dep1 --inode-space ro1
    3. Link the filesets in the AFM RO-mode fileset.
      mmlinkfileset fs1 dep1 -J /gpfs/fs1/ro1/dep1
    4. Start the AFM RO-mode fileset.
      mmafmctl fs1 start -j ro1
    5. Check whether the fileset is active.
      ls -altrish /gpfs/fs1/ro1
      /usr/lpp/mmfs/bin/mmafmctl fs1 getstate -j ro1
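After the fileset reports Active, the remaining data can be pulled from the old system; a hedged sketch of the final prefetch and verification steps (all paths and names are examples):

    # Pull the remaining metadata and data for the whole fileset
    /usr/lpp/mmfs/bin/mmafmctl fs1 prefetch -j ro1 --directory /gpfs/fs1/ro1
    # Monitor the fileset state; Active with an empty queue means that the migration has caught up
    /usr/lpp/mmfs/bin/mmafmctl fs1 getstate -j ro1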