Migrating storage partitions between systems

IBM Storage Virtualize supports the non-disruptive migration of storage partitions between systems, which enables a partition to be migrated from one Storage Virtualize system to another without any application downtime.

Storage partition migration can be used in the following cases:
  • Migrating to a new storage array before decommissioning the original system.
  • Load-balancing by migrating storage partitions from overloaded systems to other systems.

Migration relocates a storage partition from a source system to a target system. During the migration, the following sequence of events occurs:
  • All resources that are associated with the storage partition are created on the migration target system, and the copy process starts from the source system to the target system.
  • Host I/O is served from the migration target storage system once the synchronization process is complete.
  • At the end of the migration process, the storage partition and all of its resources are removed from the source system.

A storage partition that has replication configured for disaster recovery (DR) can be migrated.

If a storage partition contains volume groups with snapshot policies assigned, snapshot policies are created on the target system and assigned to the volume groups as part of the migration process.

Limits and restrictions

  • Draft storage partitions and storage partitions that are configured for policy-based high availability are not supported for storage partition migration.
  • Storage partition migration is supported only with Fibre Channel partnerships between systems.
  • Storage partition migration is supported only with Fibre Channel hosts.
  • Only one storage partition can be migrated at a time from a system. Any additional migrations that are attempted are queued and are started automatically, in the order in which they were requested, as the earlier migrations complete.
  • If a system has storage partitions that are configured for high availability, partition migration can only be used with the HA partner system.

Prerequisites

Before you can use non-disruptive storage partition migration, ensure that the following prerequisites are met:
  1. Migrating a storage partition requires both Fibre Channel and IP connectivity between the source and target storage systems. The requirements are the same as those for configuring high availability (HA). Review the high availability requirements to ensure that the storage partition can be migrated and that the host operating systems support this feature. For more information, see Planning high availability.

    1. Make sure that each system has the other system's certificate added to its truststore, with REST API usage enabled. For more information, see System Certificates.
      Note: The time must be synchronized between the systems that are involved in partition migrations. The best way to achieve this is to configure each system to use a Network Time Protocol (NTP) service.
  2. Confirm that both systems are members of the same FlashSystem grid, or that neither system is a member of a FlashSystem grid, and that both systems meet the requirements for FlashSystem grid. For more information, see FlashSystem grid.
  3. The two systems must either have a partnership that is already configured or be zoned such that a partnership can be created by the migration process. Use the partnership views in the GUI or the CLI to verify this requirement.
  4. You must link storage pools between the source and target systems. For more information, see Pool links. To create linked child pools, use the Pools panel in the management GUI on the system that does not yet have the child pools created. Pool links can be configured between systems that do not have a partnership by using the chmdiskgrp command, but the links are used only when a partnership exists between the systems. A CLI sketch follows this list.
  5. If a storage partition has replication that is configured for disaster recovery (DR), the target system that is used for the partition migration and the DR-linked system must either have a partnership that is already configured or be zoned so that the migration process can create a partnership between them.
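
The following is a minimal CLI sketch of linking a pool between the source and target systems, as referenced in step 4. The -replicationpoollinkuid option name and the placeholder values are assumptions; confirm the exact syntax in the chmdiskgrp reference for your code level.

    # Assumed sketch: set the same pool link UID on the corresponding pool of each system
    # so that the two systems can associate the linked pools with each other.
    chmdiskgrp -replicationpoollinkuid <pool_link_uid> <pool_name>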

Procedure

Placement advisory

The FlashSystem grid GUI shows an AI-powered list of target storage systems for storage partition migration. This information is provided by IBM® Storage Insights, which analyzes the historical data of all systems to find the best placement for partitions in the FlashSystem grid.

Ensure that the following prerequisites are met:
  • All systems in the FlashSystem grid must be added to the same IBM Storage Insights tenant.
  • The system must have an IBM Storage Insights Pro license.
  • The IBM Storage Insights API key must be registered with the FlashSystem.

Using the GUI to migrate a storage partition

In the GUI, storage partition migration can be initiated only from the Storage partitions page of the FlashSystem grid. A storage partition can be migrated only to systems that are part of the same FlashSystem grid.

To migrate a storage partition, go to FlashSystem grid > Storage partitions and select Migrate partition. You can use the Partition placement option on the Storage partitions panel to begin AI-powered evaluation of systems.

If user intervention is needed to complete a partition migration, the Migration status link guides you to take the necessary action.
Figure 1. Storage partitions page of the FlashSystem grid.

Using IBM Storage Insights to migrate a storage partition

You can initiate a storage partition migration for non-disaster recovery storage partitions directly from IBM Storage Insights. Before you begin, use the storageinsightscontrolaccess option in the chsystem command to enable control access on the FlashSystem. For more information about configuring and using IBM Storage Insights for partition migration, see the IBM Storage Insights Documentation.
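
For example, the following is a minimal sketch of enabling control access from the CLI; the value that is shown is an assumption, so confirm the accepted values in the chsystem reference for your code level:

    chsystem -storageinsightscontrolaccess yes   # 'yes' is an assumed value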

Using the CLI to migrate a storage partition
  1. Run the chpartition command with the -location option to migrate the storage partition to a different system.
  2. The following example initiates a migration of a storage partition to the designated target system:

    chpartition -location <remote_system> mypartition1
     Note: If two or more systems have the same name, specify the remote system ID instead of the remote system name when you initiate the migration.
  3. To check the migration status, run lspartition.
    Note: You can monitor the progress of the migration, including the amount of data remaining to be copied and the estimated time of completion, by using the lsvolumegroupreplication command.
  4. After the storage partition's data and configuration have been migrated to the target storage system, you must confirm that the affected hosts have discovered paths to the volumes on the target storage system.
    Note: The event for confirmation of path discoveries by the mapped hosts is not raised if all the hosts that are mapped to a partition are known to rescan the storage automatically at regular intervals, and the system detects the storage discoveries that are driven by the hosts. These host attributes can be managed by using the autostoragediscovery option in the mkhost and chhost commands.
  5. To complete the migration, remove the copy of the data and configuration of the storage partition from the source storage system. Before you commit the migration, verify the hosts' multipath configurations, ensure path availability and redundancy, and assess I/O performance on the target system.
    Note: If you observe any issues with the migrated storage partition, you can cancel the migration and roll the partition back to its original system before you fix the event.
    The rollback operation requires confirmation at the source system to either continue or cancel the migration operation. If you can resolve the issues at the target system, you can continue the migration. To continue the migration without rolling back, run the following command on the source system:
    chpartition -migrationaction fixeventwithchecks <partition_id/name>
    To continue the rollback and cancel the partition migration, run the following command on the source system:
     chpartition -override -location <source_system> <partition_id>
  6. If the storage partition has replication configured for DR, you might need to temporarily pause or resume DR replication.
    Note: When you migrate a storage partition configured for DR, if the partition contains recovery volume groups, DR replication must be paused temporarily by setting all recovery volume groups in the partition as independent. Before committing or aborting a migration, you can restart replication for DR on the volume groups that were made independent as part of the migration process. You must restart replication from the system that you want to use as the production copy for each volume group. To restart replication in the same direction as before the migration was started, with the volume groups on the local system configured as recovery copies, you must log into the remote system (in the DR-linked partition) and restart replication from that system.
  7. If the storage partition on the source system contains volume groups with snapshot policies assigned, go to the target storage system, create the snapshot policies, and assign them to the volume groups in the partition.
  8. An informational event at the target storage system marks the completion of the storage partition migration.

    Note:

    You can monitor the migration by using the migration_status field that is shown by the lspartition command; when the migration is complete, the field indicates that no migration activity is active or queued for that storage partition.

    If you specify a different target location in an ongoing storage partition migration, the ongoing migration is aborted and the partition migration to the new target location is queued. You can use the -override option in the chpartition command to abort an ongoing partition migration and specify a new target location.

    You can abort an active or queued migration operation by specifying the source system as the location. This requires the -override option because the abort operation removes the data that was replicated up to that point. A short sketch of these monitoring and override commands follows this procedure.

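
The following is a minimal sketch of the monitoring and override commands that are referenced in the preceding steps; the partition and system names are placeholders:

    lspartition mypartition1                                            # check the migration_status field for the partition
    lsvolumegroupreplication                                            # monitor the remaining data and the estimated completion time
    chpartition -override -location <new_target_system> mypartition1   # abort the ongoing migration and queue a migration to a new target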

Completing the migration

If any user intervention is needed to complete a partition migration, click the status link in the Migration status column, which guides you to take the necessary action. You can do the following tasks:

Stop disaster recovery (DR)
When you migrate a storage partition configured for DR, if the partition contains recovery volume groups, DR replication must be paused temporarily by setting all recovery volume groups in the partition as independent.
Automated host rescan
Hosts that are mapped to a partition can be enabled to rescan the storage automatically at regular intervals, and the system detects the discovery status and the I/O activity. A host rescan of the storage systems recognizes and updates all the paths.

To initiate a rescan of hosts on the storage systems, go to Storage partition > Host. Then return to the migration wizard and click Mark as complete.

Note:

The event for confirmation of path discoveries by mapped hosts is not triggered if all hosts, which are associated with a storage partition, automatically rescan the storage at regular intervals. This behavior is controlled by host attributes that can be configured by using the autostoragediscovery option in mkhost and chhost commands.
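
For example, the following is a minimal sketch of enabling automatic storage discovery on an existing host; the value that is shown is an assumption, so confirm the accepted values in the chhost reference for your code level:

    chhost -autostoragediscovery yes <host_id_or_name>   # 'yes' is an assumed value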

Automated host actions can be performed by the following automation types:
Restart disaster recovery (DR)
Before committing or aborting a migration, you can restart replication for DR on the volume groups that were made independent as part of the migration process. You must restart replication from the system that you want to use as the production copy for each volume group. To restart replication in the same direction as before the migration was started, with the volume groups on the local system configured as recovery copies, you must log in to the remote system (in the DR-linked partition) and restart replication from that system.
Roll back a migration
A migrated storage partition that serves I/O from the target system (after the host health checks at the source) can be rolled back to its source location. You roll back a migration from the target system.
Abort a migration
To discontinue the migration process, you can abort the migration from the source system.
Commit a migration
When the migration is almost complete, the configuration and data are copied to the target system, and you can commit the migration to finalize the changes. Committing the migration on the target system completes the migration process.

Rolling back a migration

Rollback is needed if the migrated storage partition does not perform as expected on the target system after migration. The rollback operation switches the I/O operations to the storage partition at the source location.

A migrated storage partition that is serving I/O from the target location (after you check and fix the host health at the source) can be rolled back to its source location. To roll back a migration from the target system, use the -override option in the chpartition command and specify the source system as the location, as shown in the following example.
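
For example, the following command uses the same syntax that is shown in the CLI procedure; the names are placeholders:

    chpartition -override -location <source_system> <partition_id_or_name>   # run from the target system to initiate the rollback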

If the rollback operation fails, or if you do not want to continue with the rollback, you can cancel the ongoing rollback operation by using the chpartition command. After you cancel the rollback, the migration resumes from the point at which the rollback operation was triggered.