Migrating storage partitions between systems

IBM Storage Virtualize supports the non-disruptive migration of storage partitions between systems. This enables a partition to be migrated from one Storage Virtualize system to another without any application downtime.

Storage partition migration can be used in the following cases:
  • Migrating to a new storage array before decommissioning the original system.
  • Load-balancing by migrating storage partitions from overloaded systems to other systems.
Migration relocates a storage partition from a source system to a target system. During a migration, the following sequence of events occurs:
  • All the resources associated with the storage partition are created on the migration target system, and the copy process starts from the source system to the target system.
  • Host I/O is served from the migration target storage system once the synchronization process is complete.
  • At the end of the migration process, the storage partition and all of its resources are removed from the source system.

Use the chpartition command to automate the storage partition migration procedure. The command automates intermediate steps, such as setting up a Fibre Channel partnership between the systems and establishing a policy-based high availability replication policy. For more information, see chpartition.

Limits and restrictions

  • Draft storage partitions and storage partitions configured for policy-based high availability are not supported for storage partition migration.
  • Storage partition migration is supported only with Fibre Channel partnerships between systems.
  • Storage partition migration is supported only with Fibre Channel hosts.
  • Only one storage partition can be migrated at a time from a system. Additional migration requests are queued and are started automatically, in the order in which they were issued, as earlier migrations complete.
  • If a system has storage partitions configured for high availability, partition migration can only be used with the HA partner system.

Prerequisites

Before you can use the non-disruptive storage partition migration function, ensure that the following prerequisites are met:
  1. Migrating a storage partition requires both Fibre Channel and IP connectivity between the source and target storage systems. The requirements are the same as those for configuring high availability (HA). Review the high availability requirements to ensure that the storage partition can be migrated and that the host operating systems support this feature. For details, see Planning high availability.

    1. Make sure that each system's certificate is added to the truststore of the other system, with REST API usage enabled. For more information, see mktruststore.
      Note: The systems that are involved in a storage partition migration must have their time synchronized. The best way to achieve this is to configure each system to use a Network Time Protocol (NTP) service.
  2. Confirm that both systems are members of the same FlashSystem grid, or that neither system is a member of a FlashSystem grid, and that both systems meet the requirements for FlashSystem grid. For more details, see FlashSystem grid.
  3. Use the lspartnershipcandidate command to make sure that the source and target systems are correctly zoned and visible to each other. Alternatively, if the target system is already part of a partnership, use the lspartnership command to check the partnership status between the source system and that system. For more information, see lspartnershipcandidate.
  4. Even when the systems are visible to each other, the identified target system can have multiple storage pools that are suitable to host the migrated storage partition. Establish a pool link between the source and target systems, as shown in the example after this list. For more information, see chmdiskgrp.
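
The following commands sketch how these prerequisite checks might be run from the CLI. The certificate file name, the pool name Pool0, and the pool link UID are illustrative values, and the -restapi and -replicationpoollinkuid parameters are assumptions; confirm the exact syntax in the mktruststore and chmdiskgrp documentation for your code level.

    # Add the partner system's certificate to the truststore with REST API usage enabled
    # (the -restapi parameter is an assumption; check mktruststore for your code level)
    mktruststore -restapi on -file /dumps/partner_system_cert.pem

    # Confirm that the partner system is correctly zoned and visible as a partnership candidate
    lspartnershipcandidate

    # If a partnership is already configured, check its status instead
    lspartnership

    # Link a suitable storage pool on each system by assigning the same link UID on both systems
    # (the -replicationpoollinkuid parameter is an assumption; check chmdiskgrp for your code level)
    chmdiskgrp -replicationpoollinkuid 00000000000000000000000000000001 Pool0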

Procedure

  1. Run the chpartition command with the -location option to migrate the storage partition to a different system.
  2. For example, the following command initiates a migration of the storage partition mypartition1 to the designated target system:

    chpartition -location <remote_system> mypartition1
    Note: If two or more systems have the same name, specify the remote system ID instead of the remote system name when initiating the migration.
  3. To check the migration status, run lspartition.
    Note: You can monitor the progress of the migration, including the amount of data remaining to be copied, using the lsvolumegroupreplication command.
  4. After the storage partition's data and configuration have been migrated to the target storage system, you must confirm that the affected hosts have discovered paths to the volumes on the target storage system, and then fix the corresponding host health check event at the source system.
  5. Another event then prompts the storage administrator to remove the copy of the storage partition's data and configuration from the source storage system to complete the migration.
    Note: If the administrator observes any issues with the migrated storage partition, the migration can be canceled and the partition rolled back to its original system before the event is fixed.
  6. An informational event at the target storage system marks the completion of the storage partition migration.

    Note:

    You can monitor the migration by using the migration_status field that is shown by the lspartition command, which indicates whether any migration activity is active or queued for that storage partition.

    An ongoing storage partition migration can be aborted by specifying a new migration location with the -override option. The migration to the new location is queued behind any existing queued migrations. For more information, see chpartition.

    An active or queued migration is also aborted when its source system is specified as the new location. The -override option is required because the abort operation removes the data that has been replicated up to that point. For an illustrative command sequence, see the example after this note.
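
The following sequence is an illustrative sketch of monitoring and aborting a migration, assuming the partition name mypartition1 from the earlier example; verify the exact parameters in the lspartition, lsvolumegroupreplication, and chpartition documentation for your code level.

    # Check the migration state of the partition (see the migration_status field)
    lspartition mypartition1

    # Monitor replication progress, including the amount of data remaining to be copied
    lsvolumegroupreplication

    # Abort the active or queued migration by specifying the source system as the new
    # location; -override is required because the data replicated so far is removed
    chpartition -location <source_system> -override mypartition1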

Rolling back a migration

A migrated storage partition that is serving I/O from the target location (after the event for host health checks at the source is fixed) can be rolled back to its source location. You can roll back a migration from the target system. For an illustrative command, see the example at the end of this section.

A rollback operation might be needed if the migrated storage partition does not perform as expected on the target system after migration. The rollback operation switches host I/O back to the storage partition at the source location. This operation does not remove the synchronized copy of the storage partition at the target location.
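
The following command is a sketch of a rollback invocation. It assumes that rolling back uses the same chpartition -location mechanism that is described for aborting a migration and that the partition is named mypartition1; confirm the procedure in the chpartition documentation before using it.

    # From the target system, switch I/O for the partition back to its original (source) system;
    # -override is required to confirm the rollback
    chpartition -location <original_source_system> -override mypartition1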