Migrating storage partitions between systems
IBM Storage Virtualize supports the non-disruptive migration of storage partitions between systems. A partition can be migrated from one Storage Virtualize system to another without any application downtime. Typical use cases include:
- Migrating to a new storage array before decommissioning the original system.
- Load-balancing by migrating storage partitions from overloaded systems to other systems.
When a storage partition is migrated:
- All the objects that are associated with the storage partition are moved to the target storage system.
- Host I/O is served from the target storage system once the migration completes.
- At the end of the migration process, the storage partition and all of its objects are removed from the source system.
The chpartition command automates the migration procedure, including intermediate steps such as setting up Fibre Channel partnerships between the systems. For more information, see the chpartition command.
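As an illustration of the kind of intermediate step that the command automates, a Fibre Channel partnership can also be created manually on both systems. The following is a minimal sketch; the system names and bandwidth values are assumptions, and the mkfcpartnership options should be checked against the documentation for your release:
# On the source system (system names are hypothetical):
mkfcpartnership -linkbandwidthmbits 1024 -backgroundcopyrate 50 system_B
# Repeat on the target system so the partnership is configured in both directions:
mkfcpartnership -linkbandwidthmbits 1024 -backgroundcopyrate 50 system_A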
Prerequisites
- Migrating a storage partition requires both Fibre Channel and IP connectivity between the source and target storage systems. The connectivity requirements are the same as for configuring high availability. Review the high availability requirements to ensure that the storage partition can be migrated and that the host operating systems support this feature. See Planning high availability.
- Make sure that each system's certificate is added to the other system's truststore, with REST API usage enabled. For more information, see the mktruststore command.
Note: The time must be synchronized between the systems that are involved in a storage partition migration. The best way to achieve this is to configure each system to use a Network Time Protocol (NTP) service. A combined setup sketch follows this list.
- Confirm that both systems are members of the same Flash Grid, or that neither system is a member of a Flash Grid, and that both systems meet the requirements for Flash Grid. For more details, see Flash Grid.
- The two systems can either have a partnership already configured or be zoned such that the migration process can create one. Use the partnership views in the GUI or CLI to validate this requirement.
- Although the systems are visible to each other, the identified target system can have multiple storage pools that are suitable to host the migrated storage partition. Establish a pool link between the source and target systems. For more information, see the chmdiskgrp command. To create linked child pools, use the Pools panel in the management GUI on the system that does not yet have the child pools.
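The following is a minimal sketch of how some of the prerequisites above might be configured and verified from the CLI. The IP address, certificate file name, and option usage are assumptions; confirm the exact syntax in the chsystem, mktruststore, and lspartnership documentation for your release:
# Point the system at an NTP service so clocks stay synchronized (IP address is hypothetical):
chsystem -ntpip 192.0.2.10
# Add the partner system's exported certificate to the local truststore with
# REST API usage enabled (file name is hypothetical):
mktruststore -restapi on -file /tmp/partner_system.pem
# Confirm that a partnership exists between the systems, or that one can be created:
lspartnership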
Procedure
- Run the chpartition command with the -location option to migrate the storage partition to its required system location.
The following example initiates a migration of the storage partition to the designated target system:
chpartition -location <remote_system> mypartition1
The resulting output:
No feedback
Note: If two or more systems have the same name, specify the remote system ID instead of the remote system name when initiating the migration.
- To check the migration status, run the lspartition command.
Note: You can monitor the progress of the migration, including the amount of data that remains to be copied, by using the lsvolumegroupreplication command.
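For example, a minimal monitoring sequence from the CLI might look like the following; the partition name is hypothetical, and the exact output fields vary by release:
# Check the overall migration state of the partition (the migration_status
# field indicates whether a migration is active or queued):
lspartition mypartition1
# Check replication progress, including how much data remains to be copied:
lsvolumegroupreplication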
- Once the storage partition's data and configuration have been migrated to the target storage system, an event prompts the storage administrator to confirm that the affected hosts have discovered paths to the volumes in the target storage system.
Note: The event that asks for confirmation of path discovery by the mapped hosts is not raised if all the hosts that are mapped to the partition are known to rescan their storage automatically at regular intervals. This host attribute can be managed by using the autostoragediscovery option in the mkhost and chhost commands.
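For example, the attribute might be set on an existing host as follows; the host name is hypothetical, and the accepted value syntax is an assumption that should be checked in the chhost documentation:
# Mark a host as one that rescans storage automatically, so the
# path-discovery confirmation event is not raised for it (value syntax assumed):
chhost -autostoragediscovery yes myhost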
- Another event prompts the storage administrator to remove the copy of the data and configuration of the storage partition from the source storage system to complete the migration. Before committing the migration, verify the hosts' multipath configurations, ensure path availability and redundancy, and assess I/O performance on the target system. This event is raised, and must be fixed, on the target storage system.
Note: If the administrator observes any issues with the migrated storage partition, the migration can be canceled and the partition rolled back to its original system before the event is fixed. A rollback operation might require a confirmation at the source system to either continue or cancel the migration operation. This decision can be made if the storage administrator is able to resolve the environment issues at the target system that prompted the rollback in the first place.
To continue the migration operation without proceeding with the rollback, run the following at the source system:
chpartition -migrationaction fixeventwithchecks <partition_id/name>
To continue the rollback and cancel the partition migration, run the following at the source system:
chpartition -override -location <source_system> <partition_id>
- An informational event at the target storage system marks the completion of the storage partition migration.
Note: You can confirm that the migration is complete by using the migration_status field that is shown by the lspartition command, which indicates that no migration activity is active or queued for that storage partition.
An ongoing storage partition migration can be aborted by specifying a new migration location with the -override option. The migration to the new location is queued behind any existing queued migrations. For more information, see chpartition.
An active or queued migration operation is aborted when its source system is specified as the location. This requires the -override option because aborting removes the data that has been replicated up to that point, as shown in the following example.
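A minimal sketch, assuming hypothetical system names in which system_A is the source and a migration is currently running toward system_B:
# Redirect the migration to a different target; the new migration is
# queued behind any migrations that are already queued:
chpartition -override -location system_C mypartition1
# Abort the migration entirely by specifying the source system as the
# location; the data already replicated to the target is removed:
chpartition -override -location system_A mypartition1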
Limits and restrictions
- Draft storage partitions, and storage partitions with policy-based high availability or data replication relationships, are currently not supported for automated migration.
- Storage partition migration is supported only over Fibre Channel partnerships.
- Only one storage partition can be migrated at a time from a system. Any consecutive migrations that are attempted are queued and scheduled automatically in the order of invocation as the earlier migrations complete, as illustrated in the sketch that follows this list.
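For example, issuing a second migration while one is in progress simply queues it; the partition and system names here are hypothetical:
# The first migration starts immediately:
chpartition -location system_B mypartition1
# A second migration from the same system is queued and starts
# automatically when the first one completes:
chpartition -location system_B mypartition2
# The migration_status field shows which migrations are active or queued:
lspartition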
Rolling back a migration
A migrated storage partition that is serving I/O from the target location (after the event for host health checks is fixed at the source) can be rolled back to its source location. You can roll back a migration from the target system, as shown in the following example.
A rollback might be needed if the migrated storage partition does not perform as expected on the target system after migration. Rolling back switches I/O operations back to the storage partition at the source location. This operation does not remove the synchronized copy of the storage partition at the target location.
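A minimal rollback sketch, run from the target system as described above; the system and partition names are hypothetical:
# Roll the partition back to its original (source) system; -override is
# required because the operation abandons the in-progress migration:
chpartition -override -location system_A mypartition1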