Migration instructions for a system replacement
When you replace an existing system with a new system, you need to write a script that uses the Hardware Management Console (HMC) Web Services API to export the DPM configuration from the existing system and import that configuration to the new system. You can use any programming language or tool that you currently use to issue HMC Web Services API requests. Note that only specific migration paths are supported.
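For illustration, the following minimal Python sketch shows how such a script might export the configuration from the source system and import it into the target system by using the open-source zhmcclient library (the library that underlies the zhmc command-line tool described later in this topic). This is a sketch under stated assumptions, not a complete migration script: the export_dpm_configuration() and import_dpm_configuration() method names should be verified against the zhmcclient documentation for your version, and the host names, credentials, and CPC (system) names are placeholders.

```python
# Minimal sketch: export the DPM configuration from the source system and
# import it into the target system with the zhmcclient Python library
# (https://github.com/zhmcclient/python-zhmcclient).
# The export_dpm_configuration()/import_dpm_configuration() method names and
# all host names, user IDs, and CPC names are assumptions or placeholders;
# verify them against the zhmcclient documentation for your version.
import json
import zhmcclient

# HMC that manages the source (existing) system
source_session = zhmcclient.Session("hmc1.example.com", "SERVICE", "password1")
source_client = zhmcclient.Client(source_session)
source_cpc = source_client.cpcs.find(name="SOURCE_CPC")

# Export the DPM configuration and save it to a file for review
dpm_config = source_cpc.export_dpm_configuration()
with open("dpm-config.json", "w") as f:
    json.dump(dpm_config, f, indent=2)

# HMC that manages the target (new) system
target_session = zhmcclient.Session("hmc2.example.com", "SERVICE", "password2")
target_client = zhmcclient.Client(target_session)
target_cpc = target_client.cpcs.find(name="TARGET_CPC")

# Import the configuration into the target system
target_cpc.import_dpm_configuration(dpm_config)

source_session.logoff()
target_session.logoff()
```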
Before you begin
- Table 1 indicates whether a specific DPM migration path is supported for a system replacement.
- Each DPM version corresponds to particular machine types; for more information, see DPM versioning.
- Downgrades from one version or release to a prior version or release are not supported options. For example, you cannot downgrade from Version 3 Release 1 (R3.1) to Version 2 Release 1 (R2.1), or from R3.2 to R3.1.
- Migrating to the latest release of a given version is the only supported option; for example, you cannot migrate from DPM R2.0 to R3.1 because R3.2 is the latest Version 3 release.
Table 1. Supported DPM migration paths for a system replacement
- From DPM R2.0:
  - To DPM R3.2: Yes (with HBAs converted to FCP storage groups)
  - To DPM R4.3: Yes (with HBAs converted to FCP storage groups)
  - To DPM R5.2: Yes (with HBAs converted to FCP storage groups, and HiperSockets adapters converted to HiperSockets partition links)
- From DPM R2.1:
  - To DPM R3.2: Yes (with HBAs converted to FCP storage groups)
  - To DPM R4.3: Yes (with HBAs converted to FCP storage groups)
  - To DPM R5.2: Yes (with HBAs converted to FCP storage groups, and HiperSockets adapters converted to HiperSockets partition links)
- From DPM R3.0:
  - To DPM R3.2: No; this path is only an upgrade on the same machine type.
  - To DPM R4.3: Yes (with HBAs converted to FCP storage groups)
  - To DPM R5.2: Yes (with HBAs converted to FCP storage groups, and HiperSockets adapters converted to HiperSockets partition links)
- From DPM R3.1:
  - To DPM R3.2: No; this path is only an upgrade on the same machine type.
  - To DPM R4.3: Yes
  - To DPM R5.2: Yes (with HiperSockets adapters converted to HiperSockets partition links)
- From DPM R3.2:
  - To DPM R3.2: Not applicable.
  - To DPM R4.3: Yes
  - To DPM R5.2: Yes (with HiperSockets adapters converted to HiperSockets partition links)
- From DPM R4.3:
  - To DPM R3.2: No
  - To DPM R4.3: Not applicable.
  - To DPM R5.2: Yes (with HiperSockets adapters converted to HiperSockets partition links)
- When you migrate from DPM R4.3 or an earlier version to DPM R5.2, each HiperSockets adapter that existed on the earlier system is converted into a HiperSockets partition link, which includes a list of the partitions that were using the HiperSockets adapter, as well as the network interface cards (NICs) assigned to each partition and their associated device numbers. However, permissions on the original HiperSockets adapter are not carried over to the new system, so more users might have access to the HiperSockets partition links.
With earlier versions of DPM, access to HiperSockets adapters is controlled through the following permissions: HiperSockets adapter object, Adapter Details task, Create HiperSockets Adapter task, and Delete HiperSockets Adapter task. In contrast, with DPM R5.2, access to HiperSockets partition links is controlled through partition link object and task permissions: Partition Link object, Create Partition Link task, Partition Link Details task, and Delete Partition Link task.
- If, on the prior system, your security administrators controlled access to HiperSockets adapters only through the default roles of All System Managed Objects and System Programmer Tasks, no change is required after the migration to DPM R5.2. All of the partition link permissions are included in those two default roles.
- If your security administrators previously used HiperSockets object and task roles to control access, they need to review that access configuration and recreate it on DPM R5.2, using the partition link object and task permissions. For more information about controlling access to partition links, see the Authorization requirements topic in the HMC online help for the Configure Partition Links task.
Also, if any Secure Service Container partitions on the prior system used a HiperSockets connection to access the Secure Service Container web interface, you must reconfigure that connection to use an Open Systems Adapter-Express® (OSA) adapter.
- When you migrate from DPM R3.0 or an earlier version to DPM R3.2 or a later version, each host bus adapter (HBA) that a partition uses is converted into a dedicated FCP storage group. A storage group is a logical group of storage volumes that share certain attributes; starting with DPM R3.1, storage groups are the means through which partitions access storage resources. To ensure that this conversion is successful, make sure that the storage administrator has complied with the following SAN zoning and masking requirements:
- All world wide port names (WWPNs) that are used by partitions must be added to the zoning of all switches.
- All WWPNs that are used by partitions must be added to the host-mapping of the storage subsystems that provide the logical unit numbers (LUNs) for the storage group.
- Make sure that you have enabled the use of the HMC Web Services API on both the source and the target system. You can enable the use of the API through the HMC Customize API Settings task. For information about the authorization requirements and syntax of specific APIs, see the appropriate edition of Hardware Management Console Web Services API, which is available on IBM Documentation. Go to https://www.ibm.com/docs/en/systems-hardware, select IBM Z or IBM LinuxONE, then select your configuration, and click Library Overview on the navigation bar.
- To perform this procedure, choose a user ID that has authorization to access all of the DPM configuration data that you want to export from the existing system. If you perform the steps in this procedure with a user ID that has access to only specific DPM partitions or adapters, only the configuration data for those resources is exported. A sketch for verifying this access follows this list.
- The suggested choice is the default SERVICE user ID. API access is not enabled for the SERVICE user ID by default, so you must authorize API access for the SERVICE user ID through the HMC User Management task.
- If you choose a user ID other than the default SERVICE user ID, make sure that user ID has the following permissions:
- API access
- Object permission to both the source and target systems
- Task permission to Import Dynamic Partition Manager Configuration.
- If you have controlled user access to DPM resources, and want to transfer those authorizations, use the Save/Restore Customizable Console Data task to save the data from the source system and restore it on the target system.
- Install the latest version of the open-source command-line tool zhmc, which is available at https://github.com/zhmcclient/zhmccli. Review the accompanying release information and change history. For installation and usage instructions, start with the Readme topic, which links to the Documentation topic.
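As a quick check that the Web Services API is enabled on an HMC and that the chosen user ID has object permission to the systems you need, you can log on through the API and list the systems that the user ID is permitted to see. The following sketch uses the zhmcclient Python library with placeholder host names and credentials; only systems to which the user ID has object access are returned.

```python
# Minimal sketch: verify that the chosen user ID can log on to the HMC
# Web Services API and has object permission to the systems it needs.
# Host name, user ID, and password are placeholders; repeat the check for
# the HMC that manages the target system.
import zhmcclient

session = zhmcclient.Session("hmc1.example.com", "SERVICE", "password")
client = zhmcclient.Client(session)

# Only the CPCs (systems) that the user ID is permitted to access are returned.
for cpc in client.cpcs.list():
    print(cpc.name, cpc.get_property("dpm-enabled"))

session.logoff()
```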
About this task
- If you are migrating from DPM R3.1 or later, the configuration data also includes storage groups and templates, as well as defined FICON connections, such as sites, subsystems, fabrics, volumes, and paths.
- If you are migrating from DPM R4.3 or later, the configuration data also includes FCP tape links.
- When you migrate from DPM R4.3 or an earlier version to DPM R5.2, each HiperSockets adapter that existed on the earlier system is converted into a HiperSockets partition link. Although you can still view HiperSockets adapters through the Manage Adapters task, you need to use the Configure Partition Links task to review the new HiperSockets partition links and modify them, if necessary. To do so, you must have a customized user ID with the predefined System Programmer Tasks role or equivalent permissions.
- For more information about controlling access to partition links, see the Authorization requirements topic in the HMC online help for the Configure Partition Links task.
- For general information about partition links, see Partition links.
- When you migrate from DPM R3.0 or an earlier version to a target system with DPM R3.2 or later, each HBA that a partition uses is converted into a dedicated FCP storage group. You need to review these adapter ID changes and new storage groups, and modify them, if necessary.
- To understand the differences between storage access with DPM R3.0 or an earlier version and storage access with DPM R3.2 or later, see Figure 1.
- For more information about storage management starting with DPM R3.1, see Topics for storage administrators.
- On the source system, an administrator defines storage resources for Partition A by creating four HBAs, each with its own backing FCP-mode adapter. When the partition definition is saved, DPM generates WWPNs that the administrator exports and gives to the storage administrator to complete zoning and LUN masking tasks for the storage subsystem configuration.
- The backing adapters are configured on the source system and are physically connected to a storage subsystem through a switch. In this example, adapter 0124 is connected to Controller 1 through Switch A in Fabric 0. Adapter 0118 is also connected to Controller 1 through Switch A in Fabric 0. Similarly, adapters 01CC and 01D8 are connected to Controller 2 through Switch B in Fabric 1.
Partition A uses the WWPNs to access specific volumes on each storage subsystem. This configuration gives Partition A access to eight storage volumes.
- During the migration process, DPM creates a dedicated (not shared) FCP storage group, SG01, and attaches it to Partition A. The HBAs and WWPNs that were defined for Partition A on the source system become part of the infrastructure for the storage group. Through the storage group, Partition A on the target system has access to the same eight volumes.
- Note that the adapter IDs that were in use on the source system are copied to the target system. You can change the adapter IDs by providing an adapter mapping for the migration process in step 4; a sketch of such a mapping follows this list.
- Note that, on a target system with DPM R3.1 or later, switches must be connected to all storage controllers that provide LUNs for a storage group. Because storage group SG01 uses volumes on both Controller 1 (LUNs 1-4) and Controller 2 (LUNs 5-8), Switch A must be connected to both Controller 1 and Controller 2, and Switch B also must be connected to both controllers.
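If you plan to supply an adapter mapping in step 4, it can be expressed as a list of old-to-new adapter ID pairs. The following sketch continues the export sketch near the beginning of this topic and shows one possible representation; the "adapter-mapping", "old-adapter-id", and "new-adapter-id" field names, the placement of the mapping in the configuration data, and the new adapter IDs are all assumptions or placeholders that you should verify against the Import Dynamic Partition Manager Configuration operation in the HMC Web Services API book for your HMC version.

```python
# Sketch of an adapter mapping for the import step. The "old" adapter IDs are
# taken from the example above; the "new" adapter IDs are hypothetical
# placeholders for the target system. The field names and the placement of
# the mapping are assumptions; verify them against the Import DPM
# Configuration operation in the HMC Web Services API book.
import json

with open("dpm-config.json") as f:
    dpm_config = json.load(f)

adapter_mapping = [
    {"old-adapter-id": "0124", "new-adapter-id": "0104"},
    {"old-adapter-id": "0118", "new-adapter-id": "0108"},
    {"old-adapter-id": "01CC", "new-adapter-id": "01AC"},
    {"old-adapter-id": "01D8", "new-adapter-id": "01B8"},
]

# Assumed placement: the mapping is added to the exported configuration data
# before that data is passed to the import operation.
dpm_config["adapter-mapping"] = adapter_mapping
```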
Procedure
Results
- If you migrated from DPM R4.3 or an earlier version to DPM R5.2
- For each HiperSockets adapter that existed on the source system, DPM creates a HiperSockets partition link, which includes a list of the partitions that were using the HiperSockets adapter, as well as the network interface cards (NICs) assigned to each partition and their associated device numbers. For each partition in the partition list, the number of NICs in the HiperSockets partition link matches the number of NICs that were defined for use with the HiperSockets adapter on the source system.
- The new HiperSockets partition link is attached to each partition in the partition list.
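In addition to using the Configure Partition Links task, you can review the HiperSockets partition links that DPM created through the Web Services API. The following sketch assumes a zhmcclient version that exposes partition links through console.partition_links and a "type" property with the value "hipersockets"; verify both against the zhmcclient documentation and the Web Services API book for your HMC version. Host name and credentials are placeholders.

```python
# Minimal sketch: list the HiperSockets partition links on the target system.
# Assumes a zhmcclient version that exposes partition links as
# console.partition_links and a "type" property of "hipersockets"; verify the
# manager and property names against the zhmcclient documentation.
import zhmcclient

session = zhmcclient.Session("hmc2.example.com", "SERVICE", "password")
client = zhmcclient.Client(session)
console = client.consoles.console

for plink in console.partition_links.list():
    if plink.get_property("type") == "hipersockets":
        print(plink.name)

session.logoff()
```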
- If you migrated from DPM R3.1 or later to a target system with a later DPM version
- If you are using an auto-start list to start partitions on the target system, and one or more of those partitions are booted from a storage volume, the start process might fail if the attached storage group is not yet in the Complete fulfillment state. To reduce the possibility of partition start failures, DPM assigns highest priority to storage groups that contain at least one boot volume, so that automatic discovery of the logical unit numbers (LUNs) begins as quickly as possible on the target system.
- If you migrated from DPM R3.0 or earlier to a target system with DPM R3.2 or later
- For each partition on the source system that has HBAs assigned for access to storage, DPM creates a dedicated FCP storage group and reuses the HBAs and WWPNs for that storage group. The new storage group is attached to the same partition on the target system.
- If the partition definition specified an HBA, port, and LUN for the Storage (SAN) boot option, the new storage group contains a storage volume that is mapped to the same port and LUN. If the partition does not use a boot volume, the storage group that DPM creates does not contain any storage volumes.
- For each new storage group, DPM sets the Connectivity attribute to the number of distinct adapters that were in use by the HBAs for the partition on the source system. For example, the Connectivity attribute for storage group SG01 in Figure 1 is set to 4 because of the four adapters that were selected to back the HBAs.
- DPM starts automatic discovery of the LUNs that are configured for the WWPNs assigned to each storage group.
- If no LUNs are discovered (that is, no zoning and masking has been done), DPM sets the fulfillment state of the storage group to Checking migration. For any storage groups in this fulfillment state, DPM attempts LUN discovery every 10 minutes.
- If a discovery attempt results in the same set of LUNs that are detected for all WWPNs over the required number of adapters (as defined by the Connectivity attribute for the storage group), DPM automatically accepts the volumes and sets the fulfillment state of the storage group to Complete. For any storage groups in this fulfillment state, DPM attempts LUN discovery every 24 hours.
- If a discovery attempt results in LUNs that are detected for all WWPNs, but not all WWPNs are configured for the same set of LUNs over the required number of adapters (as defined by the Connectivity attribute for the storage group), DPM sets the fulfillment state of the storage group to Pending with mismatches. For any storage groups in this fulfillment state, DPM attempts LUN discovery every 10 minutes.
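To review the outcome of the conversion programmatically, you can list the FCP storage groups on the target system together with their fulfillment state and Connectivity value. The following sketch uses the zhmcclient library with placeholder host name, credentials, and system name; the "type", "fulfillment-state", and "connectivity" property names are based on the storage group object of the Web Services API and should be verified for your HMC version.

```python
# Minimal sketch: list the FCP storage groups on the target system with their
# fulfillment state and connectivity value, so the results of the
# HBA-to-storage-group conversion can be reviewed. Property names are
# assumptions to verify against the HMC Web Services API book; host name,
# credentials, and CPC name are placeholders.
import zhmcclient

session = zhmcclient.Session("hmc2.example.com", "SERVICE", "password")
client = zhmcclient.Client(session)
console = client.consoles.console
target_cpc = client.cpcs.find(name="TARGET_CPC")

for sg in console.storage_groups.list(filter_args={"cpc-uri": target_cpc.uri}):
    if sg.get_property("type") == "fcp":
        print(sg.name,
              sg.get_property("fulfillment-state"),
              sg.get_property("connectivity"))

session.logoff()
```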
What to do next
- If your security administrators used HiperSockets object and task roles to control access to the HiperSockets adapter on the source system, they need to review that access configuration and recreate it on DPM R5.2, using the partition link object and task permissions. For more information about controlling access to partition links, see the Authorization requirements topic in the HMC online help for the Configure Partition Links task.
- If any Secure Service Container partitions on the source system used a HiperSockets connection to access the Secure Service Container web interface, you must reconfigure that connection to use an Open Systems Adapter-Express (OSA) adapter. To do so, open the Partition Details task for the Secure Service Container partition, and go to the Network section to define a new network interface card that is associated with an OSA adapter.
- Go to Storage Overview in the Configure Storage task to check storage groups and adapter assignments. If necessary, you can modify the storage group names and other details, or reconfigure system adapters through Storage Cards.
Check the fulfillment state of the storage groups to determine what actions you might need to take.
- If the fulfillment state does not change from Checking migration to either Complete or Pending with mismatches, have the storage administrator fix the configuration in the storage subsystem. Then open the Connection Report on the Storage Details page for the storage group, and select the Update report icon so that DPM rechecks the storage group connections and changes the fulfillment state.
- For a fulfillment state of Pending with mismatches, go to the Volumes tab on the Storage Details page for the storage group. All mismatched volumes are displayed at the top of the Volumes table and are marked with a warning icon. Volumes are considered mismatched when one or more of the following conditions are true.
- The volumes are zoned and masked equally for all of the WWPNs. For these volumes, system administrators have the option of either deleting or keeping the mismatched volumes in the storage group. They can select one or more volumes to keep or delete.
- The volumes are masked and zoned only for a subset of WWPNs. For these volumes, storage administrators must correct the zoning and masking configurations for the WWPNs.
For more information about fulfillment states and possible corrective actions, see The Storage Group details page (this information is also available in the online help for the Storage Group details page).
- If any partitions have multiple HBAs that are backed by the same adapter, consider modifying the storage group to make sure that the required number of HBAs are retained if the storage group is detached and then reattached to the same or another partition. For example, suppose that your source system has one partition with five HBAs that are backed by only two adapters; a sketch that checks for this condition follows this example.
- During the migration process, DPM creates the dedicated FCP storage group, retains all five HBAs that are defined for the partition, and sets the Connectivity attribute of the storage group to 2, which is the number of distinct adapters that were in use by the five HBAs.
- After the migration, an administrator detaches the storage group from the partition.
- If an administrator then reattaches the storage group to the same partition, or attaches it to a different partition, DPM uses only the current Connectivity attribute setting to determine how many HBAs to create when the storage group is reattached. In this case, DPM creates only two HBAs, not five.
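To find storage groups that might lose HBAs in this way, you can compare the Connectivity value of each FCP storage group with the number of virtual storage resources (the HBAs) it currently contains. The following sketch is an illustration based on the zhmcclient storage group support; the manager and property names are assumptions to verify for your version, and the host name, credentials, and system name are placeholders.

```python
# Minimal sketch: flag FCP storage groups whose current number of HBAs
# (virtual storage resources) is higher than their connectivity value, because
# only the connectivity value determines how many HBAs are re-created after a
# detach and reattach. Manager and property names are assumptions to verify
# against the zhmcclient documentation; names and credentials are placeholders.
import zhmcclient

session = zhmcclient.Session("hmc2.example.com", "SERVICE", "password")
client = zhmcclient.Client(session)
console = client.consoles.console
target_cpc = client.cpcs.find(name="TARGET_CPC")

for sg in console.storage_groups.list(filter_args={"cpc-uri": target_cpc.uri}):
    if sg.get_property("type") != "fcp":
        continue
    hba_count = len(sg.virtual_storage_resources.list())
    connectivity = sg.get_property("connectivity")
    if hba_count > connectivity:
        print(f"{sg.name}: {hba_count} HBAs, but connectivity is {connectivity}")

session.logoff()
```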
- If you have imported configuration data for any Secure Service Container partitions, use the Partition Details task to check the following information.
- Go to the General section to reset the default master user ID, the password, or both.
- Go to the Boot section to check the install location for the appliance that the partition hosts. If necessary, reinstall the appliance.
- After you have successfully migrated to the new system, use the Backup Critical Data task on the Support Element.
- When you detach an FCP storage group from a partition, DPM does not preserve the HBAs, device numbers, or backing adapters that were in use for that storage group. Consequently, if you reattach the storage group to the same partition, the device numbers and the backing adapters of the HBAs are not guaranteed to be the same as they were before the detachment. The same condition is true if you attach the FCP storage group to a new partition: the device numbers and the backing adapters of the HBAs are not guaranteed to be the same as they were when the storage group was attached to a different partition.
If the operating system image for the partition resides on a boot volume in the storage group that you reattach, the operating system might not start when the partition is restarted. (Operating systems that are started from a storage volume usually have a preconfigured device number and a path to the volume.) To avoid this situation, administrators must review the device numbers and the backing adapters of the HBAs, after reattaching the storage group and before restarting the partition. If necessary, the administrators must change the device number to match the preconfigured device number for the operating system, and make sure that the preconfigured device number is assigned to the backing adapter that was assigned to the HBA before the detachment. For more information, see Instructions: Detaching and reattaching an FCP storage group.
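The following sketch illustrates one way to list the device number and backing adapter port of each HBA in a storage group after it is reattached, so that administrators can compare them with the values that the operating system expects. The "device-number" and "adapter-port-uri" property names are based on the virtual storage resource object in the Web Services API and should be verified for your HMC version; the storage group name, host name, and credentials are placeholders.

```python
# Minimal sketch: after reattaching an FCP storage group, list the device
# number and backing adapter port of each HBA (virtual storage resource) so
# they can be compared with the values the operating system expects.
# Property names, the storage group name, host name, and credentials are
# assumptions or placeholders; verify them for your HMC and zhmcclient version.
import zhmcclient

session = zhmcclient.Session("hmc2.example.com", "SERVICE", "password")
client = zhmcclient.Client(session)
console = client.consoles.console

sg = console.storage_groups.find(name="SG01")
for vsr in sg.virtual_storage_resources.list():
    print(vsr.name,
          vsr.get_property("device-number"),
          vsr.get_property("adapter-port-uri"))

session.logoff()
```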