How To
Summary
The most common approach to migrating shared volume groups from the current storage array to a new one in a PowerHA cluster is to mirror and then unmirror the volume group. However, there can be business requirements to preserve a copy of the volume group on the old storage array as a backup.
In this storage migration guide, we show how to migrate shared concurrent volume groups managed by PowerHA to new storage and keep a copy of the volume group on the old storage. No downtime is required.
Objective
The purpose of this document is to provide clear guidelines for AIX system administrators on:
1- How to migrate PowerHA shared volume groups to a new storage array and keep an independent copy of the volume group on the old storage.
2- How to avoid gsclvmd timeouts.
To avoid a gsclvmd timeout, we recommend mirroring large quantities of data outside of C-SPOC. gsclvmd has a timeout period of 5 minutes (300 seconds). When gsclvmd times out, it can cause the volume group to go offline on the secondary node, which in turn causes the C-SPOC mirror operation to fail.
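A sketch of that approach (vg1 and hdisk3 are placeholder names; run on the node that owns the volume group): create the second copy without an immediate synchronization by using the -s flag of mirrorvg, then synchronize the stale partitions as an ordinary background job, which keeps the long-running copy outside C-SPOC and gsclvmd:

```shell
# Sketch only - vg1 and hdisk3 are placeholders for your VG and new disk.
# Create the second copy but defer synchronization (-s):
mirrorvg -s -c 2 vg1 hdisk3
# Synchronize the stale partitions in the background, outside C-SPOC:
nohup syncvg -v vg1 &
```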
As discussed in (C-SPOC : "Mirror a Volume Group" can hang in PowerHA due to gsclvmd timeouts: Alternate process for prevention), there is no specific data size that causes a timeout. Copying data sets of 1 TB or larger can cause a timeout, but smaller amounts of data can also cause this condition depending on the capabilities and utilization of the systems involved.
Environment
AIX 7.1.3 or later.
PowerHA® SystemMirror® for AIX 7.1.3 or later.
Steps
The following figure shows the attributes of vg1, a small enhanced concurrent volume group, currently residing on hdisk2.
This volume group contains a single logical volume created for testing.
The following 13 steps do not require downtime. However, we recommend that you pick a time when the I/O load on the system is low.
- Work with your SAN team on correctly mapping the new disks to all cluster nodes.
- Run cfgmgr and ensure that disks have the same PVID on all nodes.
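As a quick sketch of that check (the disk names and PVIDs below are made-up placeholders), the "hdisk pvid" pairs captured on each node, for example with lspv | awk '{print $1, $2}', can be compared directly; any difference means the disks are not mapped consistently:

```shell
# Compare "hdisk pvid" pairs captured on each node.
# The names and PVIDs below are sample placeholders, not real output.
node_a='hdisk2 00f6d1a2b3c4d5e6
hdisk3 00f6d1a2b3c4d5f7'
node_b='hdisk2 00f6d1a2b3c4d5e6
hdisk3 00f6d1a2b3c4d5f7'

if [ "$node_a" = "$node_b" ]; then
  echo "PVIDs match on both nodes"
else
  echo "PVID mismatch - investigate before extending the VG"
fi
```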
- Confirm that all disks have the reservation policy set to no_reserve. Use the following commands:
# lsattr -El hdisk#
# chdev -l hdisk# -a pv=yes
# chdev -l hdisk# -a reserve_policy=no_reserve
- Add the new disks to the shared volume group by using the fast path smitty cl_extendvg.
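The reserve_policy check above can also be scripted across all candidate disks. A sketch (the disk names and attribute values are sample placeholders): the helper reads "disk policy" pairs, which could be collected on AIX with something like for d in $(lsdev -Cc disk -F name); do echo "$d $(lsattr -El $d -a reserve_policy -F value)"; done, and flags any disk that is not set to no_reserve:

```shell
# Flag disks whose reserve_policy is not no_reserve.
# Input is "disk policy" pairs, one per line.
check_reserve() {
  awk '$2 != "no_reserve" { print $1 ": reserve_policy=" $2 }'
}

# Demo on captured sample output (hypothetical disk names):
printf 'hdisk2 no_reserve\nhdisk3 single_path\n' | check_reserve
```

Any disk the helper prints still needs the chdev command from the step above.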
- Pick the volume group you want to extend and press Enter.

- Next, pick the disk you need to add to the volume group and press Enter.

- The system shows the selection made, as seen in the following figure. Press Enter to confirm adding the disk to the volume group.

- Or you can use the command line:
# /usr/es/sbin/cluster/cspoc/cli_extendvg vgname hdisk#
- Run the command lsvg -p vg_name to verify that the new disk is added. Note that all of the PPs on the new disk are free.
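That verification can be automated against the captured lsvg -p output. A sketch (the sample output below is illustrative; vg1 and hdisk3 follow this guide's naming): the newly added disk should show FREE PPs equal to TOTAL PPs:

```shell
# Check captured `lsvg -p` output: the newly added disk (hdisk3 in this guide)
# should show FREE PPs equal to TOTAL PPs. Sample output for illustration only.
lsvg_p_out='vg1:
PV_NAME           PV STATE          TOTAL PPs   FREE PPs    FREE DISTRIBUTION
hdisk2            active            127         117         26..16..25..25..25
hdisk3            active            127         127         26..25..25..25..26'

echo "$lsvg_p_out" | awk '$1 == "hdisk3" {
  if ($3 == $4) print "hdisk3: all PPs free"
  else          print "hdisk3: PPs already allocated"
}'
```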

- Stop cluster services on the secondary node with the bring resource groups offline option. When PowerHA is stopped, the shared volume groups are varied off. Now export the volume group on the secondary node.
Commands to use
# smitty clstop
# lsvg -o
# exportvg vgname
- Mirror the volume group to the new disk by using the command:
# mirrorvg -c 2 vgname hdisk3
- Use the following commands to verify the LVM mirror consistency. In this guide, vg1 is mirrored to hdisk3 and 2 copies are created, as seen in the following figure:
# lsvg -l vg_name
# lslv -m lv_name
# lslv lv_name
- Use the splitvg command to split the mirror copy of the volume group off. The command generates a snapshot volume group on the old disk when you specify copy number 1 by using the -c flag. The result is that the vg1 volume group stops accessing hdisk2.
NOTE: The newly created snapshot volume group is not part of the PowerHA managed resources.
Use the -y flag to specify a name for the snapshot volume group; otherwise, a name is generated automatically.
Use the -i flag to create independent copies of the volume group.
Note: The command can be executed while the volume group is online.
In this test, I chose to split copy 1 that is on hdisk2 to be the snapshot volume group vg1bkp, and keep the original vg1 on hdisk3 in the new storage array. The lslv -m command shows partitions listed under PV1, PV2, and PV3. The PV number correlates to the copy number used with the -c option.
# lslv -m lvname
# splitvg -y <newvg> -c 1 -i vgname
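Reading the PV1/PV2 columns by eye works for a small logical volume, but the mapping can also be extracted from captured lslv -m output. A sketch (the sample output and disk names below are illustrative, matching this guide's layout) that lists which disk holds each mirror copy, to confirm the copy number to pass to splitvg -c:

```shell
# Map mirror copies to disks from captured `lslv -m` output (sample shown),
# to confirm which copy number (-c) sits on the old storage before splitvg.
lslv_m_out='testlv:N/A
LP    PP1  PV1               PP2  PV2
0001  0130 hdisk2            0130 hdisk3
0002  0131 hdisk2            0131 hdisk3'

echo "$lslv_m_out" | awk 'NR > 2 { copy1[$3]; copy2[$5] }
END {
  for (p in copy1) print "copy 1 on " p
  for (p in copy2) print "copy 2 on " p
}'
```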
- After splitting the enhanced concurrent VG, the new snapshot VG must be varied on manually. To vary it on in nonconcurrent mode:
# varyonvg vg1bkp
In this test, I varied on the volume group in nonconcurrent mode.

- Now import the production volume group on the secondary node.
# importvg -V "major number" -y vgname hdisk#
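The major number to pass to importvg -V can be read from the device special file of the volume group on the primary node, for example from ls -l /dev/vgname. A sketch (the ls -l line below is a made-up sample; the real major number will differ) that extracts it and builds the import command:

```shell
# Extract the VG major number from `ls -l /dev/vgname` output captured on the
# primary node (sample line below), to pass to importvg -V on the secondary.
dev_line='crw-rw----    1 root     system       38,  0 Nov 30 10:00 /dev/vg1'

major=$(echo "$dev_line" | awk '{ sub(",", "", $5); print $5 }')
echo "importvg -V $major -y vg1 hdisk3"
```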

- Start the cluster services on the secondary node and PowerHA varies on the volume group with the correct permissions.
- Finally, before you remove the old disks on the SAN side, remove them from the AIX side. Use the following commands on all nodes, where vgname would be vg1bkp in the previous example for the snapshot volume group created by the splitvg command.
# exportvg vgname
# rmdev -dl hdisk#
Additional Information
Splitvg command
https://www.ibm.com/docs/en/aix/7.2?topic=s-splitvg-command
Document Location
Worldwide
Document Information
Modified date:
30 November 2021
UID
ibm16514853


