- IBM PowerHA SystemMirror
- Hitachi Storage system
- PowerHA SystemMirror interaction with Hitachi
- Steps to install and configure Hitachi/PowerHA XD on a two-node cluster
- Installing Hitachi CCI software for AIX
- Configuring Hitachi mirroring and AIX
- Configuring PowerHA SystemMirror to monitor and manage the Hitachi mirror pools
- Downloadable resources
Hitachi TrueCopy mirroring with IBM PowerHA SystemMirror
IBM PowerHA SystemMirror
IBM® PowerHA® SystemMirror® is a high availability and disaster recovery solution to provide near-continuous resource availability through both planned and unplanned outages. The resources can be data, applications, network interfaces, and so on.
PowerHA SystemMirror can work with different storage systems (for example, IBM System Storage® DS8000®, IBM XIV® Storage System®, EMC, and Hitachi), enabling disaster recovery and high availability. This article explains how to configure the Hitachi Storage system to work with PowerHA SystemMirror.
Hitachi Storage system
The Hitachi Storage system supports short-distance replication through TrueCopy synchronous replication and long-distance replication through asynchronous replication with the Hitachi Universal Replicator (HUR) technology. This article deals with Hitachi TrueCopy configuration along with PowerHA SystemMirror.
- Is a storage-based replication solution
- Allows synchronous mode of operation, in which:
- Primary array responds to host writes only after target array acknowledges that it has received and checked the data
- Source and target devices contain identical copies
PowerHA SystemMirror interaction with Hitachi
PowerHA SystemMirror supports high availability disaster recovery (HADR) for storage from the Hitachi Storage systems.
Mirroring requires two or more Hitachi Storage systems. The source and target of the mirroring can reside on the same site and form a local mirroring, or the source and target can reside on different sites and enable a disaster recovery plan.
Hitachi TrueCopy operations involve the primary (main) subsystems and the secondary (remote) subsystems. The primary subsystems contain the TrueCopy primary volumes (P-VOLs), which are the original data volumes. The secondary subsystems contain the TrueCopy secondary volumes (S-VOLs), which are the synchronous or asynchronous copies of the P-VOLs.
Hitachi enables a set of remote mirrors to be grouped into a device group. A device group is a set of related volumes on the same Hitachi system that is treated as a single consistent unit. When using mirroring, the device groups handle many remote mirror pairs as a group to make mirrored volumes consistent.
Figure 1: Two-site cluster configuration with Storage Mirroring
As shown in Figure 1, we have two sites. Site here denotes a group of nodes placed together in the same geographic location. PowerHA SystemMirror currently supports two sites, that is, one set of nodes present in one site and another set of nodes present on another site, but both are part of the same cluster. Here, both locations can be nearby or geographically separated.
In case of any complete site failure because of a disaster such as fire or earthquake, PowerHA SystemMirror moves the resources to the other site. Because data is mirrored across the Hitachi Storage systems in both sites and is consistent, the resources will be up and running in the other site, thereby maintaining high availability.
Make sure that the following prerequisites are fulfilled for configuring PowerHA SystemMirror with Hitachi Storage:
- PowerHA SystemMirror 6.1 (Enterprise Edition)
- PowerHA 6.1 SP3 or later
- IBM AIX® and RSCT requirements
- AIX 6.1 TL4 or later
- RSCT V18.104.22.168 or later
- AIX 7.1 or later
- RSCT V22.214.171.124 or later
- Hitachi USPV/VM Storage
- Firmware software bundle V60-06-05/00 or later
- Hitachi Command Control Interface (CCI) version 01-23-03/06 or higher (must be installed on all PowerHA nodes)
Our test setup includes the following software versions:
- PowerHA SystemMirror version 7.1.3 SP2
- Hitachi Command Control Interface (CCI) version 01-23-03/06
Logical unit numbers (LUNs) are assigned to nodes using the Hitachi Storage Navigator.
Steps to install and configure Hitachi/PowerHA XD on a two-node cluster
PowerHA SystemMirror Enterprise Edition enablement for high availability and disaster recovery (HADR) of Hitachi mirrored storage involves a two-step procedure.
Step 1: Install and configure the Hitachi/PowerHA XD
Step 2: Use PowerHA SystemMirror Enterprise Edition interfaces to discover the deployed storage devices and define the HA policies.
Installing Hitachi CCI software for AIX
The following steps need to be performed on all test nodes of the cluster.
- Copy the RMHORC file to the root directory (/).
- Run the following command:
cpio -idmu < RMHORC
A directory named HORCM will be created.
- Go to the HORCM directory and run the horcminstall.sh file to install the Hitachi CLI.
- Verify that the Hitachi software is installed properly using the following command and note the version information that is displayed.
# raidqry -h
Model: RAID-Manager/AIX
Ver&Rev: 01-23-03/06
Usage: raidqry [options] for HORC
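As a quick sanity check on each node, a small sketch like the following can report the installed CCI version. The `cci_version` helper is hypothetical (not part of the CCI); it simply parses the `Ver&Rev:` line shown in the output above.

```shell
#!/bin/sh
# Sketch: report the installed Hitachi CCI (RAID Manager) version.
# Assumes /HORCM/usr/bin holds the CCI binaries, as installed above.
PATH=/HORCM/usr/bin:$PATH
export PATH

# Hypothetical helper: extract the version from "raidqry -h" output,
# which contains a line such as: Ver&Rev: 01-23-03/06
cci_version() {
    sed -n 's/.*Ver&Rev:[ ]*//p'
}

if command -v raidqry >/dev/null 2>&1; then
    echo "CCI version: $(raidqry -h 2>&1 | cci_version)"
else
    echo "Hitachi CCI not found - run horcminstall.sh first" >&2
fi
```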
Configuring Hitachi mirroring and AIX
The Hitachi system offers both GUI and CLI commands to configure and monitor the system. This section explains how to create Hitachi mirror pairs using the GUI.
Following are the details of the cluster nodes and the corresponding IPs for a test environment.
- node1 <IP1>
- node2 <IP2>
- node3 <IP3>
- node4 <IP4>
Site A is the primary site (also known as the production site) and it includes the nodes: node1 and node2. It also includes the primary storage, Hitachi1.
Site B is the secondary site (also known as the recovery site) and it includes the nodes: node3 and node4. It also includes the secondary storage, Hitachi2.
- Hitachi1: Primary storage (Hitachi_IP1), also referred to as the master.
- Hitachi2: Secondary storage (Hitachi_IP2), also referred to as the slave.
The Hitachi disks must be shared among all the nodes of the same site.
Perform the following steps to configure Hitachi mirroring and AIX:
- Make sure that the Hitachi disks exist on your systems. Run the following command on all nodes from the /HORCM/usr/bin directory.
lsdev -Cc disk | grep hdisk | inqraid
A list of hdisks will be displayed. Search for the CM (command device) parameter. For each node, there will be one disk with this parameter; that disk will be used as COMMAND_DEV later in the horcm.conf file.
For example, in the following output, hdisk11 is configured as a command device for this node.
hdisk11 -> [SQ] CL4-C Ser = 53105 LDEV =40961 [HITACHI ] [OPEN-V-CM]
RAID5[Group 1- 2] SSID = 0x00A4
In the following output, the two disks, hdisk2 and hdisk7 are also Hitachi disks which will be used in Hitachi TrueCopy relationships.
hdisk2 -> [SQ] CL4-C Ser = 53105 LDEV = 2 [HITACHI ] [OPEN-V]
          HORC = S-VOL HOMRCF[MU#0 = SMPL MU#1 = SMPL MU#2 = SMPL]
          RAID5[Group 1- 1] SSID = 0x0004
hdisk7 -> [SQ] CL4-C Ser = 53105 LDEV = 64 [HITACHI ] [OPEN-V]
          HORC = P-VOL HOMRCF[MU#0 = SMPL MU#1 = SMPL MU#2 = SMPL]
          RAID5[Group 1- 1] SSID = 0x0004
Here, S-VOL is the secondary volume and P-VOL is the primary volume.
From this step, we can identify the Hitachi Command Device and mirroring disks.
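The identification step above can be sketched as a small script. The `find_cmddev` helper is hypothetical; it relies on the fact, shown in the sample output, that the command device is reported with the OPEN-V-CM product ID.

```shell
#!/bin/sh
# Sketch: pick out the command device from inqraid output. The command
# device line carries the OPEN-V-CM product ID, as in the sample above.

# Hypothetical helper: given inqraid output on stdin, print the
# hdisk name(s) flagged as command devices (OPEN-V-CM).
find_cmddev() {
    awk '/OPEN-V-CM/ { print $1 }'
}

if command -v inqraid >/dev/null 2>&1; then
    lsdev -Cc disk | grep hdisk | inqraid | find_cmddev
fi
```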
- Create a HORCM instance.
Go to the /HORCM/usr/bin directory. Use mkconf.sh to create an instance. Here, 0 is the instance number.
Note: You can use any number. If you use 0, the file name will be horcm0.conf, and so on.
./mkconf.sh -i 0
This creates the horcm0.conf file in the same directory. Move the horcm0.conf file to /etc/. Repeat this step on all the nodes.
Note: The command might hang, but it will still create the file. If it hangs, press Ctrl+C to exit. A soft link, horcmgr, pointing to the file /HORCM/etc/horcmgr should be present in /etc. If it is not present, create the soft link and then start the horcm services using the horcmstart.sh 0 command in /HORCM/usr/bin.
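The instance-creation steps above can be sketched as follows. The `conf_path` helper and the guard checks are illustrative additions, not part of the CCI; paths and the instance number 0 are as described in the text.

```shell
#!/bin/sh
# Sketch of the HORCM instance setup described above. Run on every
# node; assumes the CCI is installed under /HORCM as in the text.
INST=0

# Hypothetical helper: configuration file path for an instance number.
conf_path() {
    echo "/etc/horcm$1.conf"
}

if [ -d /HORCM/usr/bin ]; then
    cd /HORCM/usr/bin

    # Create the configuration skeleton (mkconf.sh may appear to hang;
    # Ctrl+C is safe - the file is still created).
    if [ ! -f "$(conf_path $INST)" ]; then
        ./mkconf.sh -i $INST
        mv horcm$INST.conf /etc/
    fi

    # horcmstart.sh expects /etc/horcmgr to point at the CCI manager.
    [ -e /etc/horcmgr ] || ln -s /HORCM/etc/horcmgr /etc/horcmgr

    # Start the instance once /etc/horcm0.conf has been edited.
    ./horcmstart.sh $INST
fi
```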
- Configure the horcm0.conf file.
Sample horcm.conf file from Site A:
# Created by mkconf.sh on Thu Jan 22 03:32:09 CST 2015
HORCM_MON
#ip_address     service     poll(10ms)  timeout(10ms)
node1 IP        52323       1000        3000

HORCM_CMD
#dev_name       dev_name    dev_name
#UnitID 0 (Serial# 53105)
/dev/rhdisk9

HORCM_LDEV
#dev_group      dev_name    Serial#     CU:LDEV(LDEV#)  MU#
DVGP1           DVNM1       53105       548             0

HORCM_INST
#dev_group      ip_address  service
DVGP1           node3 IP    52323
Sample horcm.conf file from Site B:
# Created by mkconf.sh on Thu Jan 22 03:34:37 CST 2015
HORCM_MON
#ip_address     service     poll(10ms)  timeout(10ms)
node3 IP        52323       1000        3000

HORCM_CMD
#dev_name       dev_name    dev_name
#UnitID 0 (Serial# 53105)
/dev/rhdisk5

HORCM_LDEV
#dev_group      dev_name    Serial#     CU:LDEV(LDEV#)  MU#
DVGP1           DVNM1       53105       553             0

HORCM_INST
#dev_group      ip_address  service
DVGP1           node1 IP    52323
The content of the horcm0.conf file should be the same for all the nodes of a specific site.
Edit the horcm0.conf file on all the nodes accordingly and start the horcm instance using horcmstart.sh 0 in the /HORCM/usr/bin directory.
The following table explains the terms used in the horcm.conf file.
Table 1: Terms used in the horcm.conf file
- HORCM_MON: Under ip_address, enter the node's IP address (all other fields are the same as shown in the sample file).
- HORCM_CMD: Under this entry, use the hdisk that we selected earlier as having the CM (command device) parameter, specified as a raw device (for example, /dev/rhdisk9 as in the sample file).
- HORCM_LDEV: Under this entry, enter a user-defined device group name, such as DVGP1, and a user-defined device name, such as DVNM1. One group can contain multiple devices with different device names, each device name representing a mirror. The device group name and the device name should be the same across the clusters when creating the horcm.conf file for all nodes.
- CU:LDEV(LDEV#): Run the lsdev -Cc disk | inqraid command to get the LDEV number of the disk that will be used for the user-defined device name, such as DVNM1.
- HORCM_INST: Under this entry, enter the device group name and the IP address of the remote node on the other site. The device group name is the same for all nodes of all sites.
- Create the pairs.
Stop the horcm0 instance by running the following command on all the nodes.
horcmshutdown.sh 0
For pair creation, we will use Hitachi Storage Navigator. Connect to it using the Hitachi Storage Navigator IP address and login credentials.
- On the home page, click Actions → Remote Copy → TrueCopy → PairOperation. Figure 2: Pair Operation
- Start the servlet using Java Web Start Launcher. Figure 3: Java Web Start Launcher
- Notice the TrueCopy Pair Operation page that is displayed. Figure 4: Storage Subsystem list
- Select the subsystem (port) that the disks for the nodes come from. In this example, it is CL4-C. We can get this information using the following command (run it on all nodes):
lsdev -Cc disk | inqraid
Sample output:
hdisk2 -> [SQ] CL4-C Ser = 53105 LDEV = 2 [HITACHI ] ...
In the output, we can see CL4-C. Figure 5: Hitachi disk list for CL4-C
- Change the view mode to edit mode as shown in the following figures. Figure 6: View mode Figure 7: Edit mode
- Select the disk of the primary site and create a pair with the disk of the secondary site.
- Perform the following steps to select the disks of the primary site nodes and their corresponding disks at the secondary site.
From node 1 of Site A:
# lsdev -Cc disk | grep hdisk | inqraid
hdisk0  -> NOT supported INQ. [IBM] [IC35L073UCDY10-0]
hdisk1  -> NOT supported INQ. [IBM] [IC35L073UCDY10-0]
hdisk2  -> NOT supported INQ. [IBM] [IC35L073UCDY10-0]
hdisk3  -> NOT supported INQ. [IBM] [IC35L073UCDY10-0]
hdisk4  -> NOT supported INQ. [IBM] [IC35L073UCDY10-0]
hdisk5  -> NOT supported INQ. [IBM] [IC35L073UCDY10-0]
hdisk6  -> [SQ] CL4-C Ser = 53105 LDEV = 63 [HITACHI ] [OPEN-V]
           HORC = SMPL HOMRCF[MU#0 = SMPL MU#1 = SMPL MU#2 = SMPL]
           RAID5[Group 1- 1] SSID = 0x0004
hdisk7  -> [SQ] CL4-C Ser = 53105 LDEV = 64 [HITACHI ] [OPEN-V]
           HORC = SMPL HOMRCF[MU#0 = SMPL MU#1 = SMPL MU#2 = SMPL]
           RAID5[Group 1- 1] SSID = 0x0004
hdisk8  -> [SQ] CL4-C Ser = 53105 LDEV = 65 [HITACHI ] [OPEN-V]
           HORC = SMPL HOMRCF[MU#0 = SMPL MU#1 = SMPL MU#2 = SMPL]
           RAID5[Group 1- 1] SSID = 0x0004
hdisk9  -> [SQ] CL4-C Ser = 53105 LDEV = 66 [HITACHI ] [OPEN-V]
           HORC = P-VOL HOMRCF[MU#0 = SMPL MU#1 = SMPL MU#2 = SMPL]
           RAID5[Group 1- 1] SSID = 0x0004
hdisk10 -> [SQ] CL4-C Ser = 53105 LDEV = 67 [HITACHI ] [OPEN-V]
           HORC = P-VOL HOMRCF[MU#0 = SMPL MU#1 = SMPL MU#2 = SMPL]
           RAID5[Group 1- 1] SSID = 0x0004
hdisk11 -> [SQ] CL4-C Ser = 53105 LDEV = 68 [HITACHI ] [OPEN-V]
           HORC = P-VOL HOMRCF[MU#0 = SMPL MU#1 = SMPL MU#2 = SMPL]
           RAID5[Group 1- 1] SSID = 0x0004
hdisk12 -> [SQ] CL4-C Ser = 53105 LDEV = 69 [HITACHI ] [OPEN-V]
           HORC = SMPL HOMRCF[MU#0 = SMPL MU#1 = SMPL MU#2 = SMPL]
           RAID5[Group 1- 1] SSID = 0x0004
hdisk13 -> [SQ] CL4-C Ser = 53105 LDEV = 70 [HITACHI ] [OPEN-V]
           HORC = P-VOL HOMRCF[MU#0 = SMPL MU#1 = SMPL MU#2 = SMPL]
           RAID5[Group 1- 1] SSID = 0x0004
hdisk14 -> [SQ] CL4-C Ser = 53105 LDEV = 71 [HITACHI ] [OPEN-V]
           HORC = P-VOL HOMRCF[MU#0 = SMPL MU#1 = SMPL MU#2 = SMPL]
           RAID5[Group 1- 1] SSID = 0x0004
hdisk15 -> [SQ] CL4-C Ser = 53105 LDEV = 72 [HITACHI ] [OPEN-V]
           HORC = SMPL HOMRCF[MU#0 = SMPL MU#1 = SMPL MU#2 = SMPL]
           RAID5[Group 1- 1] SSID = 0x0004
hdisk16 -> [SQ] CL4-C Ser = 53105 LDEV =40968 [HITACHI ] [OPEN-V-CM]
           RAID5[Group 1- 2] SSID = 0x00A4
From the output, it can be seen that hdisk6 to hdisk16 are Hitachi disks. Figure 8: Hitachi disk list for CL4-C
On the Storage Navigator page shown in Figure 8, we can see disks from CL4-C-02-000 onwards. So, the mapping is as follows:
- hdisk6 → CL4-C-02-000
- hdisk7 → CL4-C-02-001
- hdisk8 → CL4-C-02-002
- hdisk9 → CL4-C-02-003
The same applies to node3 of Site B. Run the same command there to get the list of Hitachi hdisks. In the Storage Navigator GUI, select node3 of Site B, check the disks under it, and apply the same sequential mapping.
For example, on the Storage Navigator page, under CL4-C, click node1 (which in this example is 02:love2_russel).
On the right pane, right-click CL4-C-02-001 and then click Paircreate → Synchronous.Figure 9: Synchronous option Figure 10: Paircreate option
P-VOL is the one we selected from node1 of Site A. We now have to select the S-VOL, which is the secondary volume, from node3 of Site B.
S-VOL has three fields.
- The first field is the same as that of P-VOL (as per Figure 9, it is CL4-C).
- The second field is the index of the Site B disk list label. For example, as shown in Figure 11, 01:r1r4-eth_parseghia is the Site B disk list label, so the index here is 01. Figure 11: LPAR list under CL4-C
- The third field is the same as the third field of P-VOL (as per Figure 10, it is 000).
We can create a pair when the disk is in the SMPL (simplex) state. If it is already in the PAIR state, it can be reused provided it is not in use elsewhere. If it is in the PSUS state or any other state, change it to the SMPL state by deleting the pair; the Pairsplit -S option is displayed when you right-click the disk. Figure 12: Disk status page after creating pairs
Click Apply to create a TrueCopy pair.
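As an alternative to the Storage Navigator GUI, the CCI can create pairs from the command line. The sketch below is illustrative (not the procedure this article follows); the `paircreate_args` helper is a hypothetical convenience, and DVGP1 and instance 0 are the example names from this article. The `-f never` option matches the NEVER fence level seen in the pairdisplay output later in the text.

```shell
#!/bin/sh
# Sketch: CLI pair creation with the CCI, as an alternative to the
# GUI steps above. Run from a Site A node after configuring horcm0.
GROUP=DVGP1

# Hypothetical helper: assemble the paircreate options used here.
# -vl makes the local node's volume the P-VOL.
paircreate_args() {
    printf '%s\n' "-g $1 -f never -vl -IH0"
}

if command -v paircreate >/dev/null 2>&1; then
    paircreate $(paircreate_args $GROUP)
    # Block until the pair reaches the PAIR state (10-minute timeout).
    pairevtwait -g $GROUP -s pair -t 600 -IH0
fi
```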
- Verify that pair creation is successful.
Start the horcm0 instance by running horcmstart.sh 0 (remember to start it on all nodes).
To verify that the pairs are created properly, and that the horcm0 instance of the other node is accessible from the current node, run the following commands on all the nodes.
The following command checks pair status on the local node.
pairvolchk -ss -g DVGP1 -IH0
pairvolchk : Volstat is P-VOL.[status = PAIR fence = NEVER MINAP = 2 ]
The following command checks pair status on the remote node.
pairvolchk -c -ss -g DVGP1 -IH0
pairvolchk : Volstat is P-VOL.[status = PAIR fence = NEVER MINAP = 2 ]
The following command displays the pair status of a device group.
pairdisplay -g DVGP1 -IH0
Group PairVol(L/R) (Port#,TID, LU),Seq#,LDEV#.P/S,Status,Fence,Seq#,P-LDEV# M
DVGP1 DVNM1(L) (CL2-C-3, 0, 1)53105 548.P-VOL PAIR NEVER ,53105 553 -
DVGP1 DVNM1(R) (CL4-C-4, 0, 1)53105 553.S-VOL PAIR NEVER ,----- 548 -
All three commands should work and give proper output as shown above. Figure 13: Disk status after creating pairs
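The verification above can be wrapped in a small sketch that checks both sides of the pair. The `is_paired` helper is a hypothetical convenience that greps the pairvolchk output shown above for the PAIR status; DVGP1 and instance 0 are the example names from this article.

```shell
#!/bin/sh
# Sketch: confirm that a device group's mirror is in the PAIR state
# on both the local and remote side, using the commands shown above.
GROUP=DVGP1

# Hypothetical helper: succeed if pairvolchk output reports
# "status = PAIR", as in the sample output above.
is_paired() {
    grep -q 'status = PAIR'
}

if command -v pairvolchk >/dev/null 2>&1; then
    if pairvolchk -ss -g $GROUP -IH0 | is_paired &&
       pairvolchk -c -ss -g $GROUP -IH0 | is_paired; then
        echo "$GROUP: both sides report PAIR"
    else
        echo "$GROUP: pair not (yet) established" >&2
    fi
fi
```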
- Ensure that the same PVID is available for the Hitachi disks across all nodes.
After the pairs are created successfully, we have to ensure that all the nodes of the cluster have the same PVID for the disks for which a pair is created. To do this, run the following commands on all Site B nodes.
rmdev -dl hdisk#
cfgmgr
This will make sure that the Site B node disk has the same PVID as that of the Site A node disk.
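The PVID refresh above can be sketched per disk as follows. hdisk7 is a placeholder for the paired Hitachi disk on your node, and the `pvid_of` helper is a hypothetical convenience that reads the PVID column from lspv output.

```shell
#!/bin/sh
# Sketch: refresh a disk definition on a Site B node so that AIX
# re-reads the PVID written through the mirror. hdisk7 is a
# placeholder - substitute the paired Hitachi disk on your node.
DISK=hdisk7

# Hypothetical helper: print the PVID column of lspv output
# for the given disk name.
pvid_of() {
    awk -v d="$1" '$1 == d { print $2 }'
}

if lsdev -Cc disk 2>/dev/null | grep -qw $DISK; then
    rmdev -dl $DISK   # delete the stale device definition
    cfgmgr            # rediscover; the PVID is read back from the LUN
    echo "$DISK PVID: $(lspv | pvid_of $DISK)"
fi
```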
- Create volume group, logical volume, and file systems.
Perform the following steps to create a volume group (VG), a logical volume (LV), and a file system on the cluster nodes:
- Make sure that the major number is the same for the VG on all nodes. To get a free major number, run the lvlstmajor command on all the nodes and pick a number available on all of them.
- Create a volume group, say VG1, on the chosen Hitachi disk of node1 of Site A using the major number that we just got. Create a logical volume and a file system for VG1.
- Import the VG to all other nodes with same major number.
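The VG steps above can be sketched as follows. VG1, hdisk7, lv1, and /app1fs are example names; the major number 54 is a placeholder for the value agreed on via lvlstmajor, and the `common_major` helper is a hypothetical simplification (it expects one line of free major numbers per node, whereas lvlstmajor actually prints ranges).

```shell
#!/bin/sh
# Sketch of the VG/LV/file system creation and import described above.

# Hypothetical helper: given one line of free major numbers per node
# on stdin, print the lowest number that is free on all of them.
common_major() {
    awk '{ for (i = 1; i <= NF; i++) seen[$i]++ }
         END { best = -1
               for (m in seen)
                   if (seen[m] == NR && (best < 0 || m + 0 < best))
                       best = m + 0
               if (best >= 0) print best }'
}

if command -v mkvg >/dev/null 2>&1; then
    MAJOR=54   # substitute the free major number common to all nodes
    # On node1 of Site A: create the VG, an LV, and a file system.
    mkvg -V $MAJOR -y VG1 hdisk7
    mklv -y lv1 -t jfs2 VG1 10
    crfs -v jfs2 -d lv1 -m /app1fs -A no
    varyoffvg VG1
    # On all other nodes: importvg -V $MAJOR -y VG1 hdisk7
fi
```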
- Add Hitachi disks to the PowerHA SystemMirror resource groups.
At this point, we have created mirroring between the Hitachi disks and it is active. Also, VGs are created on the required disks for all the nodes on all the sites. This completes the Hitachi mirroring and its corresponding AIX configuration.
Configuring PowerHA SystemMirror to monitor and manage the Hitachi mirror pools
Perform the following steps to configure PowerHA SystemMirror to monitor and manage the Hitachi mirror pools.
Enter smit sysmirror at the shell prompt. Then, click Cluster Applications and Resources → Resources → Configure HITACHI TRUECOPY(R)/HUR Replicated Resources. Figure 14: PowerHA SystemMirror menu for Hitachi resource
- To define a Truecopy/HUR-managed replicated resource, complete the following steps:
Move to the Add Hitachi Truecopy/HUR Replicated Resource option and press Enter (or directly use smit tc_def). Figure 15: Hitachi resource Change/Show menu
Fill in the fields as shown in Figure 15.
The value in the Resource Name field can be any name of your choice.
- To add a TrueCopy replicated resource to a PowerHA SystemMirror Enterprise Edition resource group, complete the following steps:
- At the command line, enter smitty hacmp.
- In SMIT, select Cluster Applications and Resources → Resource Groups → Add a Resource Group.
- After adding a resource group, we need to add the Hitachi resource to it.
- In SMIT select, Cluster Applications and Resources → Resource Groups → Change/Show All Resources and Attributes for a Resource Group.
- Make sure that the volume groups selected on the Resource Group configuration page match the volume groups used in TrueCopy/HUR Replicated Resource.
- The TrueCopy Replicated Resources entry appears at the bottom of the page in SMIT. This entry is a pick list that displays the resource names created in the previous step.
Note: The volume group is the same VG1 that you created in the previous step. Figure 16: Resource group properties Figure 17: Resource group properties (continued) Figure 18: Resource group properties (continued)
- Verify and sync the cluster and start cluster services on all nodes.
When an error occurs in pair operations using CCI, you can identify the cause of the error by referring to the CCI log file, where * is the instance number and HOST is the host name.