IBM PowerHA SystemMirror for AIX Supports IBM VIO Server V2.2 Shared Storage Pool
December 10, 2010
IBM* PowerHA SystemMirror for AIX*, Versions 5.5, 6.1, and 7.1 extends support to include IBM Virtual I/O Server (VIOS) V2.2 Shared Storage Pool virtual SCSI, virtual Ethernet, and NPIV devices on all PowerHA supported IBM POWER6* and POWER7* processor-based servers along with IBM Blades.
Please refer to the following table for support details.
AIX Client LPARs Running PowerHA

| AIX Client LPARs | Running PowerHA |
| --- | --- |
| AIX V5.3 TL12, AIX 6.1 TL3 | PowerHA V5.5 APAR IZ84440 |
| AIX V5.3 TL12, AIX V6.1 TL05 | PowerHA SystemMirror V6.1 SP3 |
| AIX V6.1 TL06 | PowerHA SystemMirror V7.1 |
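For illustration, the support matrix can be encoded as a small lookup — a sketch only, assuming the AIX/PowerHA pairings shown above; the structure and names are hypothetical and not part of any PowerHA tooling:

```python
# Hypothetical encoding of the support table above; the version
# strings mirror the table rows, but this helper is illustrative only.
SUPPORT_MATRIX = {
    "PowerHA V5.5 APAR IZ84440": {"AIX V5.3 TL12", "AIX 6.1 TL3"},
    "PowerHA SystemMirror V6.1 SP3": {"AIX V5.3 TL12", "AIX V6.1 TL05"},
    "PowerHA SystemMirror V7.1": {"AIX V6.1 TL06"},
}

def is_supported(powerha_level: str, aix_level: str) -> bool:
    """Return True if the AIX client LPAR level is listed for this PowerHA level."""
    return aix_level in SUPPORT_MATRIX.get(powerha_level, set())
```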
PowerHA XD GLVM is supported with VIOS.
PowerHA 7.1 supports cluster communication between hosts over the SAN connections that exist between them. More information about SAN communication can be found in the PowerHA 7.1 documentation. With the release of VIOS 2.2.0.11-FP24 SP01, SAN communication can be established for PowerHA LPARs/nodes even when the storage adapters are virtualized through VIOS. SAN communication is enabled by establishing a VLAN between the VIOS client and the VIOS (setup instructions are available in the AIX documentation). SAN communication through VIOS can be set up for both NPIV and vSCSI environments.
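The VLAN-based setup described above can be sketched as follows. This is an illustrative command sequence, not taken from the announcement itself: the adapter name (fcs0) is a placeholder, and VLAN ID 3358 is the ID reserved for PowerHA SAN communication; consult the PowerHA 7.1 and VIOS documentation for the authoritative steps.

```shell
# On each VIOS (padmin shell): enable target mode on the physical FC
# adapter that will carry SAN communication (fcs0 is a placeholder);
# -perm defers the change until the adapter is reconfigured or the
# VIOS is rebooted.
chdev -dev fcs0 -attr tme=yes -perm

# On the HMC: add a virtual Ethernet adapter with VLAN ID 3358 to the
# VIOS partition profile and to each client LPAR profile.

# On each client LPAR: verify that the SAN communication device is
# present and available.
lsdev -C | grep sfwcomm
```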
PowerHA 7.1 is fully supported with all previously supported levels of IBM VIO Server. However, those levels do not provide support for SANCOM, which requires features only available in VIO Server 2.2.
PowerHA and Virtual SCSI/NPIV
The volume group must be defined as “Enhanced Concurrent Mode.” In general, Enhanced Concurrent Mode is the recommended mode for sharing volume groups in PowerHA clusters because volumes are accessible by multiple PowerHA nodes, resulting in faster fallover in the event of a node failure. If file systems are used on the standby nodes, they are not mounted until the point of fallover so accidental use of data on standby nodes is impossible. If shared volumes are accessed directly (without file systems) in Enhanced Concurrent Mode, these volumes are accessible from multiple nodes so access must be controlled at a higher layer such as databases.
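Creating such a volume group can be sketched with standard AIX commands. This is illustrative only; the disk and volume group names are placeholders.

```shell
# From one PowerHA node: create an Enhanced Concurrent Mode volume
# group on the shared disk (-C = enhanced concurrent capable,
# -n = do not vary on automatically at boot; PowerHA controls varyon).
mkvg -n -C -y sharedvg hdisk2

# Verify: lsvg should report the VG as "Concurrent: Enhanced-Capable".
lsvg sharedvg
```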
If any cluster node accesses shared volumes through virtual SCSI provided by IBM VIO Server V2.1, or by IBM VIO Server V2.2 without NPIV, all nodes must do so. This means that disks cannot be shared between an LPAR using virtual SCSI provided by IBM VIO Server V2.1 (or V2.2 without NPIV) and a node directly accessing those disks. Disks can, however, be shared between an LPAR using NPIV provided by IBM VIO Server V2.1 or V2.2 and a node directly accessing those disks.
From the point of view of the VIO Server, physical disks (hdisks) are shared, not logical volumes or volume groups. All volume group construction and maintenance on these shared disks is done from the PowerHA nodes, not from the VIO Server.
PowerHA and Virtual Ethernet
IP Address Takeover (IPAT) via Aliasing must be used. IPAT via Replacement and MAC Address Takeover are not supported. In general, IPAT via Aliasing is recommended for all PowerHA networks that can support it.
PowerHA’s “PCI Hot Plug” facility cannot be used. When a PowerHA node is using Virtual I/O, that facility is not meaningful because the node’s I/O adapters are virtual rather than physical; PCI Hot Plug operations remain available through the VIO Server.
All Virtual Ethernet interfaces defined to PowerHA should be treated as “single-adapter networks” as described in the PowerHA Planning and Installation Guide. In particular, the netmon.cf facility must be used to monitor and detect failures of the network interfaces. The netmon.cf file should contain a list of IP addresses chosen so that a successful ping requires the physical interfaces to be up. These addresses must be preceded by "!REQD". See the text of APAR IZ01331, available from techlink.austin.ibm.com, for further details. Due to the nature of Virtual Ethernet, other mechanisms to detect the failure of network interfaces are not effective.
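A netmon.cf along these lines might look as follows. This is a hypothetical example assuming the !REQD entry format described in APAR IZ01331; the interface name and target addresses are placeholders chosen for illustration.

```shell
# /usr/es/sbin/cluster/netmon.cf
# Each !REQD line tells netmon that interface en0 is considered up
# only if the listed target responds to a ping, forcing the check
# through the physical network rather than the internal virtual switch.
!REQD en0 192.168.1.1
!REQD en0 192.168.1.2
```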
Further configuration-dependent attributes of PowerHA with Virtual Ethernet
If the VIO Server has multiple physical interfaces on the same network, or if two or more PowerHA nodes use one or more VIO Servers in the same frame, PowerHA is not informed of (and hence does not react to) individual physical interface failures. This does not normally limit the availability of the cluster, because the VIOS itself routes traffic around individual failures; in this regard the VIOS support is analogous to EtherChannel. Even in the extreme case where all physical interfaces managed by the VIO Servers have failed, the VIOS continues to route traffic from one LPAR to another in the same frame, so the virtual Ethernet interface used by PowerHA is not reported as failed and PowerHA does not react. Other mechanisms (based on the VIO Server rather than PowerHA) must be used to provide notification of individual adapter failures.
If the VIO Server has only a single physical interface on a network then a failure of that physical interface will be detected by PowerHA. However, that failure will isolate the node from the network.
Although some of these may be viewed as configuration restrictions, many are direct consequences of I/O virtualization.
All support and restrictions associated with Virtual Ethernet provided by IBM VIO Server V2.2 apply to the corresponding level of Integrated Virtual Ethernet (IVE).
Service can be obtained from the IBM Electronic Fix Distribution site at:
For questions or concerns, please send a note to HA Feedback at:
HA Solutions Feedback/Poughkeepsie/IBM or email@example.com
* Trademark or registered trademark of International Business Machines Corporation.
Other company, product, and service names may be trademarks or service marks of others.
Original Published Date
19 October 2021