POWER7 information

Configuring the system to create shared storage pools

Learn about configuring the system to create Virtual I/O Server (VIOS) shared storage pools.

Before you create shared storage pools, ensure that all logical partitions are preconfigured by using the Hardware Management Console (HMC) as described in this topic. The following are the maximum supported name lengths, in characters:
  • Cluster: 63
  • Storage pool: 127
  • Failure group: 63
  • Logical unit: 127

Configuring the VIOS logical partitions

Configure the VIOS logical partitions (a cluster can contain up to 16 of them) with the following characteristics:
  • Each logical partition must have at least one CPU and one physical CPU of entitlement.
  • The logical partitions must be configured as VIOS logical partitions.
  • The logical partitions must consist of at least 4 GB of memory.
  • The logical partitions must consist of at least one physical Fibre Channel adapter.
  • The rootvg device for a VIOS logical partition cannot be included in storage pool provisioning.
  • The associated rootvg device must be installed with VIOS Version 2.2.2.0, or later.
  • The VIOS logical partition must be configured with a sufficient number of virtual server Small Computer System Interface (SCSI) adapter connections for the client logical partitions.
  • The VIOS logical partitions in the cluster require access to all the SAN-based physical volumes in the shared storage pool of the cluster.
Each VIOS logical partition must have a network connection either through an Integrated Virtual Ethernet adapter or through a physical adapter. On VIOS Version 2.2.2.0, or later, clusters support virtual local area network (VLAN) tagging.
Note: In shared storage pools, the Shared Ethernet Adapter must be in threaded mode. For more information, see Network attributes.
Restriction: You cannot use the logical units in a cluster as paging devices for PowerVM® Active Memory™ Sharing or Suspend/Resume features.

Configuring client logical partitions

Configure client logical partitions with the following characteristics:

  • The client logical partitions must be configured as AIX® or Linux client systems.
  • The client logical partitions must have at least 1 GB of memory.
  • The associated rootvg device must be installed with the appropriate AIX or Linux system software.
  • Each client logical partition must be configured with a sufficient number of virtual SCSI adapter connections to map to the virtual server SCSI adapter connections of the required VIOS logical partitions.

You can define more client logical partitions.

Network addressing considerations

The following are the network address considerations:

  • Uninterrupted network connectivity is required for shared storage pool operations. The network interface that is used for the shared storage pool configuration must be on a highly reliable network, which is not congested.
  • Ensure that both the forward and reverse lookup for the host name that is used by the VIOS logical partition for clustering resolves to the same IP address.
  • With VIOS Version 2.2.2.0, or later, clusters support Internet Protocol version 6 (IPv6) addresses. Therefore, VIOS logical partitions in a cluster can have host names that resolve to an IPv6 address.
  • To set up clusters on an IPv6 network, IPv6 stateless autoconfiguration is suggested. You can have a VIOS logical partition configured with either IPv6 static configuration or IPv6 stateless autoconfiguration. A VIOS logical partition that has both IPv6 static configuration and IPv6 stateless autoconfiguration is not supported in VIOS Version 2.2.2.0.
  • The host name of each VIOS logical partition that belongs to the same cluster must resolve to the same IP address family, which is either Internet Protocol version 4 (IPv4) or IPv6.
Restrictions:
  • In a cluster configuration, you cannot change the host name of a VIOS logical partition. To change the host name, perform one of the following actions, as applicable:
    • If there are two or more VIOS logical partitions in the cluster, remove the VIOS logical partition from the cluster and change the host name. Subsequently, you can add the VIOS logical partition to the cluster again with the new host name.
    • If there is only one VIOS logical partition in the cluster, you must delete the cluster and change the host name. Subsequently, you can recreate the cluster.
  • You must make changes to the /etc/netsvc.conf file of the VIOS logical partition before creating the cluster. This file is used to specify the ordering of name resolution for networking routines and commands. Later, if you want to edit the /etc/netsvc.conf file, perform the following steps on each VIOS logical partition:
    1. To stop cluster services on the VIOS logical partition, type the following command:
      clstartstop -stop -n clustername -m vios_hostname
    2. Make the required changes in the /etc/netsvc.conf file. Ensure that you do not change the IP address that resolves to the host name that is used for the cluster.
    3. To restart cluster services on the VIOS logical partition, type the following command:
      clstartstop -start -n clustername -m vios_hostname
    Maintain the same ordering of name resolution for all the VIOS logical partitions that are part of the same cluster. You must not make changes to the /etc/netsvc.conf file when you are migrating a cluster from IPv4 to IPv6.
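For example, to resolve host names from the local /etc/hosts file first and then from DNS, the /etc/netsvc.conf file on every VIOS logical partition in the cluster might contain an entry like the following. The ordering shown here is only an illustration; use the ordering that matches your site, and keep it identical on all cluster nodes:

      hosts = local, bind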

Storage provisioning

When a cluster is created, you must specify one physical volume for the repository physical volume and at least one physical volume for the storage pool. The storage pool physical volumes provide storage for the data that is generated by the client partitions. The repository physical volume is used for cluster communication and to store the cluster configuration. The maximum client storage capacity matches the total storage capacity of all storage pool physical volumes. The repository disk must have at least 1 GB of available storage space. The physical volumes in the storage pool must have at least 20 GB of available storage space in total.

Use any method that is available for the SAN vendor to create each physical volume with at least 20 GB of available storage space. Map each physical volume to the Fibre Channel adapter of every VIOS logical partition in the cluster. The physical volumes must be mapped only to the VIOS logical partitions that are connected to the shared storage pool.
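As a sketch, a cluster with one repository disk and two storage pool disks can be created from the VIOS command line with the cluster command. The cluster name, pool name, hdisk names, and host name in this example are illustrative assumptions; substitute the values for your environment:

      cluster -create -clustername clusterA -repopvs hdisk5 -spname poolA -sppvs hdisk6 hdisk7 -hostname viosA1_hostname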

Note: Each VIOS logical partition assigns hdisk names, such as hdisk0 and hdisk1, to all physical volumes that are available through its Fibre Channel ports. A VIOS logical partition might assign different hdisk numbers to the same volumes than another VIOS logical partition in the same cluster. For example, the viosA1 VIOS logical partition can have hdisk9 assigned to a specific SAN disk, whereas the viosA2 VIOS logical partition can have hdisk3 assigned to that same disk. For some tasks, the unique device ID (UDID) can be used to distinguish the volumes. Use the chkdev command to obtain the UDID for each disk.
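For example, to display the UDID of a disk on one VIOS logical partition, you can run the chkdev command (hdisk9 is an assumed device name). The identifier that is reported in the verbose output can then be compared with the output on the other VIOS logical partitions to match the same SAN volume:

      chkdev -dev hdisk9 -verbose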

Cluster communication mode

In VIOS Version 2.2.3.0, or later, the shared storage pool cluster is created in unicast address mode by default. In earlier VIOS versions, the cluster is created in multicast address mode. When an older cluster is upgraded to VIOS Version 2.2.3.0, the communication mode changes from multicast to unicast as part of the rolling upgrade operation.
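To verify the state of the cluster after an upgrade, you can list its status from any node with the cluster command (clusterA is an assumed cluster name). Whether the communication mode appears in this output depends on the VIOS level, so treat this as a sketch:

      cluster -status -clustername clusterA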

Failure group

A failure group refers to one or more physical disks that belong to one failure domain. When the system selects a mirrored physical partition layout, it considers each failure group as a single point of failure. For example, a failure group can represent all the disks that are children of one particular adapter (adapterA versus adapterB), all the disks that are present on one particular SAN (sanA versus sanB), or all the disks that are present in one particular geographic location (buildingA versus buildingB).

Shared storage pool mirroring

The data in the shared storage pool can be mirrored across multiple disks so that the pool can withstand a physical disk failure. Mirroring therefore provides higher reliability and storage availability in the shared storage pool. An existing non-mirrored shared storage pool can be mirrored by providing a set of new disks that matches the capacity of the original failure group. All the new disks become part of the new failure group.
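As an illustration, a second failure group can be added to mirror an existing pool by using the failgrp command. The failure group name (FG2) and hdisk names in this example are assumptions; the new disks must match the capacity of the original failure group:

      failgrp -create -fg FG2: hdisk7 hdisk8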

If one or more disks or partitions of a mirrored pool fail, you receive alerts and notifications from the management console. You must then replace the failed disk with another functional disk. When the disk starts functioning again, or after it is replaced, the data is resynchronized automatically.




Last updated: Thu, April 05, 2018