Configuring the system to create shared storage pools

Learn about configuring the system to create Virtual I/O Server (VIOS) shared storage pools.

Before creating shared storage pools, ensure that all logical partitions are preconfigured by using the Hardware Management Console (HMC) as described in this topic. The maximum supported name lengths, in characters, are as follows:
  • Cluster: 63
  • Storage pool: 127
  • Failure group: 63
  • Logical unit: 127

Configuring the VIOS logical partitions

Configure up to 16 VIOS logical partitions with the following characteristics:
  • Each logical partition must have at least one processor and at least one physical processor of entitlement.
  • The logical partitions must be configured as VIOS logical partitions.
  • The logical partitions must have at least 4 GB of memory.
  • The logical partitions must have at least one physical Fibre Channel adapter.
  • The rootvg device for a VIOS logical partition cannot be included in storage pool provisioning.
  • The associated rootvg device must be installed with VIOS Version 2.2.2.0, or later.
  • The VIOS logical partition must be configured with a sufficient number of virtual server Small Computer System Interface (SCSI) adapter connections for the client logical partitions.
  • The VIOS logical partitions in the cluster require access to all the SAN-based physical volumes in the shared storage pool of the cluster.
Each VIOS logical partition in the cluster must have a network connection, either through an Integrated Virtual Ethernet adapter or through a physical adapter. On VIOS Version 2.2.2.0 or later, clusters support virtual local area network (VLAN) tagging.
Note: In shared storage pools, the Shared Ethernet Adapter must be in threaded mode. For more information, see Network attributes.
Restriction: You cannot use the logical units in a cluster as paging devices for PowerVM® Active Memory™ Sharing or Suspend/Resume features.
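For reference, you can verify the installed VIOS level and the Shared Ethernet Adapter threading attribute from the VIOS command line. The following is an illustrative sketch; ent5 is a placeholder for the Shared Ethernet Adapter device name on your system:

  $ ioslevel                          # displays the installed VIOS level (must be 2.2.2.0 or later)
  $ lsdev -dev ent5 -attr thread      # displays the SEA threading attribute (1 = threaded mode, 0 = interrupt mode)
  $ chdev -dev ent5 -attr thread=1    # switches the SEA to threaded mode, if it is not already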

Configuring client logical partitions

Configure client logical partitions with the following characteristics:

  • The client logical partitions must be configured as AIX® or Linux client systems.
  • The client logical partitions must have at least 1 GB of memory.
  • The associated rootvg device must be installed with the appropriate AIX or Linux system software.
  • Each client logical partition must be configured with a sufficient number of virtual SCSI adapter connections to map to the virtual server SCSI adapter connections of the required VIOS logical partitions.

You can define additional client logical partitions as required.
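Before you map storage to the clients, you can confirm that the corresponding virtual server SCSI adapters exist on each VIOS. The following is an illustrative sketch; vhost names vary by system:

  $ lsmap -all                  # lists every virtual server SCSI adapter (vhost), its client partition ID, and current mappings
  $ lsmap -vadapter vhost0      # shows the mappings of a single virtual server SCSI adapter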

Storage provisioning

When a cluster is created, you must specify one physical volume as the repository physical volume and at least one physical volume as a storage pool physical volume. The storage pool physical volumes provide storage for the data that is generated by the client partitions. The repository physical volume is used for cluster communication and to store the cluster configuration. The maximum client storage capacity matches the total storage capacity of all storage pool physical volumes. The repository disk must have at least 1 GB of available storage space, and the physical volumes in the storage pool must have at least 20 GB of available storage space in total.
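For example, a cluster with one repository physical volume and two storage pool physical volumes might be created with the cluster command, and more VIOS nodes can then be added. The following is a sketch only; the cluster name, pool name, hdisk names, and host names are placeholders for your environment:

  $ cluster -create -clustername clusterA -repopvs hdisk2 -spname poolA -sppvs hdisk3 hdisk4 -hostname viosA1
  $ cluster -addnode -clustername clusterA -hostname viosA2     # adds a second VIOS logical partition to the cluster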

Use any method that is available for the SAN vendor to create each physical volume with at least 20 GB of available storage space. Map the physical volume to the logical partition Fibre Channel adapter of each VIOS in the cluster. The physical volumes must be mapped only to the VIOS logical partitions that are connected to the shared storage pool.

Note: Each VIOS logical partition assigns hdisk names to all physical volumes that are available through its Fibre Channel ports, such as hdisk0 and hdisk1. A VIOS logical partition might assign different hdisk numbers to the same volumes than another VIOS logical partition in the same cluster. For example, the viosA1 VIOS logical partition can have hdisk9 assigned to a specific SAN disk, whereas the viosA2 VIOS logical partition can have the name hdisk3 assigned to that same disk. For some tasks, the unique device ID (UDID) can be used to distinguish the volumes. Use the chkdev command to obtain the UDID for each disk.
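For example, to confirm that hdisk9 on viosA1 and hdisk3 on viosA2 refer to the same SAN disk, you can compare the UDID that the chkdev command reports on each node. The hdisk names follow the example in the note and are placeholders:

  $ chkdev -dev hdisk9 -verbose     # run on viosA1; note the value of the UDID field
  $ chkdev -dev hdisk3 -verbose     # run on viosA2; a matching UDID means both names refer to the same SAN disk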

Cluster communication mode

In VIOS 2.2.3.0 or later, the shared storage pool cluster is created in unicast address mode by default. In earlier VIOS versions, the cluster is created in multicast address mode. When the cluster is upgraded to VIOS Version 2.2.3.0, the communication mode changes from multicast to unicast as part of the rolling upgrade operation.

Failure group

A failure group refers to one or more physical disks that belong to one failure domain. When the system selects a mirrored physical partition layout, it treats the failure group as a single point of failure. For example, a failure group can represent all the disks that are children of one particular adapter (adapterA versus adapterB), all the disks that are present on one particular SAN (sanA versus sanB), or all the disks that are present in one particular geographic location (buildingA versus buildingB).

Shared storage pool mirroring

The data in the shared storage pool can be mirrored across multiple disks within a tier; it cannot be mirrored across tiers. By using the disk mirrors, the pool can withstand a physical disk failure, so mirroring provides higher reliability and storage availability in the shared storage pool. An existing non-mirrored shared storage pool can be mirrored by providing a set of new disks that matches the capacity of the original failure group; all of the new disks belong to the new failure group.
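For example, on VIOS 2.2.3.0 or later, an existing pool can be mirrored by creating a second failure group with the failgrp command. This is a sketch under the assumption that hdisk7 and hdisk8 together match the capacity of the original failure group; the failure group and hdisk names are placeholders:

  $ failgrp -list                             # lists the failure groups that are currently defined in the pool
  $ failgrp -create -fg FG2: hdisk7 hdisk8    # creates failure group FG2 and mirrors the pool data onto it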

If one or more disks or partitions of a mirrored pool fail, you receive alerts and notifications from the management console. You must then replace the failed disk with another functional disk. When the disk functions again or is replaced, the data is resynchronized automatically.
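One way to replace a failed storage pool disk is the pv command that is available on newer VIOS levels; the flags shown here are an assumption, so verify them against the pv command documentation for your VIOS level. The hdisk names are placeholders:

  $ pv -replace -oldpv hdisk3 -newpv hdisk9    # assumed syntax: replaces the failed disk hdisk3 with hdisk9 in the storage pool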



