Host clusters

A host cluster is a group of logical host objects that can be managed together. For example, you can create a volume mapping that is shared by every host in the host cluster. The system uses internal protocols to manage access to the volumes and to ensure data consistency. Host objects can be grouped in a host cluster and share access to volumes. A new volume can also be mapped to a host cluster, which simultaneously maps that volume to all hosts that are defined in the host cluster.

Each host cluster is identified by a unique name and ID, the names of the individual host objects within the cluster, and the status of the cluster. A host cluster can contain up to 128 hosts. However, a host can be a member of only one host cluster.

In the command-line interface, use the lshostcluster command to display the status of a host cluster. A host cluster can have one of the following statuses:

Online
    All hosts in the host cluster are online.
Host degraded
    All hosts in the host cluster are either online or degraded.
Host cluster degraded
    At least one host is offline and at least one host is either online or degraded.
Offline
    All hosts in the host cluster are offline, or the host cluster does not contain any hosts.
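The status rules above can be sketched as a small function. This is an illustrative model only; the function name and the lowercase status strings are assumptions, not the output format of the lshostcluster command.

```python
def host_cluster_status(host_statuses):
    """Derive a cluster status from per-host statuses
    ('online', 'degraded', or 'offline'). Illustrative sketch only."""
    if not host_statuses:
        return "offline"  # a host cluster with no hosts reports offline
    if all(s == "online" for s in host_statuses):
        return "online"
    if all(s == "offline" for s in host_statuses):
        return "offline"
    if any(s == "offline" for s in host_statuses):
        # at least one host offline, at least one online or degraded
        return "host cluster degraded"
    # all hosts are online or degraded, and at least one is degraded
    return "host degraded"
```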

Host clusters can be assigned to an ownership group. An ownership group defines a subset of users and objects within the system. You can create ownership groups to further restrict access to specific resources that are defined in the ownership group. Only users with Security Administrator roles can configure and manage ownership groups.

Ownership can be defined explicitly, or it can be inherited from the user, the user group, or another parent resource, depending on the type of resource. A host cluster is owned if an ownership group is assigned to it explicitly or inherited from the user who creates it. New or existing hosts that are defined in a host cluster inherit the ownership group that is assigned to the host cluster.

Volume mapping

By default, hosts within a host cluster inherit all shared volume mappings from that host cluster, as if those volumes were mapped to each host in the host cluster individually. Hosts in a host cluster can also have their own private volume mappings that are not shared with other hosts in the host cluster. 

With shared mapping, volumes are mapped on a host cluster basis. The volumes are shared by all of the hosts in the host cluster, provided that there are no Small Computer System Interface (SCSI) LUN conflicts among the hosts. A volume that contains data that several hosts need is a typical example of a shared mapping.

A SCSI LUN conflict occurs when multiple volumes are mapped to the same SCSI LUN ID, or when a single volume is mapped to multiple SCSI LUN IDs. If a SCSI LUN conflict occurs, the shared mapping is not created.
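Both conflict conditions can be checked with a simple scan of a host's mapping table. This is a hypothetical sketch; the function name and the (volume, LUN) pair representation are assumptions for illustration.

```python
def has_lun_conflict(mappings):
    """Check a host's mappings, given as (volume, scsi_lun_id) pairs,
    for the two conflict conditions described above. Illustrative only."""
    lun_to_volume = {}
    volume_to_lun = {}
    for volume, lun in mappings:
        if lun_to_volume.get(lun, volume) != volume:
            return True   # two different volumes share one SCSI LUN ID
        if volume_to_lun.get(volume, lun) != lun:
            return True   # one volume is mapped at two SCSI LUN IDs
        lun_to_volume[lun] = volume
        volume_to_lun[volume] = lun
    return False
```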

You can add individual volumes to the host cluster as shared mappings. When a new volume is directly mapped to a host cluster, it is automatically mapped to each of the hosts in the host cluster. The new volume is mapped with the same SCSI LUN to every host in the host cluster. For more information, see the mkvolumehostclustermap command.

With private mapping, individual volumes are directly mapped to individual hosts. These volumes are not shared with any other hosts in the host cluster. A host can maintain the private mapping of some volumes and share other volumes with hosts in the host cluster. The SAN boot volume for a host would typically be a private mapping. Before software level 7.7.1, all volume mappings were considered to be private mappings.

For example, Figure 1 shows two hosts that are members of a host cluster. The volume that contains data is a shared mapping with Host A and Host B. However, Host A and Host B also have a private mapping to their respective boot volumes (LUN A and LUN B).
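The example above can be expressed as a toy model: each host's effective volume set is the cluster's shared mappings plus that host's private mappings. The volume and host names are placeholders for illustration.

```python
# Shared mapping, defined at the host-cluster level.
shared = {"data_vol"}

# Private mappings, one SAN boot volume per host.
private = {
    "hostA": {"boot_a"},
    "hostB": {"boot_b"},
}

def effective_volumes(host):
    """A host sees the cluster's shared volumes plus its private ones."""
    return shared | private[host]
```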

Figure 1. Host cluster that contains private and shared volumes

For more information about how to create a host cluster, see Creating host mappings by using the CLI.