Storage connectivity groups

Storage connectivity groups exist only in PowerVC. They are logical groupings of resources that can be used to connect to storage along with rules that specify how to connect. They are not a way of grouping storage; rather they manage your connection policies to storage. A storage connectivity group is associated with a set of Virtual I/O Servers and optionally, sets of Fibre Channel fabrics and ports that are considered storage connectivity candidates for virtual machines during deployment, migration, and when new storage is attached to a virtual machine. You must specify a storage connectivity group when you deploy a virtual machine.

All storage traffic for a virtual machine is handled by one or more Virtual I/O Servers. Storage connectivity groups can be used to isolate storage traffic for certain workloads to specific groups of Virtual I/O Servers. Storage connectivity groups can also be used to isolate the traffic to specific physical Fibre Channel ports on the Virtual I/O Servers, specific fabrics, or both, when the storage volumes are NPIV-attached from registered SAN controllers. In addition, they can be used to specify required redundancy of Virtual I/O Servers, fabrics, and host ports on a fabric in the manner that volumes are attached to virtual machines. Isolation and redundancy capabilities vary with different connectivity options (NPIV, virtual SCSI, and shared storage pool).
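
To make these concepts concrete, the following minimal Python sketch models a storage connectivity group as a plain data structure: member Virtual I/O Servers, an optional fabric set and port tag, and the redundancy settings described above. The class and field names are illustrative assumptions only, not the PowerVC API.

  from dataclasses import dataclass
  from typing import Optional, Set


  @dataclass
  class StorageConnectivityGroup:
      """Hypothetical data model; not the PowerVC implementation."""
      name: str
      vios_members: Set[str]               # Virtual I/O Servers allowed to carry storage traffic
      fabrics: Optional[Set[str]] = None   # restrict to these fabrics; None means all managed fabrics
      port_tag: Optional[str] = None       # only FC ports with this tag are candidates
      include_untagged_ports: bool = True
      min_vios_per_attachment: int = 1     # "VIOS Redundancy For Volume Connectivity"
      ports_per_fabric_per_vios: int = 1   # host-side FC ports chosen per fabric

      def allows_vios(self, vios_name: str) -> bool:
          # A VIOS is a connectivity candidate only if it is a member of the group.
          return vios_name in self.vios_members


  # Example: production traffic restricted to two Virtual I/O Servers, with
  # dual-VIOS connectivity required for every volume attachment.
  production = StorageConnectivityGroup(
      name="Production",
      vios_members={"vios1", "vios2"},
      port_tag="Production",
      min_vios_per_attachment=2,
  )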

Notes:
  • If you back up a storage connectivity group from a previous release of PowerVC and restore it to 1.3.0, and any of the fabrics that it lists are no longer managed by PowerVC, the storage connectivity group will not be usable. To work around this problem, ensure that PowerVC is managing all fabrics listed in any storage connectivity group that you restore to PowerVC 1.3.0, or update to 1.3.x (where x > 0) and then restore there.
  • Storage connectivity groups are not used in the converged infrastructure technical preview.

Basic use of storage connectivity groups

The basic use of storage connectivity groups is to isolate storage traffic for certain workloads to specific groups of Virtual I/O Servers. A workload is one or more deployed virtual machines; PowerVC supports one virtual machine per workload. For example, an organization with Production, Development, and Test workloads could create a custom storage connectivity group for each type of workload. Development workloads could be defined so that their storage traffic is never routed through the same Virtual I/O Servers as Production workloads. Alternatively, the two workload types could be deployed with storage connectivity groups that specify shared Virtual I/O Servers but different Fibre Channel ports. A storage connectivity group can be configured to allow access to registered SAN providers only, to one shared storage pool provider only, or a combination of both.

When a new virtual machine is deployed from an image, you must specify a storage connectivity group. The virtual machine will be deployed only to a host that contains at least one storage-ready Virtual I/O Server that is part of the storage connectivity group. Similarly, a virtual machine can be migrated only within the storage connectivity group that is already associated with that virtual machine. This ensures that the source and destination servers have access to the required shared storage pool or external storage controllers.

A Virtual I/O Server is storage-ready if it is ready to manage storage by meeting these requirements:
  • The Virtual I/O Server state is Running.
  • The VIOS Resource Monitoring and Control (RMC) state is Active or Busy. Many Virtual I/O Server functions depend on this service. Therefore, an Inactive state for all Virtual I/O Servers on a host will prevent deployments.

    A Busy state means that the Hardware Management Console (HMC) could not connect to the Virtual I/O Server or could not get information from the Virtual I/O Server over RMC, even if the RMC state is Active on the HMC.

    The Busy state might be a temporary condition, and it will not prevent deploys from being requested. However, a Busy state might indicate an ongoing issue for the Virtual I/O Server and it could result in the Virtual I/O Server not being used for storage attachment to the new virtual machine.

    If the Busy state continues, it can cause a deployment failure. If the RMC state is Busy, wait 15 minutes, then refresh the storage connectivity group properties. If the RMC state is still Busy, investigate the Virtual I/O Server health by using the HMC or by logging into the Virtual I/O Server.

  • One or more owned physical Fibre Channel ports are ready.
    • For NPIV connectivity, the Fibre Channel port status must be OK. For vSCSI connectivity, the status must be OK or an NPIV-specific problem status, such as NPIV: Unknown Fabric or NPIV: Unsupported Adapter.
    • The port must allow connectivity. To verify or change this setting, access the PowerVC user interface and go to Configuration > Fibre Channel Port Configuration. The Connectivity column value cannot be None.
    • If specified, the port tag on the port must match the tag that is set on the storage connectivity group. If a port tag is not specified on the port, then the Fibre Channel port is considered ready if Include untagged ports is selected on the storage connectivity group.

      To change the port tag, go to Configuration > Fibre Channel Port Configuration.

    • For NPIV connectivity, the port must have a Fabric value that is contained within the fabric set associated with the storage connectivity group.
  • One or more applicable storage providers are not in an Error state. An applicable storage provider is one that is supported by the Virtual I/O Server and allowed by the storage connectivity group.
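
As a rough illustration, the checklist above can be restated as a pair of predicates. This is a sketch under assumed attribute names (state, rmc_state, fc_ports, tag, fabric, applicable_providers) that reuses the hypothetical StorageConnectivityGroup model sketched earlier; it is not how PowerVC evaluates readiness internally.

  def port_is_ready(port, scg, connectivity="npiv"):
      # NPIV requires an OK status; vSCSI also tolerates NPIV-specific problem statuses.
      ok_statuses = {"OK"}
      if connectivity == "vscsi":
          ok_statuses |= {"NPIV: Unknown Fabric", "NPIV: Unsupported Adapter"}
      if port.status not in ok_statuses:
          return False
      if port.connectivity == "None":           # port does not allow connectivity
          return False
      if scg.port_tag:                          # tag on the group must match the port tag
          if port.tag:
              if port.tag != scg.port_tag:
                  return False
          elif not scg.include_untagged_ports:  # untagged ports need "Include untagged ports"
              return False
      if connectivity == "npiv" and scg.fabrics is not None and port.fabric not in scg.fabrics:
          return False
      return True


  def is_storage_ready(vios, scg, connectivity="npiv"):
      return (
          vios.state == "Running"
          and vios.rmc_state in ("Active", "Busy")  # Busy may be temporary and does not block requests
          and any(port_is_ready(p, scg, connectivity) for p in vios.fc_ports)
          and any(prov.state != "Error" for prov in vios.applicable_providers)
      )
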
Important:

A storage connectivity group can be used to deploy a virtual machine if only one Virtual I/O Server is storage-ready on a host and if the storage connectivity group does not require Virtual I/O Server redundancy. In that situation, only one Virtual I/O Server will be used for storage connectivity. However, you might have configured your cloud environment for dual VIOS on each host. In this case, ensure that the storage connectivity group being used specifies at least two Virtual I/O Servers per host and specifies At least 2 Virtual I/O Servers for VIOS Redundancy For Volume Connectivity.

The Virtual I/O Server redundancy settings in the storage connectivity group are applied per connectivity type and per volume attachment. A volume is attached to a virtual machine by using a single connectivity type. That connectivity type depends on whether it is a boot or data volume and, possibly, what type of storage provider the volume is being served from.

For example, assume that VIOS redundancy for storage is not desired and a shared storage pool is being used. Therefore, a storage connectivity group is defined for the shared storage pool that specifies a redundancy of Exactly 1 Virtual I/O Server and allows volume connections coming from a separate SAN provider. The boot volume comes from the shared storage pool and is attached through a connection on VIOS_1. Then a data volume coming from a SAN provider, such as XIV® or Hitachi, is attached. In this case, VIOS_2 is chosen for NPIV connectivity because that VIOS owns ports with the most free NPIV connections remaining. At this point, the virtual machine has storage connections through both Virtual I/O Servers, but they are not redundant connections because they are independent volumes.

Similarly, when a second data volume is attached from a SAN provider, PowerVC can choose to attach it through VIOS_1 if that now sorts first for NPIV attachment. At this point both Virtual I/O Servers are utilized for NPIV connectivity, but there is still no Virtual I/O Server redundancy per storage volume. Independently, there might be fabric redundancy or Fibre Channel path redundancy within a fabric, depending on other storage connectivity group properties and the configuration of the rest of the environment.
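
As a sketch of the ordering described in this example, the VIOS whose ready ports have the most free NPIV connections remaining sorts first for an NPIV attachment. The free-connection counts below are made-up values used only for illustration.

  def pick_vios_for_npiv(candidates):
      """candidates: list of (vios_name, free_npiv_connections_per_ready_port)."""
      return max(candidates, key=lambda c: sum(c[1]))[0]


  # After the boot volume is attached through VIOS_1, VIOS_2 can sort first for the
  # first NPIV data volume because its ports have more free NPIV connections left.
  print(pick_vios_for_npiv([("VIOS_1", [30, 28]), ("VIOS_2", [64, 62])]))  # VIOS_2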

Using storage connectivity groups as described can allow better utilization and sharing of existing host resources. For example, you can use storage connectivity groups instead of manually trying to provision different types of workloads to different hosts.
Note: When using a storage connectivity group that specifies vSCSI attached storage, a Virtual I/O Server could contain Fibre Channel ports that are not cabled to a fabric but are still identified by WWPN in the host group entry on the back-end storage. When the vSCSI disks are discovered on the Virtual I/O Server, a warning might be generated because fewer paths than expected will be found while discovering a disk.

Basic use of storage connectivity groups - Example

This example illustrates how to use storage connectivity groups to divide your workloads into different connectivity groups, such as production, development, and test. The groups have different Virtual I/O Servers, and possibly different hosts and ports. These groups are set up to best suit your environment, but typically you dedicate more, or faster, resources to your production environment than to the others. Given this scenario, assume that you perform a large number of deployments to all three groups. Depending on your environment, your development and test deployments might eventually exhaust their host-side resources. At that point, development and test deployments fail, but because you allocated more resources to production workloads, all of your production deployments still succeed. However, if you had not split your deployments between storage connectivity groups, a production workload deployment might fail after resources run out, while development or test deployments still succeed.

The example also demonstrates the use of development storage versus a production storage device. Storage connectivity groups can be configured with direct connectivity to SAN storage devices using NPIV or vSCSI connectivity (labeled as a Production storage device here), with connectivity to a shared storage pool (labeled as a Development storage device here), or both. The type of storage device might affect the performance of storage-intensive workloads.

By carefully allocating different amounts of CPU, memory, and other resources to your Virtual I/O Servers, you can achieve different levels of performance for storage-intensive workloads that are deployed with different storage connectivity groups that contain these Virtual I/O Servers. The deployed virtual machines must be sized appropriately as well.

The following figure shows a common configuration. In this example, a client runs development, production, and test virtual machines across two hosts and uses different Virtual I/O Server pairs for each. 

If you deploy an image and specify Development for the storage connectivity group, the virtual machine could be deployed to host 1 or host 2. It is connected to the storage in the Development shared storage pool by using VIOS 1 and 2 or VIOS 3 and 4.

If you deploy an image and specify Production for the storage connectivity group, the virtual machine could be deployed to host 1 or host 2. It is connected to the Production storage device by using VIOS 5 and 6 or VIOS 7 and 8.

If you deploy an image and specify Test for the storage connectivity group, the virtual machine could be deployed to host 1 or host 2. It is connected to the Production storage device by using VIOS 9 or 10.
This image shows the setup that is described in the previous paragraphs.

Supported storage connectivity combinations

You can use storage connectivity groups to control access to types of storage. A storage connectivity group can be configured to allow access to registered SAN providers only, to one shared storage pool provider only, or a combination of both. This table shows the connectivity options available for different types of storage. For example, if you have a boot volume connected by NPIV, the data volume must also be connected by NPIV. However, if you have a boot volume that is connected by vSCSI, then the data volume can be connected by NPIV or vSCSI.

Each virtual machine is associated with a single storage connectivity group that manages the virtual machine's connectivity to storage volumes. Each storage connectivity group supports a single connectivity type for volumes in the boot set. For example, you cannot have shared storage pool volumes and NPIV volumes in the boot set. Similarly, you cannot have both NPIV and vSCSI data volumes attached to the same virtual machine, although NPIV and shared storage pool data volumes are allowed together. Each storage connectivity group can specify at most one shared storage pool provider. Therefore, you cannot have volumes attached to your virtual machine that come from more than one shared storage pool. However, PowerVC can manage multiple shared storage pool providers and volumes can be attached from each provider to different virtual machines.

According to the following table, both NPIV and vSCSI data volumes are supported for a vSCSI boot volume. However, because a virtual machine cannot have both NPIV and vSCSI data volumes attached, if you have a vSCSI boot volume you can use either NPIV or vSCSI data volumes, but not both.

Table 1. Supported storage connectivity options, depending on boot volume and data volume connectivity
Boot volume connectivity | Shared storage pool data | NPIV data | vSCSI data
Shared storage pool      | X                        | X         |
NPIV                     |                          | X         |
vSCSI                    |                          | X         | X
Note: Only one type of boot volume connectivity and one type of data volume connectivity is supported.
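
The table and the rules above can be combined into a small lookup, sketched below. The dictionary keys and the helper function are illustrative assumptions only; "ssp" stands for a shared storage pool volume.

  # Allowed data volume connectivity for each boot volume connectivity (Table 1).
  ALLOWED_DATA_CONNECTIVITY = {
      "ssp":   {"ssp", "npiv"},   # shared storage pool boot volume
      "npiv":  {"npiv"},
      "vscsi": {"npiv", "vscsi"},
  }


  def data_volume_allowed(boot_type, existing_data_types, new_data_type):
      if new_data_type not in ALLOWED_DATA_CONNECTIVITY[boot_type]:
          return False
      # A virtual machine cannot mix NPIV and vSCSI data volumes, although
      # NPIV and shared storage pool data volumes are allowed together.
      combined = set(existing_data_types) | {new_data_type}
      return combined <= {"ssp", "npiv"} or len(combined) == 1


  print(data_volume_allowed("vscsi", {"npiv"}, "vscsi"))  # False: would mix NPIV and vSCSI data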

Default storage connectivity groups

PowerVC automatically defines these default storage connectivity groups. Default storage connectivity groups cannot be deleted and can only be modified in limited ways.

Note: If a shared storage pool provider is removed, then the default storage connectivity group for the provider is also automatically removed.
Any host, all VIOS
This connectivity group is used for NPIV connectivity to registered SAN controllers. It includes all Virtual I/O Servers, and virtual machines that use it have NPIV-connected volumes in the boot set and only NPIV-connected data volumes. This group can only be disabled if there is an enabled storage connectivity group that can access those storage controllers.
Any host in shared_storage_pool_provider_display_name
This connectivity group is created for each registered shared storage pool. This storage connectivity group includes all shared storage pool cluster members that have been discovered. It can only be disabled if there is an enabled storage connectivity group that can access that shared storage pool.
auto-BootType_DataType
When a pre-existing virtual machine connected by vSCSI is brought under PowerVC management, a new storage connectivity group is not immediately created for it. However, if that virtual machine is later part of a storage operation, such as attach or migration, then one of these default storage connectivity groups is created before the storage operation happens. The storage connectivity group has the boot and data connectivity type properties that the virtual machine requires, and the group's name is based on those values.
For example, a virtual machine booting from a SAN volume attached as a physical vSCSI device to the VIOS and having a SAN data volume attached by NPIV connectivity would be associated with an auto-created storage connectivity group named auto-vscsi_npiv.
Note: If the virtual machine does not have any attached volumes, the data connectivity type is set to NPIV by default.
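
A trivial sketch of how such a group name is composed, assuming lowercase connectivity-type tokens as in the example above:

  def auto_scg_name(boot_type, data_type="npiv"):
      # Data connectivity defaults to NPIV when the virtual machine has no attached data volumes.
      return "auto-{}_{}".format(boot_type, data_type)


  print(auto_scg_name("vscsi", "npiv"))  # auto-vscsi_npiv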

Storage connectivity groups, fabrics, and Fibre Channel ports

Each Virtual I/O Server can be a member of multiple storage connectivity groups. Storage connectivity groups that share a Virtual I/O Server can use different physical Fibre Channel ports on that Virtual I/O Server. The following illustration shows a single host that contains two Virtual I/O Servers. Each Virtual I/O Server uses physical Fibre Channel ports fcs0 through fcs3 to connect to redundant fabrics A and B. Storage connectivity group Production uses ports fcs0 and fcs1. Storage connectivity group Development uses ports fcs2 and fcs3.
This image shows the setup that is described in the previous paragraphs.

This setup can be facilitated by using Fibre Channel port tags. These tags are strings that can be assigned to specific Fibre Channel ports across your host systems. A storage connectivity group can be configured to connect only through Fibre Channel ports with a specific tag. In the previous example, you would add the tag Production to ports fcs0 and fcs1, and you would add the tag Development to ports fcs2 and fcs3 for every Virtual I/O Server.
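
A minimal sketch of that tag-based filtering, with hypothetical port names and a hypothetical helper:

  port_tags = {"fcs0": "Production", "fcs1": "Production",
               "fcs2": "Development", "fcs3": "Development"}


  def candidate_ports(scg_tag):
      """Return the Fibre Channel ports a storage connectivity group with this tag may use."""
      return sorted(name for name, tag in port_tags.items() if tag == scg_tag)


  print(candidate_ports("Production"))   # ['fcs0', 'fcs1']
  print(candidate_ports("Development"))  # ['fcs2', 'fcs3']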

To use Fibre Channel port tags, follow these steps:
  1. Add tags to your Fibre Channel ports:
    1. Access the Configuration page and click Fibre Channel Port Configuration.
    2. Add tags in the Port Tag field.
  2. Associate a tag with a storage connectivity group:
    1. Access the Configuration page and click Storage Connectivity Groups.
    2. Edit or create a storage connectivity group.
    3. Select Restrict image deployments to host Fibre Channel ports tagged with, then select the tag.
Using fabrics with storage connectivity groups:
  • PowerVC lets you register multiple redundant or non-redundant fabrics. However, only redundant fabrics can be associated with a specific virtual machine. In that situation, it is recommended that you have a storage connectivity group that specifies the redundant set of fabrics that should be considered for volume connectivity.
  • By default, a new custom storage connectivity group will be associated with all managed fabrics that member Virtual I/O Servers are cabled to. To change this default setting, deselect Dynamically associate all fabrics that are connected to the VIOSes in this storage connectivity group and choose a static set of fabrics.
  • If you place a Virtual I/O Server in a storage connectivity group that has access to external storage controllers and you are using redundant Fibre Channel fabrics, at least one Fibre Channel port on each redundant fabric should be usable by that storage connectivity group. If only one Fibre Channel port is usable, then the resulting virtual machine will not have the benefit of redundant fabrics.
  • If a virtual machine has connectivity through a Virtual I/O Server that is connected to a set of redundant fabrics and you want to migrate this virtual machine to a new host, then the target host must have an applicable Virtual I/O Server that is also connected to the same fabric set.
  • By default, PowerVC chooses one host-side Fibre Channel port per fabric (for a given Virtual I/O Server) to connect through. If you need more port redundancy, you can create a storage connectivity group that specifies a number greater than one for For each VIOS, number of ports to connect per fabric. When using this setting, all hosts in the storage connectivity group should have sufficient ports to accommodate this requirement. Images will never be deployed to a host that does not have sufficient ports.
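
The last point can be sketched as follows: for one Virtual I/O Server, group its ready ports by fabric, require the configured number of ports per fabric, and reject the host if any fabric falls short. The names and data shape are illustrative assumptions.

  from collections import defaultdict


  def ports_per_fabric(ready_ports, ports_required_per_fabric=1):
      """ready_ports: iterable of (port_name, fabric). Returns a fabric -> chosen ports map,
      or None if any fabric cannot satisfy the requirement (the host is not eligible)."""
      by_fabric = defaultdict(list)
      for name, fabric in ready_ports:
          by_fabric[fabric].append(name)
      chosen = {}
      for fabric, names in by_fabric.items():
          if len(names) < ports_required_per_fabric:
              return None   # images are never deployed to hosts without sufficient ports
          chosen[fabric] = sorted(names)[:ports_required_per_fabric]
      return chosen


  print(ports_per_fabric([("fcs0", "A"), ("fcs1", "B"), ("fcs2", "A")], 2))
  # None: fabric B has only one ready port, so this host would not be chosen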

Working with storage connectivity groups

You might want to create more groups to allow deployments to subsets of Virtual I/O Servers. To create, view, modify, or remove storage connectivity groups, go to Configuration > Storage Connectivity Groups.

When you create a storage connectivity group, you can choose whether a new Virtual I/O Server is automatically added to the group when it is detected. If the group specifies a shared storage pool provider, the newly detected Virtual I/O Server must be a member of the shared storage pool cluster to be automatically added to the group.

Working with initiator port groups

Initiator port groups (IPGs) define the set of VIOS ports to be used for volume attachment when NPIV storage is used. This feature enables a different set of VIOS ports to be used for each type of volume attachment and makes it possible to scale the number of volumes that can be attached to a virtual machine.

Multiple IPGs can be defined per storage connectivity group. Each IPG must have ports from all of the VIOS members of the storage connectivity group. A Fibre Channel port can be part of only one IPG.

When no IPG is defined, all valid ports on a VIOS are used for all volume attachments in a virtual machine. With IPGs, a subset of ports can be selected for a given volume attachment. The ports in a given IPG should match all of the VIOS and fabric settings of the storage connectivity group. For example, if you selected a VIOS redundancy of at least 2 for a storage connectivity group, then the IPG should have ports from 2 Virtual I/O Servers for each host.
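
The membership rules above can be sketched as a validation check. An IPG is modelled here as a set of (vios_name, fc_port_name) pairs; the names are assumptions, and this is not the PowerVC implementation.

  def ipg_is_valid(ipg, scg_vios_members, other_ipgs):
      # Each IPG must include ports from every VIOS member of the storage connectivity group.
      if {vios for vios, _ in ipg} != set(scg_vios_members):
          return False
      # A Fibre Channel port can belong to only one IPG.
      ports_used_elsewhere = set().union(*other_ipgs) if other_ipgs else set()
      return ipg.isdisjoint(ports_used_elsewhere)


  ipg_a = {("host1_vios", "fcs0"), ("host2_vios", "fcs0")}
  ipg_b = {("host1_vios", "fcs1"), ("host2_vios", "fcs1")}
  print(ipg_is_valid(ipg_a, {"host1_vios", "host2_vios"}, [ipg_b]))  # True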

To create an initiator port group, follow these steps:
  1. On the Create Storage Connectivity Group page, select Define groups of VIOS FC ports allowed for I/O connectivity and click Create.
  2. Specify a Name. IPG names must be unique within a storage connectivity group.
  3. Select the ports that are part of the IPG. Ports from all Virtual I/O Servers that are part of the storage connectivity group must be added here. During live migration, the ports within the same IPG are selected on the target host. For example, if Host1 has host1_vios and Host2 has host2_vios added to the storage connectivity group, the IPG should include a Fibre Channel port from host1_vios and a Fibre Channel port from host2_vios so that Host2 is a valid target for live migration.
Considerations and limitations
  • You cannot edit or delete an initiator port group if any virtual machines are associated with the storage connectivity group.
  • To define which IPG must be used for a volume, specify the IPG name in the storage template. For details, see Configuring initiator port groups.
  • If you do not specify an IPG in the storage template, PowerVC automatically selects an IPG based on how many volumes per instance are using its ports, choosing the least utilized one (see the sketch after this list).
  • When you bring virtual machines that were created by using IPGs under PowerVC management, you must set the correct storage connectivity group by using the powervc-edit-scg command. Use the powervc-config storage set-template command to set the appropriate storage template for volumes attached to the virtual machines.
  • When an IPG is used in a storage connectivity group, the number of volumes attached to a virtual machine can scale beyond 128 volume attachments.
  • The Initiator Port Group feature does not work with PowerMax storage when multiple groups of initiators are used within the same virtual machine. With PowerMax, initiator port groups can be used to isolate different storage providers from each other. For example, you can choose one set of initiator ports for storage1 (PowerMax) and a different set of initiator ports for storage2 (FC Tape).
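
A sketch of that automatic choice, picking the IPG whose ports currently carry the fewest volume attachments for the instance (the names and the utilization metric are illustrative assumptions):

  def select_ipg(volumes_per_ipg):
      """volumes_per_ipg: IPG name -> volumes already attached for this instance through its ports."""
      return min(volumes_per_ipg, key=volumes_per_ipg.get)


  print(select_ipg({"ipg_boot": 5, "ipg_data": 2}))  # ipg_data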