Planning for storage

IBM® Power® Virtualization Center (PowerVC) manages storage volumes, which can be attached to virtual servers. PowerVC uses the term storage provider for any system that provides storage volumes. Storage providers can be integrated devices or pluggable devices.

For supported storage devices, see the storage drivers section in this topic: Hardware and software requirements.

Possible storage configurations

PowerVC can manage any combination of supported devices. In this example, there are two storage providers: an external storage device and a shared storage pool.
This image shows two hosts, each having several virtual machines. There is a shared storage pool and a storage device. Some of the virtual machines on each host connect to the shared storage pool. Some of the virtual machines on each host connect to the storage device.
It is important that your virtual machines can access the necessary storage. Additionally, when a virtual server is deployed, you can only deploy to a host system that can connect to the storage on which the deployment image resides.

Supported storage connectivity combinations

You can use storage connectivity groups to control access to types of storage. A storage connectivity group can be configured to allow access to registered SAN providers only, to one shared storage pool provider only, or to a combination of both. The following table shows the connectivity options available for different types of storage. For example, if you have a boot volume that is connected by NPIV, the data volume must also be connected by NPIV. However, if you have a boot volume that is connected by vSCSI, the data volume can be connected by NPIV or vSCSI.

Each virtual machine is associated with a single storage connectivity group that manages the virtual machine's connectivity to storage volumes. Each storage connectivity group supports a single connectivity type for volumes in the boot set. For example, you cannot have shared storage pool volumes and NPIV volumes in the boot set. Similarly, you cannot have both NPIV and vSCSI data volumes attached to the same virtual machine, although NPIV and shared storage pool data volumes are allowed together. Each storage connectivity group can specify at most one shared storage pool provider. Therefore, you cannot have volumes attached to your virtual machine that come from more than one shared storage pool. However, PowerVC can manage multiple shared storage pool providers and volumes can be attached from each provider to different virtual machines.

According to the following table, both NPIV and vSCSI data volumes are supported with a vSCSI boot volume. However, because a storage connectivity group supports only one connectivity type, a virtual machine with a vSCSI boot volume can have either NPIV or vSCSI data volumes, but not both.

Table 1. Supported storage connectivity options, depending on boot volume and data volume connectivity

  Boot volume connectivity | Shared storage pool data | NPIV data | vSCSI data
  -------------------------+--------------------------+-----------+-----------
  Shared storage pool      |            X             |     X     |
  NPIV                     |                          |     X     |
  vSCSI                    |                          |     X     |     X

Note: Only one type of boot volume connectivity and one type of data volume connectivity are supported.
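The rules in Table 1 can be sketched as a small validation function. This is an illustrative sketch only; the type names and the function itself are invented for this example and are not part of any PowerVC interface.

```python
# Connectivity rules from Table 1; keys are boot volume connectivity types,
# values are the data volume connectivity types allowed with that boot type.
# "ssp" denotes a shared storage pool. These names are illustrative only.
SUPPORTED_DATA_CONNECTIVITY = {
    "ssp":   {"ssp", "npiv"},    # shared storage pool boot volume
    "npiv":  {"npiv"},           # NPIV boot volume
    "vscsi": {"npiv", "vscsi"},  # vSCSI boot volume
}

def is_supported(boot_type, data_types):
    """Return True if the set of data volume connectivity types is allowed
    with the given boot volume connectivity type."""
    types = set(data_types)
    # NPIV and vSCSI data volumes cannot be mixed on one virtual machine,
    # but NPIV and shared storage pool data volumes can coexist.
    if {"npiv", "vscsi"} <= types:
        return False
    return types <= SUPPORTED_DATA_CONNECTIVITY[boot_type]
```

For example, a vSCSI boot volume allows either NPIV or vSCSI data volumes, but not both at once, matching the table above.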

Virtual Fibre Channel connected storage

Direct connectivity from a virtual machine to storage controllers is supported by using Virtual Fibre Channel technology. Virtual Fibre Channel requires an NPIV-capable adapter and requires that PowerVC can access the storage controller and any required Fibre Channel fabric switches to create and attach storage volumes.

Brocade fabrics

Brocade fabrics and virtual fabrics can be registered with PowerVC. After these fabrics are registered, zones are created automatically when a volume is attached to a virtual machine that uses the fabric. When the last volume from a storage device is detached, the zones for that virtual machine are automatically removed. For details, see Working with Brocade Virtual Fabrics.
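The zoning lifecycle described above can be illustrated with a minimal sketch. The class and its bookkeeping are invented for this example; the real zoning is performed on the Brocade fabric by PowerVC, not by code like this.

```python
# Conceptual sketch: a zone between a virtual machine and a storage device
# exists while at least one volume from that device is attached, and is
# removed when the last such volume is detached.
class FabricZones:
    def __init__(self):
        # (vm, storage_device) -> number of attached volumes from the device
        self.zones = {}

    def attach(self, vm, device):
        key = (vm, device)
        # Zone is created on the first attach from this device.
        self.zones[key] = self.zones.get(key, 0) + 1

    def detach(self, vm, device):
        key = (vm, device)
        self.zones[key] -= 1
        if self.zones[key] == 0:
            del self.zones[key]  # last volume detached: zone removed
```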

vSCSI connected storage

PowerVC supports virtual SCSI (vSCSI) connected storage.
Notes:
  • IBM i hosts on IBM XIV® storage systems must be attached by vSCSI due to IBM i and IBM XIV storage limitations.
  • IBM i hosts on EMC VNX and VMAX storage systems must be attached by vSCSI due to IBM i and EMC storage limitations.
Ensure that all patches are applied before using vSCSI connected storage.

vSCSI and host map entries

For any given Virtual I/O Server, only a single host map entry on the storage device is supported. These are the ways host map entries can be used with vSCSI connected storage:
  • Use the existing host entry on each Virtual I/O Server. PowerVC only needs to match one WWPN. PowerVC does not modify the entry on the storage device.
  • Use a PowerVC-created host entry. If no host entry exists, PowerVC creates one that contains all WWPNs that are marked as vSCSI capable. PowerVC does not modify this entry after creation.

PowerVC does not modify any host entries that it finds on storage devices during volume attachment. If any updates to the member WWPNs are needed, you must make those updates manually. PowerVC does not take ownership of the host entries on the storage device because they belong to a Virtual I/O Server and might be in use by that Virtual I/O Server.
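The two host-entry cases above can be sketched as follows. The function and data structures are hypothetical and do not correspond to any PowerVC interface; they only illustrate the matching behavior described in this section.

```python
# Illustrative sketch: use an existing host entry when any of its WWPNs
# matches the Virtual I/O Server (only one match is needed), otherwise
# create an entry containing all vSCSI-capable WWPNs. Existing entries
# are never modified.
def resolve_host_entry(existing_entries, vios_wwpns, vscsi_capable_wwpns):
    for entry in existing_entries:
        if set(entry) & set(vios_wwpns):
            return entry  # used as-is; not updated on the storage device
    return list(vscsi_capable_wwpns)  # PowerVC-created entry
```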

Pluggable storage devices and fabrics

A pluggable storage device is an OpenStack supported storage device. You can register pluggable storage and fabrics with PowerVC. Any storage device supported by an OpenStack driver can be registered, but the level of functionality that it has within PowerVC depends on the driver. PowerVC does not prevent use of the functions provided by the driver, such as deploy, attach and detach volume, create and delete volume, delete virtual machine, and migration. However, functionality is not guaranteed. Additionally, when viewing the details of a pluggable device, some information might be missing if it is not available to PowerVC. Pluggable storage devices cannot be used to manage existing volumes.

To register or edit these devices, use the Storage controller registration or SAN fabric registration REST API or powervc-register CLI command. A storage template is created for each registered pluggable storage device. To edit or create additional storage templates for pluggable storage devices, use the volume type REST APIs. After registration, work with the device the same as any other device, as support allows.
Notes:
  • PowerVC does not support a mix of pluggable fabrics and fabrics for which there is integrated support.
  • PowerVC includes the Cinder drivers that have PowerVC integrated support. The Cinder volume drivers that have no integrated support must be loaded from the same stable Cinder release on which PowerVC is based. Refer to the OpenStack volume drivers page to obtain drivers and information.
  • Pluggable storage drivers do not support vSCSI attachments.

For information about using pluggable storage, see Managing pluggable storage or fabrics.

Shared storage pools

Shared storage pools allow physical volumes on any supported storage controller to be shared across a cluster of Virtual I/O Servers. Instead of interacting directly with external storage controllers or fabric switches, the Virtual I/O Servers share access to the aggregated physical volumes and divide that aggregated storage into storage volumes.

A cluster consists of up to 16 Virtual I/O Servers with a shared storage pool that provides distributed storage access to the Virtual I/O Servers in the cluster. Each cluster requires one physical volume for the repository physical volume and at least one physical volume for the storage pool physical volume. The shared storage pool can be accessed by all the Virtual I/O Servers in the cluster. All the Virtual I/O Servers within a cluster must have access to all the physical volumes in a shared storage pool.

Important: To use a shared storage pool with PowerVC, you must create it externally before registering the host. Then when the host is added to PowerVC, the shared storage pool is added automatically. Maintenance tasks, such as adding a physical volume, backup, and so on, must be done outside of PowerVC.

When deploying a thick image backed by a shared storage pool, the Virtual I/O Server makes a full copy of the image. The location of the copy depends on the workload on the Virtual I/O Server and how your environment is set up. The time it takes to make this full copy depends on the speed of the backing Fibre Channel device, the speed of the Fibre Channel network, the resources allocated to the Virtual I/O Server, and the current workload on the Virtual I/O Server. This copy operation has lower priority on the Virtual I/O Server than I/O requests. Therefore, on a heavily loaded and undersized Virtual I/O Server, the copy operation can take a minute or more per GB to complete.
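As a rough worked example of the worst-case figure above, a back-of-the-envelope estimate can be written as follows. The rate is the "minute or more per GB" quoted for a heavily loaded, undersized Virtual I/O Server; an idle, well-sized Virtual I/O Server copies much faster.

```python
# Illustrative worst-case estimate only; actual copy time depends on the
# backing Fibre Channel device, the network, and the VIOS workload.
def estimated_copy_minutes(image_size_gb, minutes_per_gb=1.0):
    """Estimate thick-copy time for an SSP-backed deploy."""
    return image_size_gb * minutes_per_gb
```

For example, a 60 GB image on a heavily loaded Virtual I/O Server could take on the order of an hour to copy.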

  • Before you create a shared storage pool, see the topic Configuring the system to create shared storage pools for setup information.
  • PowerVC clones volumes to the default tier within a shared storage pool. When using multiple tiers in a shared storage pool, the default tier must be set to the same tier as the source volume. Otherwise, you cannot capture virtual machine images or deploy virtual machines.
  • For more background and setup information about shared storage pools, see the shared storage pool chapter in IBM PowerVM® Enhancements SG24-8198.
  • When you are ready to create a shared storage pool, follow the instructions in this topic: Creating shared storage pools.

Snapshots, consistency groups, and consistency group snapshots

A snapshot is a full copy of a volume at a point in time. Consistency groups allow the volume driver to snapshot a group of volumes at one point in time, producing a set of volumes with consistent data in case of a crash. The volumes in a consistency group must all reside on the same backend storage controller, but they can be attached to different virtual machines. Snapshots and consistency groups are enabled on SVC storage devices by using OpenStack APIs. Consistency groups are deprecated and support will be removed in a future release. Use generic groups instead.

Both snapshots and consistency groups are designed to be used by a higher level orchestration engine or by the administrator. They enable the creation of a point in time copy of data volumes that the administrator or the orchestration engine can use to restore application data to a point in time.

For instructions to create and use snapshots and consistency groups, see Working with snapshots and groups.

For details about the relevant APIs, see Block storage (Cinder) APIs.
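The key property of a group snapshot is that every volume in the group is captured at the same point in time, so restored data is mutually consistent. The following sketch illustrates that idea only; the function and data layout are invented for this example and do not correspond to Cinder objects.

```python
import time

# Conceptual sketch: all volumes in a group are captured at one timestamp,
# so an application restored from the snapshots sees crash-consistent data.
def snapshot_group(volumes):
    point_in_time = time.time()  # single point in time for the whole group
    return [
        {"volume": v["name"], "data": v["data"], "at": point_in_time}
        for v in volumes
    ]
```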

Generic groups and generic group snapshots

A snapshot is a full copy of a volume at a point in time. Generic groups allow the volume driver to snapshot a group of volumes at one point in time, producing a set of volumes with consistent data in case of a crash. The volumes in a generic group must all reside on the same backend storage controller, but they can be attached to different virtual machines. Snapshots and generic groups are enabled by using OpenStack APIs.

Both snapshots and generic groups are designed to be used by a higher level orchestration engine or by the administrator. They enable the creation of a point in time copy of data volumes that the administrator or the orchestration engine can use to restore application data to a point in time.

For instructions to create and use snapshots and groups, see Working with snapshots and groups.

For details about the relevant APIs, see Supported OpenStack Block storage (Cinder) APIs.

Storage connectivity groups

Storage connectivity groups exist only in PowerVC. They are logical groupings of resources that can be used to connect to storage along with rules that specify how to connect. They are not a way of grouping storage; rather they manage your connection policies to storage. A storage connectivity group is associated with a set of Virtual I/O Servers and optionally, sets of Fibre Channel fabrics and ports that are considered storage connectivity candidates for virtual machines during deployment, migration, and when new storage is attached to a virtual machine. You must specify a storage connectivity group when you deploy a virtual machine.

When a new virtual machine is deployed from an image, it will be deployed only to a host that contains at least one VIOS that is part of the specified storage connectivity group. Similarly, a virtual machine can be migrated only within the specified storage connectivity group. This requirement ensures that the source and destination servers have access to the required shared storage pools or external storage controllers. For more information, see the Storage connectivity groups topic.

Specifying Fibre Channel ports

Storage connectivity groups that share a VIOS can use different physical Fibre Channel ports on that VIOS. The PowerVC administrator achieves this by assigning storage port tags to physical Fibre Channel ports on the wanted Virtual I/O Servers. These tags are strings that can be assigned to specific Fibre Channel ports across your host systems. A storage connectivity group can be configured to connect only through Fibre Channel ports that have a specific tag when deploying with NPIV Virtual Fibre Channel connectivity. For more information, see the Storage connectivity groups topic.
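Port tagging amounts to a simple filter over the Fibre Channel ports of the eligible Virtual I/O Servers. The sketch below is illustrative; the tag names and data layout are invented for this example.

```python
# Hypothetical illustration of storage port tags: a storage connectivity
# group restricted to a tag considers only ports carrying that tag; with
# no tag requirement, every port is a candidate.
def candidate_ports(fc_ports, required_tag=None):
    if required_tag is None:
        return list(fc_ports)
    return [p for p in fc_ports if required_tag in p.get("tags", [])]
```

For example, tagging only the ports on a production fabric ensures that NPIV connections for a given storage connectivity group are made through those ports.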

Creating and managing storage volumes

When you register a storage provider with PowerVC, a default storage template is created for that provider, although additional storage templates can be created for each provider. Storage templates let you specify properties of a storage volume, such as the storage provider and provisioning method.

When you create a new storage volume, you must select a storage template. All of the properties that are specified in the storage template are applied to the new volume. The new volume is created on the storage provider that is specified in the storage template. A storage template must also be specified when deploying a new virtual server to control the properties of the virtual server's boot volume.
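Conceptually, a storage template is a bundle of properties that is applied to each volume created from it. The property names in this sketch are examples only, not the actual template schema.

```python
# Illustrative sketch: every property in the selected template is applied
# to the new volume, including the storage provider it is created on.
def create_volume(name, size_gb, template):
    volume = {"name": name, "size_gb": size_gb}
    volume.update(template)  # provider, provisioning method, etc.
    return volume

# Example (hypothetical) template for thin-provisioned volumes on one provider.
thin_template = {"provider": "svc-1", "provisioning": "thin"}
```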

Notes:
  • When adding an IBM Storwize® V7000 that is virtualized by a SAN Volume Controller (SVC), the internal object name does not support double-byte characters. The valid characters are: alphanumeric characters, underscore (_), period (.), hyphen (-), and space. However, PowerVC accepts double-byte characters when naming these volumes. Any double-byte characters that you use in the volume name for PowerVC are changed to underscores in the internal name.
  • When attaching a volume to a virtual machine that has boot or data volumes attached with an NPIV connection, the virtual machine health must be OK and the RMC state of the virtual machine must be active. If the health is not OK or the RMC state is not active, the volume attach fails.
  • A single instance of an IBM XIV storage system can attach up to 511 volumes per host mapping.
  • When creating an XIV storage volume that is smaller than 17 GB, a 17 GB volume is created in XIV. This volume is not shown as 17 GB unless it is unmanaged and then managed again.
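The internal-name substitution described in the first note can be sketched as a simple character filter. The function name is illustrative; only the valid character set comes from the note above.

```python
import re

# Valid internal-name characters per the note above: alphanumerics,
# underscore, period, hyphen, and space. Anything else (such as a
# double-byte character) is changed to an underscore.
def internal_volume_name(display_name):
    return re.sub(r"[^A-Za-z0-9_. -]", "_", display_name)
```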

PowerVC can manage pre-existing storage volumes. You can select them when you register the storage device or at any later time. Pre-existing storage volumes do not have an associated storage template.

For optimal performance, it is recommended that you use PowerVC to manage at most 20,000 storage volumes, with at most 128 volumes per virtual machine. When using an IPG in a storage connectivity group, the number of volumes attached to a virtual machine can scale beyond 128 attachments. For information on limits and restrictions for supported storage providers, see Planning for storage providers. To overcome limits for a storage provider, use multiple storage providers.

Setting up volume mirroring on the IBM Storwize family of controllers

The IBM Storwize family of controllers supports creating a local volume mirror within a second pool on the same storage controller, or a mirror on a stretched cluster partner. The volume mirror allows a volume to remain accessible even when a managed disk (MDisk) that the volume depends on becomes unavailable. To create a volume with a volume mirror, follow these steps:
  1. Create a storage template that specifies a pool to create the mirror in.
  2. Create a volume and use that storage template.
If the volume is removed, the mirror is also removed. To determine which controllers are in the IBM Storwize family, see IBM Storwize family and IBM Storwize V3500.