Planning for EMC PowerMax storage
PowerVC supports the Dell EMC PowerMax (VMAX) storage driver that is managed through Unisphere (REST) instead of an SMI-S Provider. The SMI-S based driver is not supported.
- PowerMax storage provider management is done with a Unisphere IP address or host name and credentials.
- PowerMax is essentially a rebranding of the VMAX product line. In this document, the term VMAX applies to both VMAX and PowerMax arrays. In the cinder driver documentation, the term PowerMax likewise applies to both.
- If you upgrade to the latest version of PowerVC with an existing VMAX storage provider that was managed with an SMI-S Provider IP address or host name and credentials, the storage provider transitions to Error state because that is no longer a supported configuration. In the user interface, you can edit the storage provider and specify the managing Unisphere IP address and credentials. The port defaults to 8443. After you change the provider's registration details, it switches to the supported Unisphere-managed configuration and, after a few minutes, transitions to a healthy state.
- The Unisphere based storage provider attaches volumes to the virtual machines with a different masking view structure (using cascaded storage groups) than the SMI-S based provider used. Existing virtual servers that were deployed on the previous SMI-S based provider continue to function. However, for full capability, you must consolidate backend masking views. For instructions, see Converting legacy masking views and storage groups.
- It is mandatory to use port groups with non-overlapping ports on the PowerMax array. Having the same Director ports in more than one port group might cause issues when PowerVC adds or removes zones on the fabric.
- Before using PowerMax storage
- Support requirements
- Considerations for using PowerMax storage
- Converting legacy masking views and storage groups
Before using PowerMax storage
Follow these steps before using PowerMax storage.

| Task | Details |
| --- | --- |
| Ensure that all host requirements are met. | See Hardware and operating system requirements for PowerVC hosts. Verify that your PowerVM® host and Fibre Channel card microcode are at the current levels, and update them if necessary. If vSCSI connectivity is used to VMAX storage, review the support requirements in the following section. |
| Ensure that your Symmetrix VMAX array is at a supported level. | Supported models: |
| Ensure that the eLicensing and Unisphere requirements are met. | For more information about the EMC VMAX driver support requirements and restrictions, see the Dell EMC VMAX iSCSI and FC drivers documentation. PowerVC uses only the Fibre Channel driver. See the "System requirements and licensing" section and its subsections in that documentation to ensure that your environment meets the requirements for the driver. |
| Create one or more port groups if port groups are not yet defined. | The port groups must contain the VMAX target ports (also called front-end Director ports). You can use Unisphere for VMAX management or symcli to perform this task. For the commands to use, see EMC Solutions Enabler Symmetrix CLI on the EMC Online Support website, which requires an EMC account. Only one port group is required. However, if you have several VMAX ports on each fabric, splitting the VMAX ports between two or more port groups allows better load balancing and scale in a large cloud provisioning environment. Port groups configured for PowerVC use must have ports that are cabled to one or more Fibre Channel fabrics that are connected to your hosts. |
| Ensure that you understand all support requirements. | See the following section. |
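The non-overlapping port rule noted earlier can be checked before registering the port groups with PowerVC. The following sketch validates that no front-end Director port appears in more than one port group; the group names and port identifiers are illustrative, not taken from a real array.

```python
def find_overlapping_ports(port_groups):
    """Return the set of director ports that appear in more than one port group."""
    seen, overlaps = set(), set()
    for ports in port_groups.values():
        for port in ports:
            if port in seen:
                overlaps.add(port)
            seen.add(port)
    return overlaps

# Hypothetical port group layout: FA-2D:4 is mistakenly shared by two groups.
port_groups = {
    "PVC_PG_A": {"FA-1D:4", "FA-2D:4"},
    "PVC_PG_B": {"FA-3D:4", "FA-2D:4"},
}
print(find_overlapping_ports(port_groups))  # -> {'FA-2D:4'}
```

An empty result means the configuration satisfies the requirement; any ports reported should be removed from all but one group before use.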
Support requirements
You must be aware of the following requirements when using PowerMax storage with PowerVC:
- For NPIV connectivity using EMC PowerPath installed on the
virtual machine, the following apply:
- PowerVC can capture and deploy AIX® 6.1 or 7.x virtual machines with EMC PowerPath enabled for the boot disks. PowerPath driver version 5.7 SP2 is required.
- For AIX 7.1 virtual machines that use EMC PowerPath, deployments must use compute templates that specify at least 2 GB of memory.
- IBM® i virtual machines are not supported for NPIV connectivity.
- For vSCSI attachments, the supported multipathing software driver solutions are AIX path control module (PCM) and EMC PowerPath.
- Migration is supported only between Virtual I/O Servers that use the same multipathing software driver solution.
- PowerVC supports only secure communication to the Unisphere/REST server (HTTPS). You cannot register VMAX providers on HTTP-only ports.
Unisphere for PowerMax 9.2.1 series is required to run Dell EMC PowerMax Cinder driver for the Wallaby release.
Considerations for using PowerMax storage
Be aware of the following considerations when using PowerMax storage with PowerVC:
- When attaching volumes out-of-band
(outside of PowerVC) to a Virtual I/O Server where you are also using vSCSI connections to a
virtual machine, volumes that are attached directly to the Virtual I/O Server must be on Fibre Channel ports that PowerVC is not using for vSCSI connectivity.
For example, assume the Virtual I/O Server gets its rootvg disk from a managed VMAX array. The rootvg volume should be masked by using Virtual I/O Server ports that are dedicated to this purpose, so that the VMAX driver in PowerVC does not mistakenly remove the mapping when a virtual machine volume is detached. The Fibre Channel ports that are dedicated for Virtual I/O Server vSCSI use outside of PowerVC must have Connectivity set to NPIV or NONE on the Fibre Channel Port Configuration page in PowerVC.
- The VMAX driver lets you provision volumes to oversubscribed pools. The amount of oversubscription allowed is limited by the pool configuration on the VMAX. It is not limited by the Cinder max_over_subscription_ratio configuration option because PowerVC does not support pool scheduling where the option is used. The Available Capacity reported by the VMAX provider is the raw capacity available. It is not the unsubscribed capacity that would be left if the thin volumes were fully utilizing their allocated sizes.
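The distinction between raw available capacity and unsubscribed capacity can be made concrete with a small worked example. The numbers below are made up for illustration: a thin-provisioned pool that is oversubscribed beyond its physical size.

```python
# Hypothetical pool figures (GB).
pool_total_gb = 10_000   # physical pool capacity
subscribed_gb = 18_000   # sum of thin volume allocated sizes (oversubscribed)
used_gb = 6_000          # physical space actually consumed so far

# What the VMAX provider reports as Available Capacity: the raw free space.
available_gb = pool_total_gb - used_gb            # 4000 GB

# What it does NOT report: the capacity that would remain if every thin
# volume were fully written. Here that figure is negative (oversubscribed).
unsubscribed_gb = pool_total_gb - subscribed_gb   # -8000 GB

print(available_gb, unsubscribed_gb)  # -> 4000 -8000
```

In other words, a provider can report plenty of Available Capacity while the pool is already committed well past its physical size; the array-side pool configuration, not PowerVC, limits further oversubscription.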
- Port group considerations
Port groups are named groupings of storage ports defined on the array.
- Multiple PowerVC instances cannot manage the same storage and hosts if they have different storage port group configurations.
- If the port group configuration is changed for the provider after deploying to a host with a vSCSI attached volume, subsequent volume attachments use the port group that is chosen for the initial deployment, whether it was for the same virtual machine or for a different virtual machine that uses the same VIOS. If that port group is not valid, an error is logged and the operation attempts to continue. If the port group does not have valid storage ports with connectivity to the virtual machine, then the deploy, attach, migrate, or remote restart operation fails.
- For every attach operation, a VMAX port group is chosen in order to load balance between sets of front-end VMAX ports. See the information about port groups in Table 1. You might have multiple port groups if a significant number of VMAX front-end ports are cabled to your Fibre Channel fabrics.
- You can select a list of port groups on the storage template. When a virtual machine is deployed with that storage template, the port group with the fewest masking views is used to form the masking view. This applies only to the first volume of a specific connectivity type (NPIV, vSCSI) that is attached to a virtual server (NPIV) or VIOS (vSCSI); subsequent volumes are assigned to existing masking views.
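The selection rule above amounts to picking the least-loaded group. The following sketch shows that choice, assuming you already know the masking view count per port group (the names and counts are illustrative):

```python
def choose_port_group(masking_view_counts):
    """Pick the port group with the fewest masking views.

    masking_view_counts maps port group name -> number of masking views
    currently using that group on the array.
    """
    return min(masking_view_counts, key=masking_view_counts.get)

counts = {"PVC_PG_A": 12, "PVC_PG_B": 7, "PVC_PG_C": 9}
print(choose_port_group(counts))  # -> PVC_PG_B
```

Because the choice is made only for the first volume of each connectivity type, later attachments reuse the masking view (and therefore the port group) that was picked at that point.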
- Port group configuration
The term port group configuration refers to the list of port groups associated with a registered VMAX provider. This is the list of port groups that the storage provider can choose from when mapping a volume to a virtual machine.
Note: A front-end Director port on the VMAX can be a member of multiple port groups.
- A port group configuration is global for a storage provider.
- For new registrations, the auto-configuration property is set to true. However, it has no effect if no fabrics are registered. When this property is true, each time the provider service restarts, including when fabrics are managed or unmanaged, PowerVC evaluates the port groups defined on the array and filters out any that do not contain any ports that are visible on a registered fabric.
- For VMAX providers that were registered in a PowerVC version prior to 1.4.1, the initial port
group configuration consists of all possible storage port groups that were defined on the backend
array at the time of registration. The auto-configuration property is not set to
true. Therefore, port groups that might be invalid for host connectivity are not removed from the list.
- You can manually enable or disable the auto-configuration property for any VMAX provider by running powervc-config storage portgroup --arrayid <vmax_serial> --auto-config {true|false} --restart.
- To manually restrict PowerVC from using certain port groups or to add port groups to PowerVC, use the powervc-config storage portgroup command. Most of the command's options either require the auto-configuration property to be false, or automatically set it to false so that your port group configuration changes are not overwritten at the next service restart.
For detailed instructions to use this command, run powervc-config storage portgroup --help.
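The auto-configuration filtering described above can be sketched as a simple set intersection: keep only the port groups that share at least one port with a registered fabric. All names below are illustrative.

```python
def filter_port_groups(port_groups, fabric_visible_ports):
    """Keep port groups that have at least one port visible on a registered fabric."""
    return {
        name: ports
        for name, ports in port_groups.items()
        if ports & fabric_visible_ports  # non-empty intersection
    }

# Hypothetical configuration: LEGACY_PG is not cabled to any registered fabric.
port_groups = {
    "PVC_PG_A": {"FA-1D:4", "FA-2D:4"},
    "LEGACY_PG": {"FA-9D:0"},
}
visible = {"FA-1D:4", "FA-2D:4", "FA-3D:4"}
print(sorted(filter_port_groups(port_groups, visible)))  # -> ['PVC_PG_A']
```

This is why the property has no effect when no fabrics are registered: with an empty visible-port set, there is nothing to intersect against, so PowerVC leaves the list alone rather than filtering everything out.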
- Volume considerations
- The VMAX volume driver supports volume replication from the
registered provider array to a target failover array. The SRDF replication group and other required
resources must be configured outside of PowerVC. To enable a VMAX storage provider to use the SRDF group, run the powervc-config storage replication command. For instructions to use this command, run powervc-config storage replication -h.
If you enable SRDF replication support and later disable the support, you might not be allowed to perform storage operations on the volumes that were created with a replication-enabled storage template while replication was enabled.
For more information about the Dell EMC support, see the Volume replication support section on the Dell EMC VMAX iSCSI and FC drivers page. You can also refer to the VMAX3 Volume Replication blog.
- Only a single SRP (Storage Resource Pool) is supported on VMAX/PowerMax by the EMC driver. This is the pool selected when initially managing the VMAX/PowerMax array.
- VMAX volumes have device IDs that are unique hexadecimal strings, such as 0B9A2. This value is listed on the PowerVC volume details page as the storage provider volume ID. When you choose to manage existing volumes from a VMAX provider, the volume names might be prefixed by this device ID, for example, 0B9A2+My Volume Name. This correlation is helpful for storage administrators.
- When creating a new volume on a VMAX storage provider, the block device
driver supports only the following characters in the name:
0-9 A-Z a-z (space) ! # , - . @ _
The PowerVC user interface and APIs accept other characters in volume names, including double-byte characters. However, any characters not in the above list are changed to underscores for the storage provider volume name.
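The substitution rule above can be sketched as a single regular expression: every character outside the supported set becomes an underscore in the backend volume name. The function name is illustrative; it is not part of the PowerVC API.

```python
import re

# Characters the VMAX block device driver supports in a volume name,
# per the list above: digits, letters, space, and ! # , - . @ _
UNSUPPORTED = r"[^0-9A-Za-z !#,\-.@_]"

def backend_volume_name(display_name):
    """Map a PowerVC display name to the name used on the storage provider."""
    return re.sub(UNSUPPORTED, "_", display_name)

print(backend_volume_name("db2:prod/データ"))  # -> db2_prod____
print(backend_volume_name("My Vol#1!"))       # unchanged: all characters supported
```

Double-byte characters are accepted by the PowerVC interface, so the display name and the backend name can differ substantially after substitution.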
- Deployed or captured volume considerations
If you use a dot (.) in the first 14 characters of the display name of a virtual machine, the information before the dot must be unique to guarantee successful deployment.
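A short sketch of the naming rule above: if a dot appears in the first 14 characters of a display name, the text before the dot is the part that must be unique. The helper below flags colliding names; it is illustrative, not a PowerVC utility.

```python
def dot_prefix(display_name):
    """Return the text before the first dot in the first 14 characters, if any."""
    head = display_name[:14]
    return head.split(".", 1)[0] if "." in head else None

def conflicting_names(names):
    """Return the set of VM display names whose dot prefixes collide."""
    seen, conflicts = {}, set()
    for name in names:
        prefix = dot_prefix(name)
        if prefix is None:
            continue  # no dot in the first 14 characters; rule does not apply
        if prefix in seen:
            conflicts.update({name, seen[prefix]})
        else:
            seen[prefix] = name
    return conflicts

names = ["web01.example.com", "web01.backup.com", "app-server-long.x"]
print(sorted(conflicting_names(names)))  # both web01.* names collide
```

Here both web01 names reduce to the prefix web01, so deploying both is not guaranteed to succeed, while the third name has no dot within its first 14 characters and is unaffected.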
- Storage template setting considerations
- Service Level: The service level on the VMAX defines quality of service metrics for the workload, such as response time performance targets. See the VMAX white paper EMC VMAX3 Family New Features Overview for details. The default is None. If you specify a Service Level other than None, there might be additional considerations. Service levels are more influential on older hybrid array types that have a combination of flash and spinning disk.
- Workload: Not every Service Level and Workload combination is supported for every PowerMax array; it depends on configuration and licensing. There are additional considerations depending on your microcode level. For example, microcode 5978 does not support Workload levels as 5977 did, so if you have that level or later, choose None.
Converting legacy masking views and storage groups
To see the help for the migration script, run:

python /usr/lib/python3.6/site-packages/powervc_cinder/cmd/samples/vmax_util.py -s 000196800573 migrate-cascaded -h

This migration can happen while the virtual machine is in Active state. The script has the following possible inputs:
- -h
  Show the help.
- -i VM_ID [VM_ID ...], --vms VM_ID [VM_ID ...]
  IDs of one or more virtual servers to migrate. This option cannot be used with -l.
- -l
  Output the list of virtual servers that have legacy storage groups in their masking views. This option cannot be used with -i.
- Continue with the operation even if a validation failure is encountered.
Example usage: The following command returns a list of virtual servers that can be migrated:
python /usr/lib/python3.6/site-packages/powervc_cinder/cmd/samples/vmax_util.py -s 000196800573 migrate-cascaded -l
The following command migrates a specific virtual server:

python /usr/lib/python3.6/site-packages/powervc_cinder/cmd/samples/vmax_util.py -s 000196800573 migrate-cascaded -i 7b4b608b-aea3-4a4b-a878-ab9d73e6e781