IBM Support

Guidelines for Configuring EMC VNX Systems with SAN Volume Controller and Storwize

Preventive Service Planning


Abstract

This document details the guidelines for configuring EMC VNX systems with SAN Volume Controller and Storwize V7000 and V5000.

Content

Configuring EMC VNX Systems



This portion of the document covers the necessary configuration for EMC VNX storage systems with an IBM SAN Volume Controller.

Note: EMC CLARiiON CX systems are earlier model equivalents; these general settings also apply to them.

Support models of EMC VNX

Different models of the EMC VNX are supported for use with the IBM SVC from SVC 6.4.x.x onwards.

The VNX family of storage systems consists of the following models:

- VNX1: VNX5100, VNX5300, VNX5500, VNX5700, VNX7500
- VNX2: VNX5400, VNX5600, VNX5800, VNX7600, VNX8000

EMC VNX LUNs larger than 2TB are supported from SVC 7.3.0.5 onwards.
See: http://www-01.ibm.com/support/docview.wss?uid=ssg1S1003658

Support firmware levels of EMC VNX

SVC supports VNX OE R31 (05.31) and higher controller firmware.

VNX OE R31 (05.31) and R32 (05.32) software runs only on VNX1 (VNX5100, VNX5300, VNX5500, VNX5700, and VNX7500). VNX OE R33 (05.33) and higher software only runs on the following VNX2 models: VNX5400, VNX5600, VNX5800, VNX7600, and VNX8000.

See the following website for the specific firmware levels and supported hardware of a release:
http://www-01.ibm.com/support/docview.wss?uid=ssg1S1005253#_EMC

Concurrent maintenance on EMC VNX

Concurrent firmware upgrades (NDU) are supported for all VNX storage systems as per EMC VNX procedures.

EMC VNX LUN user interfaces

EMC Unisphere provides web-based remote management.

CLI access is also available via NaviCLI from an installed host.


EMC VNX LUN configuration

Storage is provisioned from the VNX as LUNs, which appear as managed disks (MDisks) on the SVC. The SVC uses these MDisks to create storage pools (MDisk groups), from which SVC volumes (VDisks) are provisioned for use by hosts or for tiering.
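A minimal sketch of this flow from the SVC CLI is shown below. The pool name (VNX_POOL1), volume name, extent size, and MDisk names are illustrative assumptions, not values from this document.

# Discover the newly presented VNX LUNs and list the unmanaged candidates
svctask detectmdisk
svcinfo lsmdiskcandidate
# Create a storage pool (MDisk group) and add the new MDisks to it
svctask mkmdiskgrp -name VNX_POOL1 -ext 256
svctask addmdisk -mdisk mdisk0:mdisk1 -mdiskgrp VNX_POOL1
# Provision a 100 GB volume (VDisk) from the pool for host use
svctask mkvdisk -mdiskgrp VNX_POOL1 -iogrp 0 -size 100 -unit gb -name vnx_vol01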



SVC Controller Recognition

Example of an EMC VNX controller under SVC:
# svcinfo lscontroller
id controller_name ctrl_s/n vendor_id product_id_low product_id_high
0 controller0 01543BC1B7CL DGC VRAID
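
If required, the detected controller can be renamed to something more descriptive; a minimal sketch (the name VNX01 is illustrative):

# Rename controller 0 to a meaningful name
svctask chcontroller -name VNX01 0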

Logical units and target ports on EMC VNX

EMC LUN Limits for VNX1 with VNX OE R31 (05.31) and R32 (05.32)

Capacity                              VNX5100   VNX5300   VNX5500   VNX5700   VNX7500
Minimum (1MB)                         1 Block   1 Block   1 Block   1 Block   1 Block
Pool LUN size (max)                   16TB      16TB      16TB      16TB      16TB
LUNs per pool (max)                   512       512       1024      2048      2048
Pool LUNs per storage system (max)    512*      512*      1024*     2048*     2048*

EMC LUN Limits for VNX2 with VNX OE R33 (05.33)

Capacity                              VNX5400   VNX5600   VNX5800   VNX7600   VNX8000
Minimum (1MB)                         1 Block   1 Block   1 Block   1 Block   1 Block
Pool LUN size (max)                   256TB     256TB     256TB     256TB     256TB
LUNs per pool (max)                   1000      1100      2100      3000      4000
Pool LUNs per storage system (max)    1000*     1100*     2100*     3000*     4000*
* The EMC VNX can export a maximum of 256 LUNs to the SVC controller (an SVC limit).


For regular RAID groups, the maximum LUN size is limited only by the size of the drives and the maximum number of drives in the group (based on the RAID level). A RAID group LUN can therefore be larger than 16 TB; larger LUNs can be configured provided the host operating system can consume them.

For pool LUNs, the sizes are limited to the EMC maximums shown above.

LUN Mapping

LUNs greater than 2TB are supported as MDisks with SVC. SVC has a maximum LUN size of 1PB.

LUN IDs

The EMC VNX identifies exported logical units through SCSI Identification Descriptor type 3.
The 128-bit NAA IEEE Registered Extended Identifier (NAA=6) for the logical unit is of the form 6-OUI-VSID.
The EMC VNX IEEE Company ID (OUI) is 0x006016; the remainder is the vendor-specific ID.

Example of an EMC VNX LUN as an SVC MDisk:
svcinfo lsmdisk 0
id 0
name mdisk0
status online
mode unmanaged
mdisk_grp_id
mdisk_grp_name
capacity 300.0GB
quorum_index
block_size 512
controller_name controller0
ctrl_type 4
ctrl_WWNN 50060160BEA05C5F
controller_id 0
path_count 2
max_path_count 2
ctrl_LUN_# 0000000000000000
UID 6006016005e03000e4398c38d252e41100000000000000000000000000000000
preferred_WWPN 500601693EA05C5F
active_WWPN 500601693EA05C5F
fast_write_state empty
raid_status
raid_level
redundancy
strip_size
spare_goal
spare_protection_min
balanced
tier enterprise
slow_write_priority
fabric_type fc
site_id
site_name
easy_tier_load low
encrypt no
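
The UID above begins with the VNX OUI (6006016), so an SVC MDisk can be matched against the LUN WWN reported by Unisphere. A minimal sketch, assuming SSH access to the SVC cluster from a management host (the cluster alias svccluster is an assumption):

# List MDisks in concise colon-delimited form; keep the header plus any VNX-backed entries
ssh admin@svccluster 'svcinfo lsmdisk -delim :' | awk -F: 'NR==1 || $0 ~ /6006016/'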




Configuring the EMC VNX for IBM SVC

Settings are left at their defaults unless otherwise specified.

To modify, resize, or delete a LUN

The LUN must be removed from any SVC MDisk group and be in an “unmanaged” state before any modifications are made on the controller.

Note: Make sure the MDisk is unmanaged (removed from any MDisk group) on the SVC Cluster before deleting the LUN on the EMC VNX storage system.

Note: Do not use array expansion on LUNs that are in use by a SAN Volume Controller cluster. For an expansion to be recognised by SVC, the MDisk must first be made unmanaged by migrating its extents or removing it from the MDisk group.
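
A minimal sketch of taking an MDisk out of its storage pool before the LUN is changed on the VNX (the pool and MDisk names are illustrative; rmmdisk -force migrates any used extents to other MDisks in the pool first, so sufficient free capacity is required):

# Migrate extents off and remove the MDisk from its storage pool
svctask rmmdisk -mdisk mdisk0 -force VNX_POOL1
# Confirm the MDisk shows mode unmanaged before modifying or deleting the LUN on the VNX
svcinfo lsmdisk mdisk0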

LUN presentation

LUNs are exported through the EMC VNX storage system’s available FC ports. The SVC ports must be registered and assigned to a VNX host definition, and the LUNs assigned to that host.
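
For illustration only, a hedged sketch of presenting LUNs from the Navisphere/VNX CLI is shown below. The SP hostname, storage group name, host name, and HLU/ALU numbers are assumptions; check the EMC documentation for the exact syntax at your VNX OE level.

# Create a storage group for the SVC cluster and add LUN 10 to it as HLU 0
naviseccli -h spa_hostname storagegroup -create -gname SVC_CLUSTER
naviseccli -h spa_hostname storagegroup -addhlu -gname SVC_CLUSTER -hlu 0 -alu 10
# Connect the registered SVC host definition to the storage group
naviseccli -h spa_hostname storagegroup -connecthost -host SVC_HOST -gname SVC_CLUSTER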

Special LUNs

There are no special considerations for logical unit numbering. LUN 0 may be exported where necessary.

Target Ports

An EMC VNX storage system can provide up to 72 Fibre Channel ports.
SVC supports a limit of 16 ports per controller.

These are presented as a single WWNN with multiple WWPNs (a one-to-many relationship).

Example of a VNX controller under SVC showing one WWNN and four WWPNs:
# svcinfo lscontroller 0
id 0
controller_name controller0
WWNN 50060160BEA05C5F
mdisk_link_count 10
max_mdisk_link_count 10
degraded no
vendor_id DGC
product_id_low VRAID
product_id_high
product_revision 0532
ctrl_s/n 01543BC1B7CL
allow_quorum yes
fabric_type fc
site_id
site_name
WWPN 500601613EA05C5F
path_count 6
max_path_count 6
WWPN 500601603EA05C5F
path_count 4
max_path_count 4
WWPN 500601693EA05C5F
path_count 6
max_path_count 6
WWPN 500601683EA05C5F
path_count 4
max_path_count 4

LU access model

VNX controllers with VNX OE R31 (05.31) and R32 (05.32) software are Asymmetric Active/Active. VNX controllers with VNX OE R33 (05.33) software support Symmetric Active/Active for classic LUNs (LUNs not bound from a pool) but do not support Symmetric Active/Active for pool LUNs; pool LUNs default to Asymmetric Active/Active. In all cases, it is recommended to cross-connect the ports across FC switches to avoid an outage from a controller failure.
All EMC VNX controllers are equal in priority so there is no benefit to using an exclusive set for a specific LU.

LU preferred access port

There are no preferred access ports on the EMC VNX as all ports are asymmetric active/active across all controllers.

Configuration settings for EMC VNX storage system

The initiators need to be registered under:
System Name > Hosts > Initiators.

Settings:
Initiator type: CLARiiON/VNX
Failover mode: Active-Active (ALUA) failover mode 4
Note: Legacy failover mode 2 can also be used.

Assign all the Initiators to the same SVC host definition.

The setup is straightforward; SVC will see the controller even without a LUN being presented. Run svctask detectmdisk to have SVC search the fabric. To verify, run svcinfo lsdiscoverystatus, which shows inactive when the scan has completed.
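
For example:

# Rescan the fabric, confirm the discovery has finished, then check that the controller is visible
svctask detectmdisk
svcinfo lsdiscoverystatus
svcinfo lscontroller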




Switch zoning limitations for VNX

There are no zoning limitations for EMC VNX.

The EMC VNX system presents itself by default to a SAN Volume Controller system as a single WWNN with a WWPN for each port zoned to the SAN Volume Controller. For example, if one of these storage systems has 4 ports zoned to the SAN Volume Controller, it appears as a controller with 4 WWPNs. A given logical unit (LU) must be mapped to the SAN Volume Controller through all controller ports zoned to the SAN Volume Controller using the same logical unit number (LUN).

The controller must be separately zoned to SVC ports for exclusivity requirements.


Fabric zoning

Dual fabrics are the minimum recommended configuration for robustness.

Controllers should be zoned individually. SVC must have unique access to the LUNs.

For robustness, the EMC controller fabric zone should contain all the SVC ports and at least one port from each of the EMC VNX storage system's controllers.
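
For illustration only, a hedged sketch of such a zone on a Brocade fabric follows; the alias names, zone name, configuration name, and the SVC WWPNs are invented for the example (the VNX WWPNs are taken from the controller listing earlier in this document).

# Alias the SVC node ports and one port from each VNX storage processor
alicreate "SVC_NODE_PORTS", "50:05:07:68:01:40:aa:01; 50:05:07:68:01:40:aa:02"
alicreate "VNX_SPA_P0", "50:06:01:60:3e:a0:5c:5f"
alicreate "VNX_SPB_P0", "50:06:01:68:3e:a0:5c:5f"
# Zone them together and activate the updated configuration
zonecreate "SVC_VNX_ZONE", "SVC_NODE_PORTS; VNX_SPA_P0; VNX_SPB_P0"
cfgadd "PROD_CFG", "SVC_VNX_ZONE"
cfgenable "PROD_CFG"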

Target port sharing

The EMC VNX storage system supports LUN masking to enable multiple servers to access separate LUNs through a common controller port.
Explicit masking must be used with SVC to guarantee the necessary unique LUN access.


Host splitting

Host splitting can be used (hosts connecting both to SVC and directly to the controller); however, SVC must have sole access to its LUNs. There may be MPIO interaction considerations; see the SVC support website for details of supported host multipathing.


Sharing the EMC VNX between a host and the SAN Volume Controller

The sharing of the VNX controller is supported. SVC does not need exclusive access to the EMC VNX system.

Explicit mapping is required so that the LUNs used by SVC remain exclusive to it.

Note: SVC is not aware of any other use of the controller, so performance may be affected.

Quorum disks on EMC VNX

The SAN Volume Controller system can select MDisks that are presented by the EMC VNX storage system as quorum disks. To maintain availability with the cluster, ideally each quorum disk should reside on a separate disk subsystem.
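
The current quorum disk assignments can be checked from the SVC CLI; a minimal sketch:

# List the quorum disks and which controller backs each one
svcinfo lsquorum
# (quorum placement can be changed with svctask chquorum if required)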

Clearing SCSI reservations and registrations

This should never be done; LUNs must be exclusive to SVC.

Advanced Functions of the EMC VNX storage system

Copy Functions
The EMC VNX's replication and snapshot features are not supported with IBM SVC.

Pools
Storage Pools are supported.

Thin Provisioning (Oversubscribing)
Thin LUNs are supported. Care must be taken that the LUNs do not become over-allocated, or SVC will take the MDisk and its MDisk group offline until the condition is corrected.
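
A minimal sketch of identifying this condition and recovering once space has been freed on the VNX pool (the MDisk name is illustrative; includemdisk re-admits an MDisk that SVC has excluded):

# Identify any offline MDisks
svcinfo lsmdisk -filtervalue status=offline
# After correcting the out-of-space condition on the VNX, bring the MDisk back online
svctask includemdisk mdisk0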

Array increase capacity

The array increase capacity option is supported, but the new capacity is not usable until the MDisk is removed from and re-added to the SVC storage pool. You might have to migrate data to make use of the increased capacity.
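
A hedged sketch of that sequence after the LUN has been expanded on the VNX (the names are illustrative; rmmdisk -force migrates used extents to other MDisks in the pool, so enough free capacity is required):

# Remove the MDisk from its pool so that the new size can be detected
svctask rmmdisk -mdisk mdisk0 -force VNX_POOL1
# Rescan, confirm the new capacity, then re-add the MDisk to the pool
svctask detectmdisk
svcinfo lsmdisk mdisk0
svctask addmdisk -mdisk mdisk0 -mdiskgrp VNX_POOL1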

Deduplication

Deduplication can be used, but SVC has no visibility of it and the results may not be as expected.



Storage processor cache settings

For VNX Arrays with VNX OE R33 (05.33), Storage Processor memory configuration is not required. Memory allocation amounts and cache page size are not configurable parameters.

For VNX arrays with VNX OE R31 (05.31) and R32 (05.32), follow EMC's best practices and general recommendations:
It is always more important to maximize the storage system's write cache size.

- Allocate system memory first, then allocate the rest of the memory to the read/write cache.
- It is advisable to set read cache to roughly 10 percent of available cache; 200 MB is the recommended minimum and 1024 MB is the recommended maximum. However, it is always important to maximize the storage system's write cache size.
- If performance is suffering from a small write cache, lower the write cache watermarks to free up cache pages more quickly to make up for the small write cache. Allocate all of the available memory to be used as write cache, which means operating without read cache.
- Not having a read cache allocation will adversely affect the storage system's sequential read performance, but having a larger write cache will have a more positive effect on overall system performance.
- Note that write cache allocation applies to both SPs; it is mirrored. Read cache is not mirrored.
- Because of write cache mirroring, each of the two SPs receives the same allocation taken from memory. For example, if you allocate 1 GB to write cache, 2 GB of the available system memory is consumed.
- When you allocate memory for read cache, you are only allocating it on one SP. SPs always have the same amount of write cache, but they may have different amounts of read cache.
- To use the available SP memory most efficiently, ensure the same amount of read cache is allocated to both storage processors (SPA and SPB).
- Very few workloads benefit from very large read caches. Read cache facilitates pre-fetching and does not have to be large. Increase read cache above the recommended value only if you know you have multiple read-intensive applications.

Note: in order to make changes to the cache settings, the cache must be disabled. LUNs remain accessible, but performance is degraded; schedule this change appropriately.

[{"Product":{"code":"STPVGU","label":"SAN Volume Controller"},"Business Unit":{"code":"BU058","label":"IBM Infrastructure w\/TPS"},"Component":"7.5","Platform":[{"code":"PF025","label":"Platform Independent"}],"Version":"7.1;7.2;7.3;7.4;7.5","Edition":"","Line of Business":{"code":"LOB26","label":"Storage"}}]

Document Information

Modified date:
17 June 2018

UID

ssg1S1005317