
V7.6 Configuration Limits and Restrictions for IBM Storwize V5000

Preventive Service Planning


Abstract

This document lists the configuration limits and restrictions specific to IBM Storwize V5000, V5010, V5020 and V5030 software version 7.6.x.

Content

V5010, V5020 and V5030 are supported at version 7.6.1.0 and higher.

The use of WAN optimization devices such as Riverbed is not supported in native Ethernet IP partnership configurations containing Storwize V5000.

DRAID Strip Size
For candidate drives with a capacity greater than 4 TB, a strip size of 128 cannot be specified for either RAID-5 or RAID-6 DRAID arrays. For these drives, a strip size of 256 should be used.
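As a hedged illustration of specifying the larger strip size when creating a RAID-6 distributed array from such drives, the strip size can be set explicitly on the mkdistributedarray command; the drive class ID, drive count, stripe width, rebuild areas and pool name below are placeholders to be replaced with values from your own configuration:

  mkdistributedarray -level raid6 -driveclass 0 -drivecount 40 -stripewidth 12 -rebuildareas 1 -strip 256 Pool0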

Quorum restriction with version 7.6.1.0
Customers are advised not to force stop and then immediately restart multiple quorum applications in quick succession (e.g. with CTRL+C followed by starting the applications again), especially when using three or more quorum applications. This action may cause the Storwize cluster nodes to restart. Customers are advised to wait a minimum of one minute between restarting each quorum application instance.
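As an illustration of the recommended pacing, and assuming the IP quorum application was generated as ip_quorum.jar (the usual output of the mkquorumapp command) and is run with Java on the quorum host, a restart of one instance could be paced as follows:

  # stop the running quorum application instance (for example with CTRL+C), then:
  sleep 60                  # wait at least one minute before restarting this instance
  java -jar ip_quorum.jar   # restart the quorum application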

This restriction will be removed in a future release of V5000 Software.

Applying drive software update to all drives in a system with a distributed array containing more than 16 drives

Please do not update drive software on all drives together in a system that has a distributed array with more than 16 drives.

The following actions perform a drive software upgrade on all drives together and are restricted in this case:
- Using the applydrivesoftware command with the "-all" parameter.
- Using the "Upgrade All" option in the GUI under "Pools > Internal Storage > Actions".

To work around this restriction, please perform the drive software update on one drive at a time.
You can do this by running the applydrivesoftware command with the "-drive" parameter for each drive, as in the sketch below.
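A minimal sketch of this one-drive-at-a-time approach, run from a management host over SSH, is shown below; the user name, cluster address and drive package file name are placeholders, and you should confirm that each drive has completed its update (for example by checking its status with lsdrive) before moving on to the next one:

  # apply the drive package to one drive at a time (placeholders in angle brackets)
  for drive_id in $(ssh superuser@<cluster_ip> 'lsdrive -nohdr' | awk '{print $1}'); do
      ssh superuser@<cluster_ip> "applydrivesoftware -file <drive_package> -type firmware -drive ${drive_id}"
      # check the drive with lsdrive and wait for its update to complete
      # before continuing with the next drive
  done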

This restriction is now resolved with software version 7.6.0.4.

iSCSI Limitations

iSCSI host attachment is currently not supported with Software Version 7.6.

This restriction is now resolved and iSCSI host attachment is supported with software version 7.6.0.2 and higher.

Due to the requirement for multiple access IO groups, iSCSI attached host types are not supported with HyperSwap volumes.

HyperSwap

HyperSwap is not supported with Software Version 7.6.0.

This restriction is now resolved and HyperSwap is supported with software version 7.6.0.1 and higher.

When using the HyperSwap function with software version 7.6.0.1 and higher, please configure your host multipath driver to use an ALUA-based path policy.
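For Linux hosts using the native device-mapper multipath driver, an ALUA-based policy is typically expressed with the "alua" priority handler and priority-based path grouping. The stanza below is a minimal illustrative sketch only, not the authoritative IBM recommendation; refer to the host attachment documentation for the exact settings for your operating system and multipath driver:

  # /etc/multipath.conf (illustrative extract)
  devices {
      device {
          vendor "IBM"
          product "2145"
          path_grouping_policy "group_by_prio"   # group paths by ALUA priority
          prio "alua"                            # use ALUA to determine path priority
          path_checker "tur"
          failback "immediate"
      }
  }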

Due to the requirement for multiple access IO groups, iSCSI and SAS attached host types are not supported with HyperSwap volumes.

AIX Live Partition Mobility (LPM)

AIX LPM is now supported with the HyperSwap function and AIX 7.x.

New CLI Commands

The following new CLI commands are not currently supported on Software Version 7.6:

* addvolumecopy
* rmvolumecopy

Attempting to use a feature which is not supported will return the following error:
CMMVC7205E The command failed because it is not supported.

Support for these commands will be enabled in a future software release.


Clustered Systems

A Storwize V5000 system at version 7.6 and higher requires native Fibre Channel SAN or alternatively 8Gbps/16Gbps Direct Attach Fibre Channel connectivity for communication between all nodes in the local cluster. Fibre Channel over Ethernet (FCoE) connectivity for communication between all nodes in the local cluster is also supported.

Partnerships between systems for Metro Mirror or Global Mirror replication can use Fibre Channel, native Ethernet (IP) or FCoE connectivity; however, direct FCoE links are only supported up to a maximum of 300 metres. Distances greater than 300 metres are only supported when using an FCIP link or Fibre Channel between source and target.

Direct Attachment

For information on supported configurations using direct attachment, please see the following document:

Direct Attachment of Storwize and SAN Volume Controller Systems

16Gbps Fibre Channel Canister Connection

Please see the SSIC for the 16Gbps Fibre Channel configurations supported with 16Gbps node hardware. Note that 16Gbps node hardware is supported only when connected to Brocade and Cisco 8Gbps or 16Gbps fabrics. Direct connections to a 2Gbps or 4Gbps SAN, or direct host attachment to 2Gbps or 4Gbps ports, are not supported. Other configured switches which are not directly connected to the 16Gbps node hardware can be any supported fabric switch as currently listed in the SSIC.


IP Partnership

Using an Ethernet switch to convert a 10Gbps IP partnership link to a 1Gbps link, or vice versa, is not supported. Therefore, the IP infrastructure at both partnership sites should be either 1Gbps or 10Gbps. However, bandwidth limiting on 10Gbps and 1Gbps IP partnerships between sites is supported.
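As a hedged illustration of bandwidth limiting on an IP partnership, a link bandwidth can be specified when the partnership is created; the address and rates below are placeholders, and the exact parameters should be verified against the CLI reference for your code level:

  mkippartnership -type ipv4 -clusterip <remote_cluster_ip> -linkbandwidthmbits 1000 -backgroundcopyrate 50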

Fabric Limitation

Only one FCF (Fibre Channel Forwarder) switch per fabric is supported.

VMware vSphere Virtual Volumes (VVols)

The maximum number of Virtual Machines on a single VMware ESXi host in a Storwize / VVol storage configuration is limited to 680.

The use of VMware vSphere Virtual Volumes (VVols) on a system that is configured for HyperSwap is not currently supported with SVC/Storwize.

DS4000 Maintenance

Storwize V5000 supports concurrent ESM firmware upgrades for those DS4000 models listed as such on the Supported Hardware List when they are running either 06.23.05.00 or later controller firmware. However, controllers running firmware levels earlier than 06.23.05.00 will not be supported for concurrent ESM upgrades. Customers in this situation, who wish to gain support for concurrent ESM upgrades, will need to first upgrade the DS4000 controller firmware level to 06.23.05.00. This action is a controller firmware upgrade, not an ESM upgrade and concurrent controller firmware upgrades are already supported in conjunction with Storwize V5000. Once the controller firmware is at 06.23.05.00 or later the ESM firmware can be upgraded concurrently.

Note: The DS4000 ESM firmware upgrade must be done on one disk expansion enclosure at a time. A 10 minute delay from when one enclosure is upgraded to the start of the upgrade of another enclosure is required. Confirm via the Storage Manager application's "Recovery Guru" that the DS4000 status is in an optimal state before upgrading the next enclosure. If it is not, then do not continue ESM firmware upgrades until the problem is resolved.


Host Limitations

Microsoft Offload Data Transfer (ODX) and SDDDSM Requirements
Storwize V5000 version 7.5.0 introduced support for Microsoft ODX. In order to utilise this function, all Windows hosts accessing Storwize V5000 are required to be at SDDDSM version 2.4.5.0 or later. Earlier versions of SDDDSM are not supported when the ODX function is activated.

Windows NTP server
The Linux NTP client used by Storwize V5000 may not always function correctly with the Windows W32Time NTP server.


TPC Restrictions

TPC Supported Versions
The minimum supported TPC version for SVC/Storwize version 7.6 is TPC 5.2.6.

Vdisk Migration
When a vdisk is migrated between a child pool and its parent, or between child pools of the same parent, it is necessary to re-probe the cluster from TPC, as no event is logged on the cluster for this type of migration.

Migrations between non-child pools also currently require a re-probe.

These issues will be addressed in future releases of TPC and SVC/Storwize code.


Oracle
 
Oracle Version and OS: Oracle RAC 10g on Linux hosts
Restrictions that apply: Restriction 1
Restriction 1: For RHEL4, set the Oracle Clusterware 'misscount' parameter to a larger value to allow SDD to complete path failover first. The default misscount setting of 60 seconds is too short for SDD; we recommend setting it to 90 or 120 seconds. Command to use: crsctl set css misscount 90
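For example, the current value can be checked before it is raised; the exact crsctl syntax may differ between Clusterware releases, so treat this as an illustrative sketch:

  crsctl get css misscount      # display the current misscount value
  crsctl set css misscount 90   # raise misscount to 90 seconds (run as root on a cluster node)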


Priority Flow Control for iSCSI

Priority Flow Control for iSCSI is supported on Brocade VDX 10-gigabit Ethernet switches only.


SCSI LUN ID 0
SAS hosts running Linux or VMware operating systems:

Removal of LUNs mapped to SCSI ID 0 is not supported and may result in a loss of access to the remaining LUNs.


Maximum Configurations

Configuration limits for Storwize V5000:

Maximum WWPNs per host object and iSCSI names

The maximum number of FC/SAS/FCoE host ports supported per host object is 32.
The maximum number of iSCSI names supported per host object is 8.

Please see the following document

http://www-01.ibm.com/support/docview.wss?uid=ssg1S1005261

 
Property
Maximum Number
Comments
System (Cluster) Properties
Control enclosures per system (cluster)
2
Each control enclosure contains two node canisters
Nodes per system
4
Arranged as two I/O groups
Nodes per fabric
64
Maximum number of SVC and V5000 nodes that can be present on the same Fibre Channel fabric, with visibility of each other
Fabrics per system
6
The number of counterpart Fibre Channel SANs which are supported
- Up to 4 fabrics using native Fibre Channel ports
- Up to 2 fabrics using FCoE ports
Inter-cluster partnerships per system
3
A system may be partnered with up to three remote systems. No more than four systems may be in the same connected set.
A maximum of 1 IP partnership is supported per system.
USB ports
2 to 16
IP Quorum devices per system
5
Node Properties
Logins per node Fibre Channel port
512
Includes logins from server HBAs, disk controller ports, node ports within the same system and node ports from remote systems
Fibre Channel buffer credits per port - 8Gbps FC Adapter
255
The number of credits granted by the switch to the node
Fibre Channel buffer credits per port - 16Gbps FC Adapter
4095
The number of credits granted by the switch to the node
iSCSI sessions per node
1024
2048 in IP failover mode (when partner node is unavailable)
Managed Disk Properties
Managed disks (MDisks) per system
4096
The maximum number of logical units which can be managed by a system, including internal arrays.

Internal distributed arrays consume 16 logical units.

This number also includes external MDisks which have not been configured into storage pools (managed disk groups)
Managed disks per storage pool (managed disk group)
128
Internal distributed arrays consume 16 logical units.
Storage pools per system
1024
Parent pools per system
128
Child pools per system
1023
Managed disk extent size
8192 MB
Capacity for an individual internal managed disk (array)
1 PB
No limit is imposed beyond the maximum number of drives per array limits.
Maximum size is dependent on the extent size of the Storage Pool.
Comparison Table: Maximum Volume, MDisk and System capacity for each extent size.
Capacity for an individual external managed disk
1 PB
Note: External managed disks larger than 2 TB are only supported for certain types of storage systems. Refer to the supported hardware matrix for further details.
Maximum size is dependent on the extent size of the Storage Pool.
Comparison Table: Maximum Volume, MDisk and System capacity for each extent size.
Total storage capacity manageable per system
32 PB
Maximum requires an extent size of 8192 MB to be used

This limit represents the per system maximum of 2^22 extents.

Comparison Table: Maximum Volume, MDisk and System capacity for each extent size.
Volume (Virtual Disk) Properties
Basic Volumes (VDisks) per system
4096
Each Basic Volume uses 1 VDisk, each with one copy.

Maximum requires a system containing two control enclosures; refer to the volumes per I/O group limit below
HyperSwap volumes per system
1024
Each HyperSwap volume uses 4 VDisks, each with one copy, 1 active-active remote copy relationship and 4 FlashCopy mappings.
Volumes per I/O group (volumes per caching I/O group)
2048
Volumes accessible per I/O group
4096
Thin-provisioned (space-efficient) volume copies per system
8192
No limit is imposed here beyond the volume copies per system limit.
Compressed volume copies per V5030 system
400
Maximum requires a system containing two control enclosures; refer to the volumes per I/O group limit below
Compressed volume copies per V5030 I/O group
200
Volumes per storage pool
-
No limit is imposed beyond the volumes per system limit
Fully-allocated volume capacity
256 TB
Maximum size for an individual fully-allocated volume.

Maximum size is dependent on the extent size of the Storage Pool.
Comparison Table: Maximum Volume, MDisk and System capacity for each extent size.
Thin-provisioned (space-efficient) volume capacity
256 TB
Maximum size for an individual thin-provisioned volume.

Maximum size is dependent on the extent size of the Storage Pool.
Comparison Table: Maximum Volume, MDisk and System capacity for each extent size.
Host mappings per system
20,000
See also - volume mappings per host object below
Mirrored Volume (Virtual Disk) Properties
Copies per volume
2
Volume copies per system
8192
Total mirrored volume capacity per I/O group
1024 TB
Generic Host Properties
Host objects (IDs) per system
512
A host object may contain both Fibre Channel ports and iSCSI names
Host objects (IDs) per I/O group
256
Refer to the additional Fibre Channel and iSCSI host limits below
Volume mappings per host object
512
Total Fibre Channel ports and iSCSI names per system
4096
Total Fibre Channel ports and iSCSI names per I/O group
2048
Total Fibre Channel ports and iSCSI names per host object
32
iSCSI names per host object (ID)
8
Fibre Channel Host Properties (including hosts attached using FCoE)
Fibre Channel hosts per system
512
Fibre Channel host ports per system
4096
Fibre Channel hosts per I/O group
256
Fibre Channel host ports per I/O group
2048
Fibre Channel host ports per host object (ID)
32
iSCSI Host Properties
iSCSI hosts per system
1024
iSCSI hosts per I/O group
512
iSCSI names per host object (ID)
8
iSCSI names per I/O group
512
iSCSI (SCSI 3) registrations per VDisk
512
Copy Services Properties
Remote Copy (Metro Mirror and Global Mirror) relationships per system
4096
This can be any mix of Metro Mirror and Global Mirror relationships.
Active-Active Relationships
1024
This is the limit for the number of HyperSwap volumes in a system
Remote Copy relationships per consistency group
-
No limit is imposed beyond the Remote Copy relationships per system limit
Remote Copy consistency groups per system
256
Total Metro Mirror and Global Mirror volume capacity per I/O group
1024 TB
This limit is the total capacity for all master and auxiliary volumes in the I/O group.
Total number of Global Mirror with Change Volumes relationships per system
256
Change volumes used for active-active relationships do not count towards this limit.
FlashCopy mappings per system
4096
FlashCopy targets per source
256
FlashCopy mappings per consistency group
512
FlashCopy consistency groups per system
255
Total FlashCopy volume capacity per I/O group
4096 TB
IP Partnership Properties
Inter-cluster IP partnerships per system
1
A system may be partnered with up to three remote systems. A maximum of one of those can be IP and the other two FC.
I/O groups per system
2
The nodes from a maximum of two I/O groups per system can be used for IP partnership.
Inter site links per IP partnership
2
A maximum of two inter site links can be used between two IP partnership sites.
Ports per node
1
A maximum of one port per node can be used for IP partnership.
Internal Storage Properties
SAS chains per control enclosure
V5000 Gen 1: 2; V5010/V5020: 1; V5030: 2
Enclosures per SAS chain
V5000 Gen 1: 9/10 (up to 10 on SAS port 1 and up to 9 on SAS port 2); V5010/V5020: 10; V5030: 10
Expansion enclosures per control enclosure
V5000 Gen 1: 19; V5010/V5020: 10; V5030: 20
Drives per I/O group
V5000 Gen 1: 480; V5010/V5020: 264; V5030: 504
Drives per system
V5000 Gen 1: 960; V5010/V5020: 264; V5030: 1008
Maximum requires a system containing two control enclosures, each with the maximum number of expansion enclosures
Min-Max drives per enclosure
0-12 or 0-24
Limit depends on the enclosure model
Non-Distributed RAID Array Properties
Arrays per system
128
Drives per array
16
Min-Max member drives per RAID-0 array
1-8
Min-Max member drives per RAID-1 array
2-2
Min-Max member drives per RAID-5 array
3-16
Min-Max member drives per RAID-6 array
5-16
Min-Max member drives per RAID-10 array
2-16
Hot spare drives
-
No limit is imposed
Distributed RAID Array Properties
Arrays per system
20
The presence of non-DRAID arrays will reduce this limit
Arrays per I/O group
10
The presence of non-DRAID arrays will reduce this limit
Drives per array
128
Min-Max member drives per RAID-5 array
4-128
Min-Max member drives per RAID-6 array
6-128
Rebuild areas per array
1-4
Min-Max stripe width for RAID-5 array
3-16
Min-Max stripe width for RAID-6 array
5-16
Max drive capacity for RAID-5 array
6 TB
External Storage System Properties
Storage system WWNNs per system (cluster)
1024
Storage system WWPNs per system (cluster)
1024
WWNNs per storage system
16
WWPNs per WWNN
16
LUNs (managed disks) per storage system
-
No limit is imposed beyond the managed disks per system limit
System and User Management Properties
User accounts per system
400
Includes the default user accounts
User groups per system
256
Includes the default user groups
Authentication servers per system
1
NTP servers per system
1
iSNS servers per system
1
Concurrent open SSH sessions per system
32
Event Notification Properties
SNMP servers per system
6
Syslog servers per system
6
Email (SMTP) servers per system
6
Email servers are used in turn until the email is successfully sent
Email users (recipients) per system
12
LDAP servers per system
6

Extents 

The following table compares the maximum volume, MDisk and system capacity for each extent size.

Extent size (MB) | Maximum non thin-provisioned volume capacity in GB | Maximum thin-provisioned volume capacity in GB | Maximum compressed volume size** | Maximum MDisk capacity in GB | Maximum DRAID MDisk capacity in TB | Total storage capacity manageable per system*
16 | 2048 (2 TB) | 2000 | - | 2048 (2 TB) | 32 | 64 TB
32 | 4096 (4 TB) | 4000 | - | 4096 (4 TB) | 64 | 128 TB
64 | 8192 (8 TB) | 8000 | - | 8192 (8 TB) | 128 | 256 TB
128 | 16,384 (16 TB) | 16,000 | - | 16,384 (16 TB) | 256 | 512 TB
256 | 32,768 (32 TB) | 32,000 | - | 32,768 (32 TB) | 512 | 1 PB
512 | 65,536 (64 TB) | 65,000 | - | 65,536 (64 TB) | 1024 (1 PB) | 2 PB
1024 | 131,072 (128 TB) | 130,000 | 96 TB** | 131,072 (128 TB) | 2048 (2 PB) | 4 PB
2048 | 262,144 (256 TB) | 260,000 | 96 TB** | 262,144 (256 TB) | 4096 (4 PB) | 8 PB
4096 | 262,144 (256 TB) | 262,144 | 96 TB** | 524,288 (512 TB) | 8192 (8 PB) | 16 PB
8192 | 262,144 (256 TB) | 262,144 | 96 TB** | 1,048,576 (1024 TB) | 16384 (16 PB) | 32 PB


* The total capacity values assume that all of the storage pools in the system use the same extent size.
** Please see the following Flash
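As a worked check of the per-system total: the system manages at most 2^22 (4,194,304) extents, so with the largest extent size of 8192 MB the manageable capacity is 4,194,304 x 8192 MB = 32 PB, and with a 16 MB extent size it is 4,194,304 x 16 MB = 64 TB, matching the last and first rows of the table.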

[{"Product":{"code":"STHGUJ","label":"IBM Storwize V5000"},"Business Unit":{"code":"BU058","label":"IBM Infrastructure w\/TPS"},"Component":"7.6","Platform":[{"code":"PF025","label":"Platform Independent"}],"Version":"7.6","Edition":"","Line of Business":{"code":"LOB26","label":"Storage"}}]

Document Information

Modified date:
26 March 2023

UID

ssg1S1005422