Preventive Service Planning
Abstract
This document lists the configuration limits and restrictions specific to IBM Storwize V7000 software version 8.3.0.x
Content
V8.3.0 does not support iSER Clustering on V7000 Generation 2 or Generation 2+ systems.
V8.3.0 does not support the Persistent Node IP feature of Node Rescue scenarios on V7000 Generation 2 or Generation 2+ systems.
V8.2.x or later does not support V7000 Generation 1. Customers with V7000 Generation 1 I/O groups cannot upgrade to v8.2.x or later.
The use of WAN optimisation devices such as Riverbed is not supported in native Ethernet IP partnership configurations containing Storwize V7000.
Data Reduction Pools
- Child pools are not supported in a DRP;
- VVOL is not supported in a DRP (because child pools are not supported);
- A volume in a DRP cannot be shrunk;
- A volume in a DRP cannot be moved between I/O groups (use FlashCopy or Metro Mirror/Global Mirror instead);
- A volume mirror cannot be split to a copy in a different I/O group;
- Real/used/free/tier capacity is not reported per volume - only per pool.
Note: These restrictions apply to all versions of IBM Spectrum Virtualize v8.1.2 and later.
REST API
Customers that use the REST API to list more than 2000 objects can experience a loss of service from the API as it restarts due to memory constraints.
It is not possible to access the REST API by using a cluster's IPv6 address.
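A minimal sketch of scoping a listing request so that fewer than 2000 objects are returned in a single response is shown below. It assumes the documented authentication flow on port 7443 and the lsvdisk command endpoint; the credentials, addresses and filter value are placeholders only:
# Authenticate to obtain a token (placeholder credentials)
curl -k -X POST https://<system_ip>:7443/rest/auth -H 'X-Auth-Username: superuser' -H 'X-Auth-Password: <password>'
# Use the returned token with a filter so that only a subset of volumes is listed
curl -k -X POST https://<system_ip>:7443/rest/lsvdisk -H 'X-Auth-Token: <token>' -H 'Content-Type: application/json' -d '{"filtervalue":"mdisk_grp_name=Pool0"}'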
NVMe over Fibre Channel
Hosts that use the NVMe protocol cannot be mapped to HyperSwap or stretched volumes.
Volumes accessed by hosts that use the NVMe protocol cannot be configured with multiple access I/O groups due to a limitation of the NVMe protocol.
RAID and Distributed RAID
Storwize V7000 Generation 3 systems support DRAID5, DRAID6, TRAID0, TRAID1 and TRAID10. For compressed drives only DRAID5 and DRAID6 are supported.
DRAID Strip Size
For candidate drives with a capacity greater than 4 TB, a strip size of 128 cannot be specified for either RAID-5 or RAID-6 DRAID arrays. A strip size of 256 should be used for these drives.
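As an illustration only, the strip size is specified when the distributed array is created. The parameter names below are assumptions and should be checked against the mkdistributedarray command reference for this release; the drive class, drive count and pool name are placeholders:
svctask mkdistributedarray -level raid6 -driveclass 0 -drivecount 10 -strip 256 Pool0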
Transparent Cloud Tiering
Transparent cloud tiering on the system is subject to configuration limits and rules. See the following link for details:
https://www.ibm.com/docs/en/flashsystem-7x00/8.3.0?topic=ST3FR7_8.3.0/com.ibm.storwize.v7000.830.doc/svc_tctmaxlimitsconfig.html
The following restrictions apply for Transparent Cloud Tiering:
- OBAC is not supported under TCT in v8.3. You cannot set up ownership groups and then use TCT commands as OBAC users. If you want to use TCT, you need to use a non-OBAC user to execute the TCT commands either via the GUI or CLI;
- When a cloud account is created, it must continue to use the same encryption type throughout the life of the data in that cloud account. Even if the cloud account object is removed and remade on the system, the encryption type for that cloud account may not be changed while backup data for that system exists in the cloud provider;
- When performing re-key operations on a system that has an encryption-enabled cloud account, perform the commit operation immediately after the prepare operation (see the example after this list). Remember to retain the previous system master key (on USB or in the key server) as this key may still be needed to retrieve your cloud backup data when performing a T4 recovery or an import;
- Import of data is not supported from systems in which the cloud account was created on a code level prior to v7.8.1.0;
- Customers using TCT at 7.8.0.x that wish to perform the unusual command sequence of rmcloudaccount/mkcloudaccount using the same clusterid and container prefix should wait until they have upgraded to 8.1.0.0. Customers should perform the actions described in the next bullet at 8.1.0.0 prior to performing any rmcloudaccount/mkcloudaccount sequence;
- If you have configured TCT on your system and have created backup data in the cloud provider associated with your cloud account and you are upgrading from v7.8.0.x to v8.1.0.x, then you should perform the following operations after an upgrade has completed:
svctask chsystem -name <temporary_name>
svctask chsystem -name <original_name>
This will synchronise the content of the cloud provider and the system cloud account;
- The restore_uid option should not be used when a backup is imported to a new cluster;
- Import of TCT data is only supported from systems whose backup data was created at v7.8.0.1;
- Transparent cloud tiering uses Sig V2 when connecting to Amazon regions and does not currently support regions that require Sig V4.
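As a hedged illustration of the re-key guidance above, the prepare and commit steps for a USB re-key are issued back to back. The prepare form of this command also appears in the scenarios below; the commit form is assumed to follow the same syntax, and the key server variant uses -keyserver in place of -usb:
chencryption -usb newkey -key prepare
chencryption -usb newkey -key commit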
Encryption and TCT
There is an extremely small possibility that, on a system using both Encryption and Transparent Cloud Tiering, the system can enter a state where an encryption re-key operation is stuck in 'prepared' or 'prepare_failed' state, and a cloud account is stuck in 'offline' state.
The user will be unable to cancel or commit the encryption rekey, because the cloud account is offline. The user will be unable to remove the cloud account because an encryption rekey is in progress.
The system can only be recovered from this state using a T4 Recovery procedure.
It is also possible that SAS-attached storage arrays go offline.
There are two possible scenarios where this can happen:
Scenario A
- Using USB encryption and Cloud;
- A new USB key is prepared using chencryption -usb newkey -key prepare;
- The new presumptive key is deleted from all USB sticks before the new key is committed;
- All nodes in the system are rebooted;
- The cloud account will now be offline as it can't get the presumptive key. The cloud account cannot be removed, and the encryption rekey cannot be completed or cancelled. The system will remain stuck in these cloud and encryption states;
- Any SAS-attached arrays will be offline and locked;
- The system can be restored by T4 to a previous config backup.
Scenario B
- Using key server encryption and Cloud;
- A new key server key is prepared using chencryption -keyserver newkey -key prepare;
- The new presumptive key is deleted from the key server before the new key is committed;
- All nodes in the system are rebooted;
- The cloud account will now be offline as it can't get the presumptive key. The cloud account cannot be removed, and the encryption rekey cannot be completed or cancelled. The system will remain stuck in these cloud and encryption states;
- SAS-attached arrays are not affected;
- The system can be restored by T4 to a previous config backup.
NPIV (N_Port ID Virtualization)
SAN Volume Controller and Storwize Version 7.7 introduced support for NPIV (N_Port ID Virtualization) for Fibre Channel fabric attachment. FCoE is not supported with NPIV. The following recommendations and restrictions should be followed when implementing the NPIV feature.
Operating systems not currently supported for use with NPIV:
- RHEL6 and earlier on IBM Power
- HPUX 11iV2
- Veritas DMP multipathing on Windows with RAID-5 volumes in VxVM
General requirements
Required SDD versions for IBM AIX and Microsoft Windows Environments:
- IBM AIX Operating Systems require a minimum SDDPCM version of 2.6.8.0 or AIXPCM;
- Microsoft Windows requires a minimum SDDDSM version 2.4.7.0. The latest recommended level which resolves issues listed below is 2.4.7.1.
Path Optimization
User intervention may be required when changing NPIV states from "Transitional" to "Disabled". All paths to a LUN with SDDDSM or SDDPCM may remain "Non-Optimized" when NPIV is changed to "Disabled" from the "Transitional" state.
To resolve this issue, please use the following instructions:
IBM AIX
For SDDPCM:
Run "pcmpath chgprefercntl device <device number>/<device number range>" on AIX. This will restore both Optimized and Non-Optimized paths for all the LUNs correctly.
Windows 2008 and 2012
For SDDDSM:
Run "datapath rescanhw" on Windows. This will restore both Optimized and Non-Optimized paths for all the LUNs correctly.
This issue is resolved with SDDDSM version 2.4.7.1.
Windows 2008 and 2012 Non-Preferred Paths with SDDDSM
When NPIV enters the Transitional state from the Disabled state with all the SDDDSM paths in a Non-Preferred state, the paths to the virtual ports also become Non-Preferred. This path configuration might cause I/O failures as soon as NPIV moves into the "Enabled" state.
As a workaround, the user should configure at least one preferred path to each LUN while in the NPIV "Disabled" state.
This issue is resolved with SDDDSM version 2.4.7.1.
Solaris
Emulex HBA Settings:
- When implementing NPIV on Solaris 11, the default disk I/O timeout needs to be changed to 120 seconds by adding "set sd:sd_io_time=120" to the /etc/system file; a system reboot is required for the change to take effect (see the example after this list);
- When ports on the host HBA are connected to a 16Gb SAN, NPIV is not supported.
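A minimal sketch of the Solaris change described above, run as root. The entry is appended to /etc/system and only takes effect after the reboot:
echo "set sd:sd_io_time=120" >> /etc/system
reboot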
Other Operating Systems
Other operating systems may also experience the same issue when changing the NPIV state from "Transitional" to "Disabled", in which case the operating system specific rescan command should be used.
NPIV mode on SVC or Storwize is only supported when used with Brocade or Cisco Fibre Channel SAN switches that are NPIV capable.
Nodes in an IO group cannot be replaced by nodes with less memory when compressed volumes are present
If a customer must migrate from 64GB to 32GB memory node canisters in an I/O group, they will have to remove all compressed volume copies in that I/O group. This restriction applies to 7.7.0.0 and newer software.
A customer must not perform the following sequence:
- Create an I/O group with node canisters with 64GB of memory.
- Create compressed volumes in that I/O group.
- Delete both node canisters from the system with the CLI or GUI.
- Install new node canisters with 32GB of memory and add them to the configuration in the original I/O group with the CLI or GUI.
HyperSwap
When using the HyperSwap function with software version 7.8.0.0 and higher, please configure your host multipath driver to use an ALUA-based path policy (an illustrative multipath configuration is shown below).
Due to the requirement for multiple access I/O groups, SAS-attached host types are not supported with HyperSwap volumes.
A volume configured with multiple access I/O groups on a system in the storage layer cannot be virtualized by a system in the replication layer. This restriction prevents a HyperSwap volume on one system being virtualized by another.
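As an example, on a Linux host using the native device-mapper multipath driver, an ALUA-based policy for Storwize volumes (which report product ID 2145) might be configured along the following lines. This is a hedged sketch only; take the multipath settings recommended for your specific host OS and level from the IBM multipathing documentation:
devices {
    device {
        vendor "IBM"
        product "2145"
        path_grouping_policy "group_by_prio"
        prio "alua"
        hardware_handler "1 alua"
        failback "immediate"
    }
}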
AIX Live Partition Mobility (LPM)
AIX LPM is supported with the HyperSwap function and AIX 7.x
Clustered Systems
A Storwize V7000 system requires native Fibre Channel SAN or, alternatively, 8Gbps/16Gbps direct-attach Fibre Channel connectivity for communication between all nodes in the local cluster. With the exception of Storwize V7000 Generation 3 systems, Fibre Channel over Ethernet (FCoE) connectivity for communication between all nodes in the local cluster is also supported. Clustering can also be accomplished with 25Gbps Ethernet for standard topologies. For HyperSwap topologies a SCORE request is required; contact your IBM representative to raise a SCORE request.
Partnerships between systems for Metro Mirror or Global Mirror replication can use Fibre Channel, native Ethernet, or FCoE connectivity; however, direct FCoE links are only supported up to a maximum of 300 metres. Distances greater than 300 metres are only supported when using an FCIP link or Fibre Channel between source and target.
Direct Attachment
SAN boot on Windows 2019 (Qlogic HBA) is not supported with 32Gb direct-attached systems.
Cisco Nexus
The minimum level of Cisco Nexus firmware supported for FCoE with the IBM Storwize V7000 Gen2 / Gen2+ is 5.2(1)N1(2a).
16Gbps Fibre Channel Canister Connection
Please visit the IBM System Storage Inter-operation Center (SSIC) for Fibre Channel configurations supported with 16Gbps node hardware.
Note: 16Gbps node hardware is supported when connected to Brocade and Cisco 8Gbps or 16Gbps fabrics only.
Direct connections to 2Gbps or 4Gbps SAN or direct host attachment to 2Gbps or 4Gbps ports is not supported.
Other configured switches which are not directly connected to the 16Gbps Node hardware can be any supported fabric switch as currently listed in SSIC.
25Gbps Ethernet Canister Connection
Two optional 2-port 25Gbps Ethernet adapters are supported in each node canister for iSCSI communication with iSCSI-capable Ethernet ports in hosts via Ethernet switches. These 2-port 25Gbps Ethernet adapters do not support FCoE.
There are two types of 25Gbps Ethernet adapter feature supported:
- RDMA over Converged Ethernet (RoCE)
- Internet Wide-area RDMA Protocol (iWARP)
Either adapter type will work for standard iSCSI communications, that is, without using Remote Direct Memory Access (RDMA). A future software release will add RDMA links using new protocols that support RDMA, such as NVMe over Ethernet.
When use of RDMA with a 25Gbps Ethernet adapter becomes possible, RDMA links will only work between RoCE ports or between iWARP ports (that is, from a RoCE node canister port to a RoCE port on a host, or from an iWARP node canister port to an iWARP port on a host).
The 25Gbps adapters come with SFP28 fitted, which can be used to connect to switches using OM3 optical cables.
For Ethernet switches and adapters supported in hosts visit the SSIC.
This is an example of a RoCE adapter for use in a host.
http://www.mellanox.com/related-docs/user_manuals/ConnectX-4_Lx_Single_and_Dual_10_25_Gbs_Ethernet_SFP28_Port_Adapter_Card_User_Manual.pdf
This is an example of an iWARP adapter for use in a host.
https://www.chelsio.com/nic/unified-wire-adapters/t6225-cr/
Customers who want to connect a 10Gb switch to a 25Gb HBA should be aware that this is only supported via a SCORE request. Contact your IBM representative to raise a SCORE request.
IP Partnership
IP partnerships are supported on any of the available Ethernet ports. Using an Ethernet switch to convert a 25Gb to a 1Gb IP partnership, or a 10Gb to a 1Gb IP partnership, is not supported. Therefore the IP infrastructure on both partnership sites must match. Bandwidth limiting on IP partnerships between both sites is supported.
Fabric Limitations
Only one FCF (Fibre Channel Forwarder) switch per fabric is supported.
Storage connected directly to a Cisco Fabric Extender (FEX) is not supported.
VMware vSphere Virtual Volumes (VVols)
The maximum number of Virtual Machines on a single VMware ESXi host in a Storwize / VVol storage configuration is limited to 680.
The use of VMware vSphere Virtual Volumes (VVols) on a system that is configured for HyperSwap is not currently supported with SVC/Storwize.
Host Limitations
Windows 2016 Hyper-V
RHEL v7.1 guests on Windows 2016 Hyper-V, with Virtual Fibre Channel, are not supported.
iSER
Operating systems not currently supported for use with iSER:
- VMware ESXi 6.7 using Mellanox ConnectX-4 Lx EN
- Windows 2012 R2 using Mellanox ConnectX-4 Lx EN
- Windows 2016 using Mellanox ConnectX-4 Lx EN
FCoE
Operating systems not currently supported for use with FCoE:
- Red Hat 6.x
- VMware 6.0
- Windows 2012 Hyper-V Cluster
Microsoft Offload Data Transfer (ODX) and SDDDSM Requirements
Storwize V7000 version 7.5.0 introduced support for Microsoft ODX. In order to utilise this function, all Windows hosts accessing Storwize V7000 are required to be at a minimum SDDDSM version of 2.4.5.0. Earlier versions of SDDDSM are not supported when the ODX function is activated.
Windows NTP server
The Linux NTP client used by Storwize V7000 may not always function correctly with the Windows W32Time NTP server.
Oracle
Oracle Version and OS | Restrictions that apply
Oracle Release 11.2 any platform | 1
Oracle Release 12.1 any platform | 1
Restriction 1:
Oracle ASM disk groups may dismount with the following error:
"Waited 15 secs for write IO to PST"
Recommendation:
Increase asm_hbeatiowait to 120 seconds to prevent this issue from occurring.
This applies to Oracle Database - Enterprise Edition - Version 11.2.0.3 to 12.1.0.1 [Release 11.2 to 12.1] on any platform.
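A hedged sketch of how this timeout is typically raised on the ASM instance is shown below. The hidden parameter name is an assumption based on Oracle support documentation and should be verified for your Oracle release before use; the ASM instance must be restarted for the new value to take effect:
sqlplus / as sysasm
SQL> alter system set "_asm_hbeatiowait"=120 scope=spfile sid='*';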
Priority Flow Control for iSCSI/iSER
Priority Flow Control for iSCSI/iSER is supported on Emulex and Chelsio adapters (SVC supported) with all DCBX-enabled switches.
Maximum Configurations
Configuration limits for Storwize V7000:
Property | Hardware Type | Maximum Number | Comments
System (Cluster) Properties
Control enclosures per system (cluster) | | 4 | Each control enclosure contains two node canisters
Nodes per system | | 8 | Arranged as four I/O groups
Nodes per fabric | | 64 | Maximum number of SVC or Storwize family system nodes that can be present on the same Fibre Channel fabric, with visibility of each other
Fabrics per system | | 8 | The number of counterpart SANs which are supported
Inter-cluster partnerships per system | | 3 | A system may be partnered with up to three remote systems. No more than four systems may be in the same connected set
IP Quorum devices per system | | 5 |
Data encryption keys per system | | 1024 |
Node Properties
Logins per node Fibre Channel WWPN | | 512 | Includes logins from server HBAs, disk controller ports, node ports within the same system and node ports from remote systems
Fibre Channel buffer credits per port | 8Gbps FC adapter | 255 | The number of credits granted by the switch to the node
Fibre Channel buffer credits per port | 16Gbps FC adapter | 4095 | The number of credits granted by the switch to the node
iSCSI sessions per node | | 1024 | 2048 in IP failover mode (when partner node is unavailable). This limit includes both iSCSI Host Attach AND iSCSI Initiator sessions
iSER sessions per node | | 256 |
Managed Disk Properties
Managed disks (MDisks) per system | | 4096 | The maximum number of logical units which can be managed by a system, including internal arrays. Internal distributed arrays consume 16 logical units. This number also includes external MDisks which have not been configured into storage pools (managed disk groups)
Managed disks per storage pool (managed disk group) | | 128 |
Storage pools per system | | 1024 |
Parent pools per system | | 128 |
Child pools per system | | 1023 | Not supported in a Data Reduction Pool
Managed disk extent size | | 8192 MB |
Capacity for an individual internal managed disk (array) | | - | No limit is imposed beyond the maximum number of drives per array limits. Maximum size is dependent on the extent size of the Storage Pool. Comparison Table: Maximum Volume, MDisk and System capacity for each extent size.
Capacity for an individual external managed disk | | 1 PB | Note: External managed disks larger than 2 TB are only supported for certain types of storage systems. Refer to the supported hardware matrix for further details. Maximum size is dependent on the extent size of the Storage Pool. Comparison Table: Maximum Volume, MDisk and System capacity for each extent size.
Total storage capacity manageable per system | | 32 PB | Maximum requires an extent size of 8192 MB to be used. This limit represents the per-system maximum of 2^22 extents. Comparison Table: Maximum Volume, MDisk and System capacity for each extent size.
Data Reduction Pool Properties
Data Reduction Pools per system | | 4 |
MDisks per Data Reduction Pool | | 128 |
Volumes per Data Reduction Pool | | 10000 - (Number of Data Reduction Pools x 12) |
Extents per I/O group per Data Reduction Pool | | 128000 |
Volume (Virtual Disk) Properties
Basic Volumes (VDisks) per system | | 10000 | Each Basic Volume uses 1 VDisk, each with one copy.
HyperSwap volumes per system | | 1250 | Each HyperSwap volume uses 4 VDisks, each with one copy, 1 active-active remote copy relationship and 4 FlashCopy mappings.
Volumes per I/O group (volumes per caching I/O group) | | 10000 |
Thin-provisioned (space-efficient) volume copies in regular pools per system | | - | No limit is imposed here beyond the volume copies per system limit.
Compressed volume copies in regular pools per system | | 2048 | Maximum requires a system containing four control enclosures; refer to the Compressed volume copies in regular pools per I/O group limit below
Compressed volume copies in regular pools per I/O group | | 512 | With 32GB Cache upgrade and 2nd Compression Accelerator card installed.
Compressed volume copies in data reduction pools per system | | - | No limit is imposed here beyond the volume copy limit per data reduction pool
Compressed volume copies in data reduction pools per I/O group | | - | No limit is imposed here beyond the volume copy limit per data reduction pool
Deduplicated volume copies in data reduction pools per system | | - | No limit is imposed here beyond the volume copy limit per data reduction pool
Deduplicated volume copies in data reduction pools per I/O group | | - | No limit is imposed here beyond the volume copy limit per data reduction pool
Volumes per storage pool | | - | No limit is imposed beyond the volumes per system limit
Fully-allocated volume capacity | | 256 TB | Maximum size for an individual fully-allocated volume. Maximum size is dependent on the extent size of the Storage Pool. Comparison Table: Maximum Volume, MDisk and System capacity for each extent size.
Thin-provisioned (space-efficient) per-volume capacity for volume copies in regular and data reduction pools | | 256 TB | Maximum size for an individual thin-provisioned volume. Maximum size is dependent on the extent size of the Storage Pool. Comparison Table: Maximum Volume, MDisk and System capacity for each extent size.
Compressed volume capacity in regular pools | Pools containing non-Flash storage | 16 TB | Maximum size for an individual compressed volume. See this Flash for further information on this limit. Maximum size is dependent on the extent size of the Storage Pool. Comparison Table: Maximum Volume, MDisk and System capacity for each extent size.
Compressed volume capacity in regular pools | Pools containing all-Flash storage | 32 TB |
HyperSwap volume capacity in a single I/O group using RAID | | 850 TiB | This is due to the limit on bitmap space for mirroring and replication in each I/O group. See the Knowledge Center for details.
Host mappings per system | | 64000 | See also - volume mappings per host object below
Mirrored Volume (Virtual Disk) Properties
Copies per volume | | 2 |
Volume copies per system | | 10000 |
Total mirrored volume capacity per I/O group | | 1024 TB |
Generic Host Properties
Host objects (IDs) per system | | 2048 | A host object may contain both Fibre Channel ports and iSCSI names
Host objects (IDs) per I/O group | | 512 | Refer to the additional Fibre Channel and iSCSI host limits below
Volume mappings per host object | | 2048 | Although IBM Storwize V7000 allows the mapping of up to 2048 volumes per host object, not all hosts are capable of accessing/managing this number of volumes. The practical mapping limit is restricted by the host OS, not IBM Storwize V7000. Note: this limit does not apply to hosts of type adminlun (used to support VMware vvols).
Total Fibre Channel ports and iSCSI names per system | | 8192 |
Total Fibre Channel ports and iSCSI names per I/O group | | 2048 |
Total Fibre Channel ports and iSCSI names per host object | | 32 |
iSCSI names per host object (ID) | | 8 |
Host Cluster Properties
Host clusters per system | | 512 |
Hosts in a host cluster | | 128 |
Fibre Channel Host Properties (including hosts attached using FCoE)
Fibre Channel hosts per system | | 2048 |
Fibre Channel host ports per system | | 8192 |
Fibre Channel hosts per I/O group | | 512 |
Fibre Channel host ports per I/O group | | 2048 |
Fibre Channel host ports per host object (ID) | | 32 |
Simultaneous I/Os per node FC port | 8Gbps FC adapter | 2048 |
Simultaneous I/Os per node FC port | 16Gbps FC adapter | 4096 |
iSCSI Host Properties
iSCSI hosts per system | | 2048 |
iSCSI hosts per I/O group | | 512 |
iSCSI names per host object (ID) | | 8 |
iSCSI names per I/O group | | 512 |
iSCSI Hardware Properties
10Gbps Ethernet adapters per canister | V7000 Gen 2, V7000 Gen 2+ | 2 |
10Gbps Ethernet ports per canister | V7000 Gen 2, V7000 Gen 2+ | 8 | FCoE is supported on the first four 10GbE ports in the system
iSER Host Properties
iSER hosts per system | | 2048 |
iSER hosts per I/O group | | 512 |
iSER names per host object (ID) | | 8 |
iSER Hardware Properties
25Gbps iWARP adapters per canister | V7000 Gen 2, V7000 Gen 2+ | 2 |
25Gbps iWARP adapters per canister | V7000 Gen 3 | 3 |
25Gbps RoCE adapters per canister | V7000 Gen 2, V7000 Gen 2+ | 2 |
25Gbps RoCE adapters per canister | V7000 Gen 3 | 3 |
25Gbps iWARP ports per canister | V7000 Gen 2, V7000 Gen 2+ | 4 |
25Gbps iWARP ports per canister | V7000 Gen 3 | 6 |
25Gbps RoCE ports per canister | V7000 Gen 2, V7000 Gen 2+ | 4 |
25Gbps RoCE ports per canister | V7000 Gen 3 | 6 |
NVMe over Fibre Channel Host Properties
FC-NVMe hosts per system | V7000 Gen 3 | 32 | Up to 32 FC-NVMe hosts are supported per system. This limit is not policed by the Spectrum Virtualize software. Any configurations that exceed this limit may experience significant adverse performance impact.
FC-NVMe hosts per I/O group | V7000 Gen 3 | 16 | This limit is not policed by the Spectrum Virtualize software. Any configurations that exceed this limit may experience significant adverse performance impact.
Fibre Channel logins per FC-NVMe WWPN | V7000 Gen 3 | 16 | This is the number of FC2 logins supported.
NVMe Qualified Names (NQNs) per host object (ID) | V7000 Gen 3 | 2 |
Copy Services Properties
Remote Copy (Metro Mirror and Global Mirror) relationships per system | | 10000 | This can be any mix of Metro Mirror and Global Mirror relationships.
Active-Active Relationships (HyperSwap) per system | | 1250 |
Remote Copy relationships per consistency group (<=256 GMCV relationships configured) | | - | No limit is imposed beyond the Remote Copy relationships per system limit. Refer to the Changes to support for Global Mirror with Change Volumes page for information relating to GMCV performance considerations and best practice.
Remote Copy relationships per consistency group (>256 GMCV relationships configured) | | 200 |
Remote Copy consistency groups per system | | 256 |
Total Metro Mirror and Global Mirror volume capacity per I/O group | | 1024 TB | This limit is the total capacity for all master and auxiliary volumes in the I/O group.
Total number of Global Mirror with Change Volumes relationships per system | V7000 Gen 2 | 256 | 60s cycle time (Change volumes used for active-active relationships do not count toward this limit).
Total number of Global Mirror with Change Volumes relationships per system | V7000 Gen 2 | 1500 | 300s cycle time (Change volumes used for active-active relationships do not count toward this limit).
Total number of Global Mirror with Change Volumes relationships per system | V7000 Gen 2+, V7000 Gen 3 | 256 | 60s cycle time (Change volumes used for active-active relationships do not count toward this limit).
Total number of Global Mirror with Change Volumes relationships per system | V7000 Gen 2+, V7000 Gen 3 | 2500 | 300s cycle time (Change volumes used for active-active relationships do not count toward this limit).
FlashCopy mappings per system | | 5000 |
FlashCopy targets per source | | 256 |
FlashCopy mappings per consistency group | | 512 |
FlashCopy consistency groups per system | | 500 |
Total FlashCopy volume capacity per I/O group | | 4096 TB |
IP Partnership Properties
Inter-cluster IP partnerships per system | | 1 | A system may be partnered with up to three remote systems. A maximum of one of those can be IP and the other two FC.
I/O groups per system | | 2 | The nodes from a maximum of two I/O groups per system can be used for IP partnership.
Inter-site links per IP partnership | | 2 | A maximum of two inter-site links can be used between two IP partnership sites.
Ports per node | | 1 | A maximum of one port per node can be used for IP partnership.
Internal Storage Properties
SAS chains per control enclosure | | 2 |
Expansion enclosures per SAS chain | | 10 |
Expansion enclosures per control enclosure | | 20 |
Drives per I/O group | | 760 |
Drives per system | | 3040 |
Non-Distributed RAID Array Properties
Arrays per system | | 128 |
Encrypted arrays per system | | 128 |
Drives per array | | 16 |
Min-Max member drives per RAID-0 array | | 1-8 | Not supported by Generation 3
Min-Max member drives per RAID-1 array | | 2-2 | Not supported by Generation 3
Min-Max member drives per RAID-5 array | | 3-16 | Not supported by Generation 3
Min-Max member drives per RAID-6 array | | 5-16 | Not supported by Generation 3
Min-Max member drives per RAID-10 array | | 2-16 |
Hot spare drives | | - | No limit is imposed
Distributed RAID Array Properties
Arrays per system | | 32 | The presence of non-DRAID arrays will reduce this limit
Encrypted arrays per system | | 32 | The presence of non-DRAID arrays will reduce this limit
Arrays per I/O group | | 10 | The presence of non-DRAID arrays will reduce this limit
Drives per array | | 128 |
Min-Max member drives per RAID-5 array | | 4-128 |
Min-Max member drives per RAID-6 array | | 6-128 |
Rebuild areas per non-FCM array | | 1-4 |
Rebuild areas per FCM array | | 1 |
Min-Max stripe width for RAID-5 array | | 3-16 |
Min-Max stripe width for RAID-6 array | | 5-16 |
Max drive capacity for RAID-5 array | | 8 TB |
External Storage System Properties
Storage system WWNNs per system (cluster) | | 1024 |
Storage system WWPNs per system (cluster) | | 1024 |
WWNNs per storage system | | 16 |
WWPNs per WWNN | | 16 |
LUNs (managed disks) per storage system | | - | No limit is imposed beyond the managed disks per system limit
System and User Management Properties
User accounts per system | | 400 | Includes the default user accounts
User groups per system | | 256 | Includes the default user groups
Authentication servers per system | | 1 |
NTP servers per system | | 1 |
iSNS servers per system | | 1 |
Concurrent active SSH sessions per system | | 32 |
Event Notification Properties
SNMP servers per system | | 6 |
Syslog servers per system | | 6 |
Email (SMTP) servers per system | | 6 | Email servers are used in turn until the email is successfully sent
Email users (recipients) per system | | 12 |
LDAP servers per system | | 6 |
REST API Properties
Maximum active connections per cluster | | 4 | RESTful API
Maximum requests/sec to auth endpoint | | 3 | RESTful API
Maximum requests/sec to command endpoints | | 10 | RESTful API
Number of simultaneous CLIs in progress | | 1 | System
Extents
The following table compares the maximum volume, MDisk and system capacity for each extent size.
Extent size (MB) | Maximum non thin-provisioned volume capacity in GB | Maximum thin-provisioned volume capacity in GB (for regular pools) | Maximum compressed volume size (for regular pools) ** | Maximum thin-provisioned and compressed volume size in data reduction pools in GB | Maximum total thin-provisioned and compressed capacity for all volumes in a single data reduction pool per I/O group in GB | Maximum MDisk capacity in GB | Maximum DRAID MDisk capacity in TB | Total storage capacity manageable per system*
16 | 2048 (2 TB) | 2000 | 2 TB | 2048 (2 TB) | 2048 (2 TB) | 2048 (2 TB) | 32 | 64 TB
32 | 4096 (4 TB) | 4000 | 4 TB | 4096 (4 TB) | 4096 (4 TB) | 4096 (4 TB) | 64 | 128 TB
64 | 8192 (8 TB) | 8000 | 8 TB | 8192 (8 TB) | 8192 (8 TB) | 8192 (8 TB) | 128 | 256 TB
128 | 16,384 (16 TB) | 16,000 | 16 TB | 16,384 (16 TB) | 16,384 (16 TB) | 16,384 (16 TB) | 256 | 512 TB
256 | 32,768 (32 TB) | 32,000 | 32 TB | 32,768 (32 TB) | 32,768 (32 TB) | 32,768 (32 TB) | 512 | 1 PB
512 | 65,536 (64 TB) | 65,000 | 64 TB | 65,536 (64 TB) | 65,536 (64 TB) | 65,536 (64 TB) | 1024 (1 PB) | 2 PB
1024 | 131,072 (128 TB) | 130,000 | 96 TB ** | 131,072 (128 TB) | 131,072 (128 TB) | 131,072 (128 TB) | 2048 (2 PB) | 4 PB
2048 | 262,144 (256 TB) | 260,000 | 96 TB ** | 262,144 (256 TB) | 262,144 (256 TB) | 262,144 (256 TB) | 4096 (4 PB) | 8 PB
4096 | 262,144 (256 TB) | 262,144 | 96 TB ** | 262,144 (256 TB) | 524,288 (512 TB) | 524,288 (512 TB) | 8192 (8 PB) | 16 PB
8192 | 262,144 (256 TB) | 262,144 | 96 TB ** | 262,144 (256 TB) | 1,048,576 (1024 TB) | 1,048,576 (1024 TB) | 16,384 (16 PB) | 32 PB
* The total capacity values assume that all of the storage pools in the system use the same extent size.
** Please see the following Flash
Document Information
Modified date: 16 June 2023
UID: ibm10885889