IBM Support

V8.5.4.x Configuration Limits and Restrictions for IBM System Storage SAN Volume Controller

Preventive Service Planning


Abstract

This document lists the configuration limits and restrictions specific to SAN Volume Controller software version 8.5.4.x

Content

V8.5.4.x does not support SV1 or earlier node types. Only SV2 or later nodes can be upgraded to v8.5.4.x.

The use of WAN optimization devices such as Riverbed is not supported in native Ethernet IP partnership configurations containing SAN Volume Controller.


Safeguarded Copy

Requires the purchase of the additional FlashCopy license.

The following restrictions apply for Safeguarded Copy:

  1. Mirrored volumes cannot be safeguarded. Stretched cluster is not supported
  2. Mirroring of existing safeguarded source volumes is supported for migration purposes only
  3. HyperSwap volumes are supported. However, recovery requires that they be converted to regular volumes before use
  4. Pre-defined schedules are designed to avoid running out of FlashCopy maps in a single graph and to keep within the supported volume count. It is possible to create policies (using the CLI only) that can potentially breach those limits, so caution should be exercised (see the CLI sketch after this list)
  5. The GUI does not support creating user-defined policies, but it can display any policies that were created by using the CLI.
  6. The source volume cannot be in an ownership group
  7. The source volume cannot be used with Transparent Cloud Tiering (TCT).
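
A user-defined policy is created with the CLI, for example along the following lines (a sketch only, assuming the 'mksafeguardedpolicy' and 'chvolumegroup -safeguardedpolicy' commands at this code level; the policy name, schedule values, and volume group name are illustrative and should be verified against the CLI reference):

  mksafeguardedpolicy -backupunit hour -backupinterval 6 -backupstarttime 2306010000 -retentiondays 7 sgpolicy1
  chvolumegroup -safeguardedpolicy sgpolicy1 VolumeGroup0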

Volume Mobility

The following restrictions apply for Volume Mobility (nondisruptive volume move between systems):

  1. No 3-site support
  2. Not intended to be a DR or HA solution
  3. No support for consistency groups, change volumes, or expanding volumes
  4. Reduced host interoperability support. Only the following host operating systems are supported:
    • RHEL
    • SLES
    • ESXi
    • Solaris
    • HP-UX
    • AIX
  5. SCSI only: Fibre Channel and iSCSI are supported; NVMe is not supported
  6. No SCSI persistent reservations or Offloaded Data Transfer (ODX).

Data Reduction Pools

The following restrictions apply for Data Reduction Pools (DRP):

  1. VMware vSphere Virtual Volumes (vVols) are not supported in a DRP
  2. A volume in a DRP cannot be shrunk
  3. No volume can move between I/O groups when the volume is in a DRP (use FlashCopy or Metro Mirror / Global Mirror instead).
  4. No splitting of a volume mirror to a copy in a different I/O group
  5. Real/used/free/tier capacities are not reported per volume, only per pool.

DRAID Strip Size

For candidate drives with a capacity greater than 4 TB, a strip size of 128 cannot be specified for RAID-5 or RAID-6 DRAID arrays; use a strip size of 256 instead.
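
For example, a distributed RAID-6 array with a 256 strip size might be created as follows (a sketch only; the '-strip' parameter name and the drive class, drive count, and pool values are assumptions to be verified against the CLI reference for this code level):

  mkdistributedarray -level raid6 -driveclass 0 -drivecount 12 -strip 256 Pool0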


Non-Disruptive Volume Move (NDVM)

The following Fibre Channel attached host types are supported for nondisruptively moving a volume between I/O groups:

Host Operating System | Host Multipathing | Host Clustering | Notes
AIX 7.2 | AIXPCM | | Nondisruptive volume move can result in the same volume being mapped to different hosts in the same host cluster by using different SCSI IDs. If the host cluster cannot tolerate this configuration, then nondisruptive volume move cannot be used. SAN boot is supported. NPIV is supported.
Microsoft Windows 2019 | MSDSM | Hyper-V Failover Cluster | SAN boot is supported
Microsoft Windows 2016 | MSDSM | Hyper-V Failover Cluster | SAN boot is supported
Red Hat 8 | Native | | The original paths might need to be manually removed on the host before removing access to the old I/O group
SLES 15 | Native | | The original paths might need to be manually removed on the host before removing access to the old I/O group
VMware 6.7 | Native | | VAAI is supported
VMware 6.5 | Native | | VAAI is supported
Solaris 11.3 SPARC | MPXIO | | SAN boot is supported

Note: For all other host types, I/O needs to be quiesced before moving a volume.

If moving a volume that is mapped to a host cluster, it is required to rescan disk paths on all host cluster nodes to ensure the new paths are detected before removing access from the original I/O group.
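
As an illustration, a nondisruptive move between I/O groups typically follows a sequence like the one below (a sketch only; the 'addvdiskaccess', 'movevdisk', and 'rmvdiskaccess' commands and the volume and I/O group names shown should be verified against the CLI reference for this code level):

  addvdiskaccess -iogrp io_grp1 vdisk0     (grant access through the new I/O group)
  movevdisk -iogrp io_grp1 vdisk0          (move the caching I/O group)
  (rescan disk paths on the host, or on all host cluster nodes)
  rmvdiskaccess -iogrp io_grp0 vdisk0      (remove access from the original I/O group)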


Clustered Systems

A SAN Volume Controller system requires native Fibre Channel SAN or, alternatively, 8 Gbps, 16 Gbps, or 32 Gbps direct-attach Fibre Channel connectivity for communication between all nodes in the local cluster. Clustering can also be accomplished with 25 Gbps Ethernet for standard topologies.

Partnerships between systems for Metro Mirror or Global Mirror replication can be used with both Fibre Channel and native Ethernet connectivity. Distances greater than 300 meters are supported by using an FCIP link or Fibre Channel between source and target.

Clustering over Fibre Channel: Yes, up to 4 I/O groups
Clustering over 25 Gb Ethernet: Yes, up to 4 I/O groups
HyperSwap over Fibre Channel: Yes, up to 4 I/O groups
HyperSwap over Ethernet (25 Gb only): Yes, up to 4 I/O groups
Metro / Global Mirror replication over Fibre Channel: Yes
Metro / Global Mirror replication over Ethernet (10 Gb or 25 Gb): Yes (including 1 Gb on older hardware)


Hot Spare Node

In a situation where an adapter's PCI slot location on the spare node does not match that of the active node, the active node cannot be replaced by the spare node by using the 'swapnode' command. In this case the command fails with error CMMVC9261E, which means that the specified node does not have a status of "candidate". It is recommended to have adapters in the same slots on the spare node and the active node for the 'swapnode' replace command to work.
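
For reference, the spare node replacement is attempted with a command of the following form (a sketch only; the node name shown is illustrative and the exact syntax should be verified against the CLI reference for this code level):

  swapnode -replace node1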

When the online spare node is put into the service state, it is immediately removed back to spare and rejoins the cluster as an online spare 5 minutes later. Instead of putting the online spare into service, the user can wait for the original node to come back, or simply remove the online spare and then perform the required maintenance.


Transparent Cloud Tiering

Transparent cloud tiering on the system is defined by configuration limitations and rules. Refer to the IBM Documentation maximum limits page for details.

The following restrictions apply for Transparent Cloud Tiering:

  1. When a cloud account is created, it must continue to use the same encryption type throughout the life of the data in that cloud account. Even if the cloud account object is removed and remade on the system, the encryption type for that cloud account cannot be changed while backup data for that system exists in the cloud provider.
  2. When performing rekey operations on a system with an encryption-enabled cloud account, perform the commit operation immediately after the prepare operation (see the example after this list). Remember to retain the previous system master key (on USB or on the key server), because this key can still be needed to retrieve your cloud backup data when performing a T4 recovery or an import.
  3. Avoid using the 'Restore_uid' option when a backup is imported to a new cluster.
  4. Import of TCT data is only supported from systems whose backup data was created at v7.8.0.1 or later.
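
For example, for a USB-based rekey the prepare and commit steps referred to in item 2 would be issued back to back, along the following lines (a sketch only; the key server variant uses '-keyserver' in place of '-usb'):

  chencryption -usb newkey -key prepare
  chencryption -usb newkey -key commit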

The following AWS regions are supported by this code-level:

  • us-east-1
  • us-west-1
  • us-west-2
  • ca-central-1
  • eu-west-1
  • eu-west-2
  • eu-west-3
  • eu-central-1
  • sa-east-1
  • ap-southeast-1
  • ap-southeast-2
  • ap-south-1
  • ap-northeast-1
  • ap-northeast-2

Snapshots and TCT
TCT cloud snapshots are supported with the system's new FlashCopy® management model based on snapshot function.
However, cloud snapshots cannot co-exist with the user-owned legacy FlashCopy mappings.
 
 

Encryption and TCT

There is a small possibility that, on a system that uses both Encryption and Transparent Cloud Tiering, the system can enter a state where an encryption rekey operation is stuck in 'prepared' or 'prepare_failed' state, and a cloud account is stuck in 'offline' state. The user is unable to cancel or commit the encryption rekey because the cloud account is offline. The user is unable to remove the cloud account because an encryption rekey is in progress.
The system can be recovered from this state by using a T4 Recovery procedure.
It is also possible that SAS-attached storage arrays go offline.

The following two scenarios identify where this might happen:

Scenario A

  1. Using USB encryption and Cloud.
  2. A new USB key is prepared by using 'chencryption -usb newkey -key prepare'.
  3. The new presumptive key is deleted from all USB sticks before the new key is committed.
  4. All nodes in the system are rebooted.
  5. The cloud account is offline as it cannot get the presumptive key. The cloud account cannot be removed, and the encryption rekey cannot be completed or cancelled. The system remains stuck in these cloud and encryption states.
  6. Any SAS-attached arrays are offline and locked.
  7. The system can be restored by T4 to a previous config backup.

Scenario B

  1. Using key server encryption and Cloud.
  2. A new key server key is prepared by using 'chencryption -keyserver newkey -key prepare'.
  3. The new presumptive key is deleted from the key server before the new key is committed.
  4. All nodes in the system are rebooted.
  5. The cloud account is offline as it cannot get the presumptive key. The cloud account cannot be removed, and the encryption rekey cannot be completed or cancelled. The system remains stuck in these cloud and encryption states.
  6. SAS-attached arrays are not affected.
  7. The system can be restored by T4 to a previous config backup.

NPIV (N_Port ID Virtualization)

The following recommendations and restrictions apply when implementing NPIV:

FCoE is not supported by NPIV.

Operating systems not currently supported for use with NPIV:

  • HPUX 11iV2
  • Veritas DMP multipathing on Windows with RAID-5 volumes in VxVM

Other Operating Systems

Other operating systems might also experience issues when the NPIV state is modified from "Transitional" to "Disabled", in which case the operating system-specific rescan command can be used.

Fabric Attachment

NPIV mode is only supported when used with Brocade or Cisco Fibre Channel SAN switches that are NPIV capable.


Node Memory

Nodes in an I/O group cannot be replaced by nodes with less memory when compressed volumes are present.

If a customer must migrate from 64 GB to 32 GB memory node canisters in an I/O group, they have to remove all compressed volume copies in that I/O group.

A customer must not perform the following sequence of actions:

  1. Create an I/O group with node canisters with 64 GB of memory.
  2. Create compressed volumes in that I/O group.
  3. Delete both node canisters from the system with CLI or GUI.
  4. Install new node canisters with 32 GB of memory and add them to the configuration in the original I/O group with CLI or GUI.

HyperSwap

Configure your host multipath driver to use an ALUA-based path policy.

Due to the requirement for multiple access I/O groups, SAS-attached host types are not supported with HyperSwap volumes.

A volume configured with multiple access I/O groups, on a system in the storage layer, cannot be virtualized by a system in the replication layer.  This restriction prevents a HyperSwap volume on one system being virtualized by another.

AIX Live Partition Mobility (LPM) 
AIX LPM is supported with the HyperSwap function on AIX 7.


Direct Attachment

IBM System Storage DS8000 series is not supported in direct-attached configurations.

SAN boot on Windows 2019 (Qlogic HBA) is not supported with 32 Gbps direct-attached systems.


Cisco Nexus

The minimum level of Cisco Nexus firmware supported for FCoE with the IBM 2145-SA2, 2145-SV2, 2145-SV3 is 5.2(1)N1(2a).


16 Gbps Fibre Channel Node Connection

Refer to the IBM System Storage Inter-operation Center (SSIC) for Fibre Channel configurations supported with 16 Gbps node hardware.

Note: 16 Gbps node hardware is supported when connected to Brocade and Cisco 8 Gbps or 16 Gbps fabrics only.

Direct connections to 2 Gbps or 4 Gbps SAN or direct host attachment to 2 Gbps or 4 Gbps ports is not supported.

Other configured switches that are not directly connected to the 16 Gbps Node hardware can be any supported fabric switch as currently listed in the SSIC.


25 Gbps Ethernet Canister Connection

Three optional 2-port 25 Gbps Ethernet adapters (four in the case of SV3 model nodes) are supported in each SAN Volume Controller node for iSCSI communication with iSCSI-capable Ethernet ports in hosts that connect through Ethernet switches. These 25 Gbps Ethernet adapters do not support FCoE.

There are two types of 25 Gbps Ethernet adapter feature supported:

  1. RDMA over Converged Ethernet (RoCE)
  2. Internet Wide-area RDMA Protocol (iWARP)

Either works for standard iSCSI communications, that is, not using Remote Direct Memory Access (RDMA).

When RDMA is used with a 25 Gbps Ethernet adapter, RDMA links work only between RoCE ports or between iWARP ports (that is, from a RoCE node canister port to a RoCE port on a host, or from an iWARP node canister port to an iWARP port on a host).

For Ethernet switches and adapters supported in hosts, visit the SSIC.



IP Partnership

On SV3 node hardware, IP partnerships are not supported on the 1 Gb Ethernet ports; those ports are for system management only. For other SVC node types, IP replication can be configured on any Ethernet port.

Using an Ethernet switch to convert a 25 Gb to a 1 Gb IP partnership, or a 10 Gb to a 1 Gb IP partnership is not supported. Therefore, the IP infrastructure on both partnership sites must match. Bandwidth limiting on IP partnerships between both sites is supported.
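
As an illustration, the available link bandwidth and background copy rate are specified when the IP partnership is created, for example (a sketch only; the remote cluster IP address and the values shown are placeholders, and parameter names such as '-linkbandwidthmbits' should be verified against the CLI reference):

  mkippartnership -type ipv4 -clusterip 192.168.10.20 -linkbandwidthmbits 1000 -backgroundcopyrate 50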


VMware vSphere Virtual Volumes (vVols)

The maximum number of virtual machines on a single VMware ESXi host in an SVC / vVol storage configuration is limited to 680.

The use of vVols on a system that is configured for HyperSwap is not currently supported.


Host Limitations

SAN BOOT function on AIX 7.2 TL5
SAN BOOT is not supported for AIX 7.2 TL5 when connected by using the NVMe/FC protocol.

RDM Volumes attached to guests in VMware 7.0
Using RDM (raw device mapping) volumes attached to any guests, with the RoCE iSER protocol, results in pathing issues or inability to boot the guest.

N2225/N2226 SAS HBA
VMware 6.7 (Guest O/S SLES12SP4) connected with SAS N2225/N2226 host adapters is not supported.

Lenovo 430-16e/8e SAS HBA
VMware 6.7 and 6.5 (Guest O/S SLES12SP4) connected with SAS Lenovo 430-16e/8e host adapters are not supported.
Windows 2019 and 2016 connected with SAS Lenovo 430-16e/8e host adapters are not supported.

Windows 2016 HyperV
RHEL v7.1 guests on Windows 2016 HyperV, with Virtual Fibre Channel, are not supported.

iSER

SV3 model nodes do not support iSER host attachment.

Operating systems not currently supported for use with iSER:

  • Windows 2012 R2 with Mellanox ConnectX-4 Lx EN adapters
  • Windows 2016 with Mellanox ConnectX-4 Lx EN adapters

Windows NTP server 
The Linux NTP client used by SAN Volume Controller might not always function correctly with the Windows W32Time NTP server.


Fabric Limitations

Only one FCF (Fibre Channel Forwarder) switch per fabric is supported.

Storage connected directly to a Cisco Fabric Extender (FEX) is not supported.


Priority Flow Control for iSCSI / iSER

Priority Flow Control for iSCSI / iSER is supported on Emulex and Chelsio adapters (SVC supported) with all DCBX-enabled switches.


Policy-based Replication
 
The following configuration limits and restrictions apply to policy-based replication:
  1. The name of a volume group cannot be changed while a replication policy is assigned.
  2. The name of a volume cannot be changed while the volume is in a volume group with a replication policy assigned.
  3. Ownership groups are not supported by policy-based replication.
  4. Policy-based replication is not supported on HyperSwap topology systems.
  5. Policy-based replication cannot be used with volumes that are:
  •    Image mode
  •    HyperSwap
  •    Part of a remote-copy relationship
  •    Configured to use Transparent Cloud Tiering (TCT)
  •    VMware vSphere Virtual Volumes (vVols)
 
The following actions cannot be performed on a volume while the volume is in a volume group with a replication policy assigned (see the sketch after this list):
 
  1. Resize (expand or shrink)
  2. Migrate to image mode, or add an image mode copy
  3. Move to a different I/O group
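
For example, to expand a volume in such a volume group, the replication policy would first be unassigned from the volume group and then reassigned afterward (a sketch only; the volume, volume group, and policy names are illustrative, and the '-replicationpolicy' / '-noreplicationpolicy' parameters should be verified against the CLI reference):

  chvolumegroup -noreplicationpolicy VolumeGroup0
  expandvdisksize -size 100 -unit gb vdisk0
  chvolumegroup -replicationpolicy policy0 VolumeGroup0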

Maximum Configurations

Configuration limits for SAN Volume Controller:
Property 
Hardware Type 
Maximum Number
Notes 
System (Cluster) Properties
Active nodes per system (cluster)
8
Arranged as four I/O groups
Spare Nodes per system 4
Nodes per fabric
64
Maximum number of Spectrum Virtualize nodes that can be present on the same Fibre Channel fabric, with visibility of each other
I/O groups / Control Enclosures per system (cluster)
4
Each containing two nodes
Fabrics per system
12
The number of counterpart SANs, which are supported
Inter-cluster partnerships per system
3
A system can be partnered with up to three remote systems. No more than four systems can be in the same connected set
IP Quorum devices per system
5
Data encryption keys per system
1,024
Key servers per system 4
Node Properties
Logins per node Fibre Channel WWPN
512
Includes logins from server HBAs, disk controller ports, node ports within the same system and node ports from remote systems
Fibre Channel buffer credits per port
16 Gbps FC adapter
4,095
The number of credits granted by the switch to the node
Portset objects per system 72 FC + Ethernet
IP address objects per system 2,048 Includes duplicated IP addresses
IP address objects per node 256
IP addresses per port 64
When a node fails over, Ethernet ports with the same ID will be configured with all the IP addresses of the partner. Hence there can be a maximum of 128 IP addresses configured per Ethernet port during failover.
For Emulex ports, there can be a maximum of 3 unique VLANs per port and a maximum of 32 IP addresses per port.
For Mellanox iSER connectivity, there can be a maximum of 31 VLANs per port and a maximum of 31 IP addresses per port with VLAN.
Routable IP addresses per port 1
IP addresses per node per portset
Host: 4
Remote Copy: 1
Storage: Number of Ethernet ports on the node
FC ports per portset 4
iSCSI sessions per node
1,024
A maximum of 256 can be backend sessions.
This limit includes both iSCSI Host Attach AND iSCSI Initiator sessions
FC Host objects per portset Same as the maximum number of hosts supported on that platform
iSER sessions per node
256
Model SA2/SV2 only
iSCSI + iSER sessions per node 1,088
Managed Disk Properties
Managed disks (MDisks) per system
4,096
The maximum number of logical units, which can be managed by a cluster.

Internal distributed arrays consume 16 logical units.

This number includes external MDisks that have not been configured into storage pools (managed disk groups)
Managed disks per storage pool (managed disk group)
128
Storage pools per system
1,024
Parent pools per system
128
Child pools per system
1,023
Managed disk extent size
8,192 MB
Capacity for an individual internal managed disk (array)
-
No limit is imposed beyond the maximum number of drives per array limits.
Maximum size depends on the extent size of the Storage Pool.
Comparison Table: Maximum Volume, MDisk, and System capacity for each extent size.
Capacity for an individual external managed disk
1 PB
Note: External managed disks larger than 2 TB are only supported for certain types of storage systems. Refer to the supported hardware matrix for further details.
Maximum size depends on the extent size of the Storage Pool.
Comparison Table: Maximum Volume, MDisk, and System capacity for each extent size.
Total storage capacity manageable per system
32 PB
Maximum requires an extent size of 8192 MB to be used

This limit represents the per system maximum of 2^22 extents.

Maximum size depends on the extent size of the Storage Pool.
Comparison Table: Maximum Volume, MDisk, and System capacity for each extent size.
Maximum Provisioning policies 32
Data Reduction Pool Properties
Data Reduction Pools per system
4
MDisks per Data Reduction Pool
128
Volume copies per Data Reduction Pool
15,864
Extents per I/O group per Data Reduction Pool
524,288 (512K)
Volume (Virtual Disk) Properties
Basic volumes per system
15,864
Each basic volume uses one VDisk, each with one copy.

If a Remote Copy partnership exists to a system that supports a lower number of volumes, the maximum number of volumes is reduced to the lower limit, or 8192 if that is greater.

For example, if one system has a limit of 15864 volumes and the other has a limit of 8192 volumes, both systems are limited to 8192 volumes.

Volume copies in host mappable volumes per system
15,864
Stretched volumes per system
7,932
Each stretched volume uses 1 VDisk, each with two copies.
HyperSwap volumes per system 2,000 Each HyperSwap volume uses 4 VDisks, each with one copy, 1 active-active remote copy relationship, and 4 FlashCopy mappings.
Volume copies per volume 2
Total mirrored volume capacity per I/O group 1 PB
Volumes per I/O group
(Volumes per caching I/O group)
- No limit is imposed here beyond the volumes per system limit.
Volume groups per system 1,024
Volumes per volume group 512
Volumes per storage pool - No limit is imposed beyond the volumes per system limit
Fully-allocated volume capacity 256 TB Maximum size for an individual fully allocated volume.

Maximum size depends on the extent size of the Storage Pool.
Comparison Table: Maximum Volume, MDisk, and System capacity for each extent size.
Thin-provisioned (space-efficient) per-volume capacity for volumes in regular and data reduction pools 256 TB Maximum size for an individual thin-provisioned volume

Maximum size depends on the extent size of the Storage Pool.
Comparison Table: Maximum Volume, MDisk, and System capacity for each extent size.
Host mappings per system 64,000 See also - volume mappings per host object
Host Properties
Host objects (IDs) per system
2,048
Host objects (IDs) per I/O group
2145-SA2
2145-SV2
512
2145-SV3 2,048
Volume mappings per host object
2,048
Although SVC allows the mapping of up to 2048 volumes per host object, not all hosts are capable of accessing or managing this number of volumes. The practical mapping limit is restricted by the host OS, not SVC.
Note: this limit does not apply to hosts of type adminlun (used to support VMware vVols).
Unique IP addresses per port 64
Host Cluster Properties
Host clusters per system
512
Hosts in a host cluster
128
Fibre Channel Host Properties
Fibre Channel hosts per system
2,048
Fibre Channel host ports per system
8,192
Fibre Channel hosts per I/O group
2145-SA2
2145-SV2
512
2145-SV3 2,048
Fibre Channel host ports per I/O group
2,048
NPIV Direct Attach Logins per Fibre Channel WWPN 128
Fibre Channel host ports per host object (ID)
32
iSCSI Host Properties
iSCSI hosts per system
2,048
iSCSI hosts per I/O group
512
iSCSI names per host object
4
iSCSI names per I/O group
512
iSER Host Properties
iSER hosts per system
2,048
Model SA2/SV2 only.
iSER hosts per I/O group
512
Model SA2/SV2 only.
iSER names per host object
4
Model SA2/SV2 only.
Adapter Hardware Properties
4-port 16 Gbps FC adapters per node / canister 3 Model SA2/SV2 only.
4-port 32 Gbps FC adapters per node / canister 3 Model SA2/SV2 only.
On board 1 Gbps Ethernet I/O ports per node / canister
2145-SA2
2145-SV2
1 (Technician port)
2145-SV3 3 (3 management ports, including 1 technician port)
On board 10 Gbps Ethernet I/O ports per node / canister 4 Model SA2/SV2 only.
2-port 25 Gbps iWARP adapters per node / canister 3 Model SA2/SV2 only
5 Model SV3 only
2-port 25 Gbps RoCE adapters per node / canister 3 Model SA2/SV2 only
5 Model SV3 only
NVMe over Fibre Channel Host Properties
FC-NVMe hosts per system
64
This limit is not policed by the Spectrum Virtualize software. Any configurations that exceed this limit might experience significant adverse performance impact.
FC-NVMe hosts per I/O group
16
This limit is not policed by the Spectrum Virtualize software. Any configurations that exceed this limit might experience significant adverse performance impact.
Fibre Channel Logins per FC-NVMe WWPN 16 This limit is the number of FC2 logins supported.
NVMe Qualified Names (NQNs) per host object (ID)
2
NVMe over RDMA hosts per system 768
NVMe over RDMA hosts per I/O group
2145-SA2
2145-SV2
512
2145-SV3 768
Primary RDMA connections per port 256
Copy Services Properties
Remote Copy (Metro Mirror and Global Mirror) relationships per system
10,000
This can be any mix of Metro Mirror and Global Mirror relationships.
Remote Copy migration relationships per system 256
Maximum round-trip latency for Metro Mirror, HyperSwap, and Migration relationships 3ms
Global Mirror cycling mode relationships (also known as GMCV) per system, with cycle times less than 300 seconds
256
Global Mirror cycling mode relationships (also known GMCV) per system, with cycle times of 300 seconds or more
2,500
Active-Active Relationships (HyperSwap) 2,000
Maximum round-trip latency for FC replication 80ms (250ms with certain zoning configurations)
Remote Copy relationships per consistency group for Metro Mirror, Global Mirror, and Active-Active (HyperSwap) relationships - No limit is imposed beyond the Remote Copy relationships per system limit.
Remote Copy relationships per consistency group, for Global Mirror cycling mode relationships (also known GMCV) 256
Remote Copy consistency groups per system
256
Total Metro Mirror, Global Mirror, and HyperSwap capacity per I/O group 2145-SA2 1 PB This limit is the total capacity for all master and auxiliary volumes in the I/O group.
2145-SV2
2145-SV3
2 PB
3-site Remote Copy (Metro Mirror) relationships per consistency group 256
3-site Remote Copy (Metro Mirror) consistency groups per system 16
3-site Remote Copy (Metro Mirror) relationships per system 2,500
3-site HyperSwap Remote Copy relationships per system 2,000
FlashCopy Properties
FlashCopy mappings per system
15,864
FlashCopy consistency groups per system 500
FlashCopy mappings per consistency group 512
FlashCopy targets per source
256
Snapshots per system 15,863
Snapshots per volume copy 15,863
Thin-Clone, Clone Volumes per system 15,862
Thin-Clone Volumes per Source Volume 15,862
Clone Volumes per Source Volume 15,862
Total FlashCopy Bitmap Allowance
2145-SA2
2145-SV2
2 GiB
2145-SV3 4 GiB
- of which legacy FlashCopy can have up to
2145-SA2
2145-SV2
2 GiB
2145-SV3 4 GiB
Total FlashCopy volume capacity per I/O group
2145-SA2
2145-SV2
4 PiB
2145-SV3 8 PiB
- of which legacy FlashCopy can have up to
2145-SA2
2145-SV2
4 PiB
2145-SV3 8 PiB
Safeguarded policies per system 32 Includes 3 predefined and 29 user-defined
Snapshot policies per system 32
Safeguarded volumes per system 15,864
Safeguarded volume groups per system 256
Safeguarded volumes per volume group 512
Replication Properties
Policy-based replication
Policy-based replication capacity per I/O group 2,048 TiB
Policy-based replication replicated volumes per system 7,932 Volumes per system / 2
Volume groups per system using policy-based replication 1,024 No limit beyond system limit - volume groups per system
Volumes per volume group using policy-based replication 512 No limit beyond system limit - volumes per volume group
Maximum round-trip latency for asynchronous policy-based replication that uses Fibre Channel partnerships 250ms
Maximum round-trip latency for asynchronous policy-based replication that uses IP partnerships 80ms
Maximum replication policies per system 32
Maximum I/O groups that use policy-based replication 2
IP Partnership Properties
Inter-cluster IP partnerships per system
3
A system can be partnered with up to three remote systems.
Inter-site links per IP partnership
2
A maximum of two inter-site links can be used between two IP partnership sites.
Ports per node
1
A maximum of one port per node can be used for IP partnership.
IP Partnership Software Compression Limit
140 MB/s
External Storage System Properties
Storage system WWNNs per system (cluster)
1,024
Storage system WWPNs per system (cluster)
1,024
WWNNs per storage system
16
WWPNs per WWNN
16
LUNs (managed disks) per storage system
-
No limit is imposed beyond the managed disks per system (cluster) limit
System and User Management Properties
User accounts per system
400
Includes the default user accounts
User groups per system
256
Includes the default user groups
Authentication services per system
1
DNS servers per system 2
NTP servers per system
1
iSNS servers per system
1
Concurrent OpenSSH sessions per system
32
Two Person Integrity Requests per system 4
Patches per cluster 7
Patches per node (Includes Installed, Obsolete, and Error) 30
Event notification Properties
SNMP servers per system
6
Syslog servers per system
6
Email (SMTP) servers per system
6
Email servers are used in turn until the email is successfully sent
Email users (recipients) per system
12
LDAP servers per system
6
REST API Properties
Threads per session
64
HTTP header size
16 KB
 

Extents

The following table compares the maximum volume, MDisk, and system capacity for each extent size.

Columns (left to right):
  1. Extent size (MB)
  2. Maximum non thin-provisioned volume capacity in GB
  3. Maximum thin-provisioned volume capacity in GB (for regular pools)
  4. Maximum compressed volume size (for regular pools) **
  5. Maximum thin-provisioned and compressed volume size in data reduction pools in GB
  6. Maximum total thin-provisioned and compressed capacity for all volumes in a single data reduction pool per I/O group in GB
  7. Maximum MDisk capacity in GB
  8. Maximum DRAID MDisk capacity in TB
  9. Total storage capacity manageable per system *

16 | 2,048 (2 TB) | 2,000 | 2 TB | 2,048 (2 TB) | 8,192 (8 TB) | 2,048 (2 TB) | 32 | 64 TB
32 | 4,096 (4 TB) | 4,000 | 4 TB | 4,096 (4 TB) | 16,384 (16 TB) | 4,096 (4 TB) | 64 | 128 TB
64 | 8,192 (8 TB) | 8,000 | 8 TB | 8,192 (8 TB) | 32,768 (32 TB) | 8,192 (8 TB) | 128 | 256 TB
128 | 16,384 (16 TB) | 16,000 | 16 TB | 16,384 (16 TB) | 65,536 (64 TB) | 16,384 (16 TB) | 256 | 512 TB
256 | 32,768 (32 TB) | 32,000 | 32 TB | 32,768 (32 TB) | 131,072 (128 TB) | 32,768 (32 TB) | 512 | 1 PB
512 | 65,536 (64 TB) | 65,000 | 64 TB | 65,536 (64 TB) | 262,144 (256 TB) | 65,536 (64 TB) | 1,024 (1 PB) | 2 PB
1,024 | 131,072 (128 TB) | 130,000 | 96 TB ** | 131,072 (128 TB) | 524,288 (512 TB) | 131,072 (128 TB) | 2,048 (2 PB) | 4 PB
2,048 | 262,144 (256 TB) | 260,000 | 96 TB ** | 262,144 (256 TB) | 1,048,576 (1 PB) | 262,144 (256 TB) | 4,096 (4 PB) | 8 PB
4,096 | 262,144 (256 TB) | 260,000 | 96 TB ** | 262,144 (256 TB) | 2,097,152 (2 PB) | 524,288 (512 TB) | 8,192 (8 PB) | 16 PB
8,192 | 262,144 (256 TB) | 260,000 | 96 TB ** | 262,144 (256 TB) | 4,194,304 (4 PB) | 1,048,576 (1 PB) | 16,384 (16 PB) | 32 PB


* The total capacity values assume that all of the storage pools in the system use the same extent size. 
** Refer to the following Flash
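
As a worked example of the final column: the system addresses at most 2^22 (4,194,304) extents, so with the largest extent size of 8,192 MB the total manageable capacity is 4,194,304 x 8,192 MB = 32 PB, matching the last row of the table.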

[{"Type":"MASTER","Line of Business":{"code":"LOB26","label":"Storage"},"Business Unit":{"code":"BU058","label":"IBM Infrastructure w\/TPS"},"Product":{"code":"STPVGU","label":"SAN Volume Controller"},"ARM Category":[{"code":"a8m0z000000bqPqAAI","label":"Documentation"}],"ARM Case Number":"","Platform":[{"code":"PF025","label":"Platform Independent"}],"Version":"All Versions"}]

Document Information

Modified date:
11 April 2023

UID

ibm16854955