Preventive Service Planning
Abstract
This document lists the configuration limits and restrictions specific to IBM Storwize V5000E and V5100 software version 8.5.0.x
Content
The use of WAN optimization devices such as Riverbed is not supported in native Ethernet IP partnership configurations containing Storwize V5000.
Safeguarded Copy
Requires the purchase of the additional FlashCopy license.
The following restrictions apply for Safeguarded Copy:
- Mirrored volumes cannot be safeguarded. Stretched cluster is not supported
- Mirroring of existing safeguarded source volumes is supported for migration purposes only
- HyperSwap volumes are supported. However, recovery requires that they be converted to regular volumes before use
- Pre-defined schedules are designed to avoid running out of FlashCopy maps in a single graph and to keep within the supported volume count. It is possible to create policies (by using the CLI only) that can potentially breach those limits, so caution must be exercised (see the sketch after this list)
- The GUI does not support creating user-defined policies, but it can display any that were created by using the CLI
- The source volume cannot be in an ownership group
- The source volume cannot be used with Transparent Cloud Tiering (TCT).
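As an illustration of the schedule limits noted above, the following minimal Python sketch estimates how many backups a user-defined policy would retain at any one time, assuming each retained backup corresponds to one FlashCopy mapping in the source volume's dependency graph (the "FlashCopy relationships per graph" limit of 256 in the table below). The helper names and the per-backup assumption are illustrative only.

```python
# Minimal sketch: estimate whether a user-defined Safeguarded policy would
# breach the 256 FlashCopy relationships per graph limit from the limits table.
# Assumption (not stated in this document): each retained backup holds one
# FlashCopy mapping in the source volume's dependency graph.

FLASHCOPY_RELATIONSHIPS_PER_GRAPH = 256  # "FlashCopy relationships per graph (backups per source)"

def retained_backups(backup_interval_hours: float, retention_days: float) -> int:
    """Approximate number of backups retained at any one time."""
    return int((retention_days * 24) / backup_interval_hours)

def policy_within_limits(backup_interval_hours: float, retention_days: float) -> bool:
    return retained_backups(backup_interval_hours, retention_days) <= FLASHCOPY_RELATIONSHIPS_PER_GRAPH

if __name__ == "__main__":
    # A 1-hour interval kept for 30 days retains about 720 backups -> breaches the limit.
    print(retained_backups(1, 30), policy_within_limits(1, 30))   # 720 False
    # A 6-hour interval kept for 30 days retains about 120 backups -> within the limit.
    print(retained_backups(6, 30), policy_within_limits(6, 30))   # 120 True
```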
Volume Mobility
The following restrictions apply for Volume Mobility (nondisruptive volume move between systems):
- No 3-site support
- Not intended to be a DR or HA solution
- No support for consistency groups, change volumes, or expanding volumes
- Reduced host interoperability support. Only the following host operating systems are supported:
- RHEL
- SLES
- ESXi
- Solaris
- HP-UX.
- SCSI only (Fibre Channel and iSCSI are supported); NVMe is not supported
- No SCSI persistent reservations or Offloaded Data Transfer (ODX).
Data Reduction Pools
- VMware vSphere Virtual Volumes (vVols) are not supported in a DRP
- A volume in a DRP cannot be shrunk
- No volume move between I/O groups if volume in a DRP (use FlashCopy or Metro Mirror / Global Mirror instead)
- No split of a volume mirror to copy in a different I/O group
- Real/used/free/tier capacity is not reported per volume, only per pool.
Traditional RAID
V5010E, V5030E, and V5100 systems do not support either RAID-5 or RAID-6 traditional RAID arrays.
DRAID Strip Size
For candidate drives with a capacity greater than 4 TB, a strip size of 128 cannot be specified for RAID-5 or RAID-6 DRAID arrays; these drives must use a strip size of 256.
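The following minimal Python sketch expresses the strip-size rule above; the helper name and parameters are illustrative and not part of the product CLI.

```python
# Minimal sketch of the DRAID strip-size rule: drives larger than 4 TB in a
# RAID-5/RAID-6 distributed array must use a strip size of 256.

def valid_draid_strip_size(drive_capacity_tb: float, strip_size: int) -> bool:
    """Return True if the strip size is allowed for a RAID-5/RAID-6 DRAID array."""
    if drive_capacity_tb > 4:
        return strip_size == 256       # drives larger than 4 TB must use 256
    return strip_size in (128, 256)

print(valid_draid_strip_size(8, 128))  # False - 8 TB drives cannot use 128
print(valid_draid_strip_size(8, 256))  # True
print(valid_draid_strip_size(4, 128))  # True - the rule applies only above 4 TB
```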
Non-Disruptive Volume Move (NDVM)
The following Fibre Channel attached host types are supported for nondisruptively moving a volume between I/O groups (control enclosures):
| Host Operating System | Host Multipathing | Host Clustering | Notes |
|---|---|---|---|
| AIX 7.2 | AIXPCM | Nondisruptive volume move can result in the same volume being mapped to different hosts in the same host cluster by using different SCSI IDs. If the host cluster cannot tolerate this configuration, nondisruptive volume move cannot be used. | SAN boot is supported. NPIV is supported. |
| Microsoft Windows 2019 | MSDSM | Hyper-V Failover Cluster | SAN boot is supported |
| Microsoft Windows 2016 | MSDSM | Hyper-V Failover Cluster | SAN boot is supported |
| Red Hat 8 | Native | | The original paths might need to be manually removed on the host before removing access to the old I/O group |
| SLES 15 | Native | | The original paths might need to be manually removed on the host before removing access to the old I/O group |
| VMware 6.7 | Native | | VAAI is supported |
| VMware 6.5 | Native | | VAAI is supported |
| Solaris 11.3 SPARC | MPXIO | | SAN boot is supported |
Note: For all other host types, I/O needs to be quiesced before moving a volume.
When moving a volume that is mapped to a host cluster, rescan disk paths on all host cluster nodes to ensure that the new paths are detected before removing access from the original I/O group.
Clustered Systems
A system requires native Fibre Channel SAN or, alternatively, 8 Gbps/16 Gbps direct-attach Fibre Channel connectivity for communication between all nodes in the local cluster. Clustering can also be accomplished with 25 Gbps Ethernet for standard topologies; this is supported on Storwize V5100 systems only.
Partnerships between systems for Metro Mirror or Global Mirror replication can be used with both Fibre Channel and native Ethernet connectivity. Distances greater than 300 meters are supported by using an FCIP link or Fibre Channel between source and target.
All systems within a cluster must be using the same version of Storwize software.
NPIV (N_Port ID Virtualization)
The following recommendations and restrictions apply when implementing NPIV:
FCoE is not supported with NPIV.
Operating systems not currently supported for use with NPIV:
- HPUX 11iV2
- Veritas DMP multipathing on Windows with RAID-5 volumes in VxVM
Other Operating Systems
Other operating systems might also experience the same issue when the NPIV state is changed from "Transitional" to "Disabled"; in that case, the operating-system-specific rescan command can be used.
Fabric Attachment
NPIV mode is only supported when used with Brocade or Cisco Fibre Channel SAN switches that are NPIV capable.
HyperSwap
Configure your host multipath driver to use an ALUA-based path policy.
Due to the requirement for multiple access I/O groups, SAS-attached host types are not supported with HyperSwap volumes.
A volume configured with multiple access I/O groups, on a system in the storage layer, cannot be virtualized by a system in the replication layer. This restriction prevents a HyperSwap volume on one system from being virtualized by another.
AIX Live Partition Mobility (LPM)
AIX LPM is supported with the HyperSwap function on AIX 7.x.
Direct Attachment
IBM System Storage DS8000 series is not supported for direct attachment to these systems.
SAN boot on Windows 2019 (QLogic HBA) is not supported with 32 Gbps direct-attached configurations.
16 Gbps Fibre Channel Node Connection
Refer to the IBM System Storage Interoperation Center (SSIC) for the 16 Gbps Fibre Channel configurations supported with 16 Gbps node hardware.
Note: 16 Gbps node hardware is supported only when connected to Brocade or Cisco 8 Gbps or 16 Gbps fabrics.
Direct connection to 2 Gbps or 4 Gbps SANs, and direct host attachment to 2 Gbps or 4 Gbps ports, are not supported.
Other switches that are not directly connected to the 16 Gbps node hardware can be any supported fabric switch as currently listed in the SSIC.
25 Gbps Ethernet Canister Connection
One optional 2-port 25 Gbps Ethernet adapter is supported in each node canister for iSCSI communication, through Ethernet switches, with iSCSI-capable Ethernet ports in hosts. These 2-port 25 Gbps Ethernet adapters do not support FCoE.
Two types of 25 Gbps Ethernet adapter feature are supported:
1) RDMA over Converged Ethernet (RoCE)
2) Internet Wide-area RDMA Protocol (iWARP)
Either adapter type works for standard iSCSI communication with hosts, that is, without using Remote Direct Memory Access (RDMA).
When RDMA is used with a 25 Gbps Ethernet adapter, RDMA links work only between ports of the same type, that is, from a RoCE node canister port to a RoCE port on a host, or from an iWARP node canister port to an iWARP port on a host.
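The following minimal Python sketch expresses the port-type matching rule above; the function name and port-type strings are illustrative only.

```python
# Minimal sketch of the RDMA link-compatibility rule: RDMA links work only
# between ports of the same type (RoCE<->RoCE or iWARP<->iWARP).

def rdma_link_possible(node_port_type: str, host_port_type: str) -> bool:
    return node_port_type == host_port_type and node_port_type in ("RoCE", "iWARP")

print(rdma_link_possible("RoCE", "RoCE"))   # True
print(rdma_link_possible("RoCE", "iWARP"))  # False - mixed types can still use standard iSCSI without RDMA
```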
For Ethernet switches and adapters supported in hosts, see the SSIC.
IP Partnership
IP partnerships are supported on any of the available Ethernet ports. Using an Ethernet switch to convert a 25 Gb to a 1 Gb IP partnership, or a 10 Gb to a 1 Gb IP partnership, is not supported; the IP infrastructure on both partnership sites must match. Bandwidth limiting on IP partnerships between both sites is supported.
VMware vSphere Virtual Volumes (vVols)
The maximum number of virtual machines on a single VMware ESXi host in a Storwize / vVol storage configuration is limited to 680.
The use of VMware vSphere Virtual Volumes (vVols) on a system that is configured for HyperSwap is not currently supported by SVC / Storwize.
Host Limitations
SAN Boot function on AIX 7.2 TL5
SAN boot is not supported for AIX 7.2 TL5 when connected by using the NVMe over Fibre Channel (FC-NVMe) protocol.
RDM Volumes attached to guests in VMware 7.0
Using RDM (raw device mapping) volumes attached to guests, with the RoCE iSER protocol, results in pathing issues or an inability to boot the guest.
N2225/N2226 SAS HBA
VMware 6.7 (guest OS SLES12SP4) connected by SAS N2225/N2226 host adapters is not supported.
Lenovo 430-16e/8e SAS HBA
VMware 6.7 and 6.5 (Guest O/S SLES12SP4) connected by SAS Lenovo 430-16e/8e host adapters are not supported.
Windows 2019 and 2016 connected by SAS Lenovo 430-16e/8e host adapters are not supported.
Windows 2016 HyperV
RHEL v7.1 guests on Windows 2016 HyperV, with Virtual Fibre Channel, are not supported.
iSER
Operating systems not currently supported for use with iSER:
- Windows 2012 R2 with Mellanox ConnectX-4 Lx EN
- Windows 2016 with Mellanox ConnectX-4 Lx EN
Windows NTP server
The Linux NTP client used by SAN Volume Controller might not always function correctly with the Windows W32Time NTP server.
Microsoft Offload Data Transfer (ODX)
Version 8.5.0 does not support ODX. Systems that use ODX cannot be upgraded to version 8.5.0.
Fabric Limitation
Only one FCF (Fibre Channel Forwarder) switch per fabric is supported.
Storage connected directly to a Cisco Fabric Extender (FEX) is not supported.
Priority Flow Control for iSCSI / iSER
Priority Flow Control for iSCSI/iSER is supported on Emulex and Chelsio adapters (SVC-supported) with all DCBX-enabled switches.
SCSI LUN ID 0
For SAS hosts running Linux or VMware operating systems, removal of LUNs mapped to SCSI ID 0 is not supported and might result in a loss of access to the remaining LUNs.
Maximum Configurations
Configuration limits for Storwize V5000E and V5100:
| Property | Hardware Type | Maximum Number | Comments |
|---|---|---|---|
| **System (Cluster) Properties** | | | |
| Control enclosures per system (cluster) | V5010E | 1 | Each control enclosure contains two node canisters |
| | V5030E/V5100 | 2 | |
| Nodes per system | V5010E | 2 | |
| | V5030E/V5100 | 4 | Arranged as two I/O groups |
| Nodes per fabric | | 64 | Maximum number of SVC and V5000 nodes that can be present on the same Fibre Channel fabric, with visibility of each other |
| Fabrics per system | | 6 | The number of counterpart Fibre Channel SANs that are supported: up to 4 fabrics using native Fibre Channel ports and up to 2 fabrics using FCoE ports |
| Inter-cluster partnerships per system | | 3 | A system can be partnered with up to three remote systems. No more than four systems can be in the same connected set. |
| IP Quorum devices per system | | 5 | |
| **Node Properties** | | | |
| Logins per node Fibre Channel WWPN | | 512 | Includes logins from server HBAs, disk controller ports, node ports within the same system, and node ports from remote systems |
| Fibre Channel buffer credits per port | 8 Gbps FC adapter | 255 | The number of credits granted by the switch to the node |
| | 16 Gbps FC adapter | 4095 | |
| Portset objects per system | | 72 | FC + Ethernet |
| IP address objects per system | V5010E | 512 | Includes duplicated IP addresses |
| | V5030E/V5100 | 1024 | |
| IP address objects per node | | 256 | |
| IP addresses per port | | 64 | When node failover occurs, Ethernet ports that have the same ID are configured with the IP addresses of the partner; hence a maximum of 128 IP addresses can be configured per Ethernet port during failover. For Emulex ports, there can be a maximum of 3 unique VLANs per port and a maximum of 32 IP addresses per port. For Mellanox iSER connectivity, there can be a maximum of 31 VLANs per port and a maximum of 31 IP addresses per port with VLAN. |
| Routable IP addresses per port | | 1 | |
| iSCSI sessions per node | | 1024 | 2048 in IP failover mode (when the partner node is unavailable). This limit includes both iSCSI Host Attach and iSCSI Initiator sessions |
| **Managed Disk Properties** | | | |
| Managed disks (MDisks) per system | | 4096 | The maximum number of logical units that can be managed by a system, including internal arrays. Internal distributed arrays consume 16 logical units. This number also includes external MDisks that are configured into storage pools (managed disk groups) |
| Managed disks per storage pool (managed disk group) | | 128 | |
| Storage pools per system | | 1024 | |
| Parent pools per system | | 128 | |
| Child pools per system | | 1023 | |
| Managed disk extent size | | 8192 MB | |
| Capacity for an individual internal managed disk (array) | | - | No limit is imposed beyond the maximum number of drives per array. Maximum size depends on the extent size of the storage pool; see the Extents comparison table below. |
| Capacity for an individual external managed disk | | 1 PB | Note: External managed disks larger than 2 TB are only supported for certain types of storage systems. Refer to the supported hardware matrix for further details. Maximum size depends on the extent size of the storage pool; see the Extents comparison table below. |
| Total storage capacity manageable per system | | 32 PB | Maximum requires an extent size of 8192 MB. This limit represents the per-system maximum of 2^22 extents; see the Extents comparison table below. |
| **Data Reduction Pool Properties** | | | |
| Data Reduction Pools per system | | 4 | |
| MDisks per Data Reduction Pool | | 128 | |
| Volume copies per data reduction pool | V5030E/V5100 | 8192 - (number of Data Reduction Pools × 12) | |
| | V5010E | 4096 - (number of Data Reduction Pools × 12) | |
| Extents per I/O group per Data Reduction Pool | | 524288 | |
| **Volume (Virtual Disk) Properties** | | | |
| Basic volumes (VDisks) per system | V5030E/V5100 | 8192 | Each basic volume uses 1 VDisk, each with one copy. Maximum requires a system containing two control enclosures; refer to the volumes per I/O group limit |
| | V5010E | 2048 | Each basic volume uses 1 VDisk, each with one copy. |
| HyperSwap volumes per system | V5030E | 1250 | Each HyperSwap volume uses 4 VDisks, each with one copy, 1 active-active remote copy relationship, and 4 FlashCopy mappings. |
| Volumes per I/O group (volumes per caching I/O group) | V5030E/V5100 | 8192 | |
| | V5010E | 2048 | |
| Volumes accessible per I/O group | V5030E/V5100 | 8192 | |
| | V5010E | 2048 | |
| Thin-provisioned (space-efficient) volume copies in regular pools per system | | - | No limit is imposed here beyond the volume copies per system limit. |
| Compressed volume copies in data reduction pools per system | V5030E/V5100 | - | No limit is imposed here beyond the volume copy limit per data reduction pool. V5030E systems require 32 GB to support compressed volumes |
| Compressed volume copies in data reduction pools per I/O group | V5030E/V5100 | - | No limit is imposed here beyond the volume copy limit per data reduction pool. V5030E systems require 32 GB to support compressed volumes |
| Deduplicated volume copies in data reduction pools per system | V5030E/V5100 | - | No limit is imposed here beyond the volume copy limit per data reduction pool. V5030E systems require 32 GB to support deduplicated volumes |
| Deduplicated volume copies in data reduction pools per I/O group | V5030E/V5100 | - | No limit is imposed here beyond the volume copy limit per data reduction pool. V5030E systems require 32 GB to support deduplicated volumes |
| Volumes per storage pool | | - | No limit is imposed beyond the volumes per system limit |
| Fully allocated volume capacity | | 256 TB | Maximum size for an individual fully allocated volume. Maximum size depends on the extent size of the storage pool; see the Extents comparison table below. |
| Thin-provisioned (space-efficient) per-volume capacity for volume copies in regular and data reduction pools | | 256 TB | Maximum size for an individual thin-provisioned volume. Maximum size depends on the extent size of the storage pool; see the Extents comparison table below. |
| Maximum HyperSwap volume capacity in a single I/O group using RAID | | 850 TiB | This limit depends on the bitmap allocation for mirroring and replication in each I/O group. See IBM Documentation for details. |
| Host mappings per system | | 20,000 | See also: volume mappings per host object |
| **Mirrored Volume (Virtual Disk) Properties** | | | |
| Copies per volume | | 2 | |
| Volume copies per system | V5030E/V5100 | 8192 | |
| | V5010E | 4096 | |
| Total mirrored volume capacity per I/O group | | 1 PB | |
| **Host Properties** | | | |
| Host objects (IDs) per system | | 512 | A host object can contain both Fibre Channel ports and iSCSI names |
| Host objects (IDs) per I/O group | | 256 | Refer to the additional Fibre Channel and iSCSI host limits |
| Volume mappings per host object | | 512 | |
| **Host Cluster Properties** | | | |
| Host clusters per system | | 512 | |
| Hosts in a host cluster | | 128 | |
| **Fibre Channel Host Properties (including hosts attached using FCoE)** | | | |
| Fibre Channel hosts per system | | 512 | |
| Fibre Channel host ports per system | | 4096 | |
| Fibre Channel hosts per I/O group | | 256 | |
| Fibre Channel host ports per I/O group | | 2048 | |
| NPIV Direct Attach logins per Fibre Channel WWPN | | 16 | |
| Fibre Channel host ports per host object (ID) | | 32 | |
| **iSCSI Host Properties** | | | |
| iSCSI hosts per system | | 1024 | |
| iSCSI hosts per I/O group | | 256 | |
| iSCSI names per host object (ID) | | 4 | |
| iSCSI names per I/O group | | 512 | |
| **NVMe over Fibre Channel Host Properties** | | | |
| FC-NVMe hosts per system | V5100 | 64 | This limit is not policed by the Spectrum Virtualize software. Configurations that exceed this limit can experience significant adverse performance impact. |
| FC-NVMe hosts per I/O group | V5100 | 16 | This limit is not policed by the Spectrum Virtualize software. Configurations that exceed this limit can experience significant adverse performance impact. |
| Fibre Channel logins per FC-NVMe WWPN | V5100 | 16 | This limit is the number of FC2 logins supported. |
| NVMe Qualified Names (NQNs) per host object (ID) | V5100 | 2 | |
| NVMe over RDMA hosts per system | V5100 | 256 | |
| NVMe over RDMA hosts per I/O group | V5100 | 256 | |
| Primary RDMA connections per port | V5100 | 256 | |
| **Copy Services Properties** | | | |
| Remote Copy (Metro Mirror and Global Mirror) relationships per system | | 4096 | A mix of Metro Mirror and Global Mirror relationships is allowed |
| Active-active relationships | | 1250 | Limit for the number of HyperSwap volumes in a system |
| Remote Copy relationships per consistency group | | - | No limit is imposed beyond the Remote Copy relationships per system limit. Refer to the "Changes to support for Global Mirror with Change Volumes" page for GMCV performance considerations and best practice. |
| Remote Copy consistency groups per system | | 256 | |
| Total Metro Mirror, Global Mirror, and HyperSwap capacity per I/O group | | 1 PiB | This limit is the total capacity for all master and auxiliary volumes in the I/O group. |
| Total Global Mirror with Change Volumes relationships per system | | 256 | 60 s cycle time (change volumes used for active-active relationships do not count toward this limit). |
| | | 256 | 300 s cycle time (change volumes used for active-active relationships do not count toward this limit). |
| FlashCopy mappings per system | | 8192 | |
| FlashCopy targets per source | | 256 | |
| FlashCopy mappings per consistency group | | 512 | |
| FlashCopy consistency groups per system | | 500 | |
| Total FlashCopy volume capacity per I/O group | | 4 PB | |
| FlashCopy relationships per graph (backups per source) | | 256 | |
| 3-site Remote Copy (Metro Mirror) relationships per consistency group | | 256 | |
| 3-site Remote Copy (Metro Mirror) consistency groups per system | | 16 | |
| 3-site Remote Copy (Metro Mirror) relationships per system | | 1024 | |
| Safeguarded volumes per system | V5030E/V5100 | 8192 | |
| | V5010E | 2048 | |
| Safeguarded volume groups per system | | 256 | |
| Safeguarded volumes per volume group | | 512 | |
| Safeguarded policies per system | | 32 | Includes 3 predefined and 29 user-defined policies |
| **IP Partnership Properties** | | | |
| Inter-cluster IP partnerships per system | | 3 | A system can be partnered with up to three remote systems. |
| Inter-site links per IP partnership | | 2 | A maximum of two inter-site links can be used between two IP partnership sites. |
| Ports per node | | 1 | A maximum of one port per node can be used for IP partnership. |
| IP partnership software compression limit | V5030E | 140 MB/s | |
| **Internal Storage Properties** | | | |
| SAS chains per control enclosure | V5010E | 1 | |
| | V5030E | 2 | |
| Enclosures per SAS chain | V5010E | 10 | |
| | V5030E | 10 | |
| Expansion enclosures per control enclosure | V5010E | 10 | |
| | V5030E | 20 | |
| Drives per I/O group | V5010E | 392 | |
| | V5030E | 504 | |
| | V5100 | 760 | |
| Drives per system | V5010E | 392 | |
| | V5030E | 1008 | Maximum requires a system containing two control enclosures, each with the maximum number of expansion enclosures |
| | V5100 | 1520 | |
| SCM drives per I/O group | | 12 | |
| **Non-Distributed RAID Array Properties** | | | |
| Arrays per system | | 128 | |
| Drives per array | | 16 | |
| Min-max member drives per RAID-0 array | | 1-8 | |
| Min-max member drives per RAID-1 array | | 2-2 | |
| Min-max member drives per RAID-5 array | | 3-16 | |
| Min-max member drives per RAID-6 array | | 5-16 | |
| Min-max member drives per RAID-10 array | | 2-16 | |
| Hot spare drives | | - | No limit is imposed |
| **Distributed RAID Array Properties** | | | |
| Arrays per system | V5030E/V5100 | 20 | The presence of non-DRAID arrays reduces this limit |
| Encrypted arrays per system | V5030E/V5100 | 20 | The presence of non-DRAID arrays reduces this limit |
| Arrays per I/O group | | 10 | The presence of non-DRAID arrays reduces this limit |
| Drives per array | | 128 | |
| Min-max member drives per RAID-5 array | | 4-128 | |
| Min-max member drives per RAID-6 array | | 6-128 | |
| Rebuild areas per array | | 1-4 | |
| Min-max stripe width for RAID-5 array | | 3-16 | |
| Min-max stripe width for RAID-6 array | | 5-16 | |
| Max drive capacity for RAID-5 array | | 8 TB | |
| Drives added to an array in a single DRAID expansion | | 12 | |
| Concurrent DRAID expansions per system | | 4 | |
| Concurrent DRAID expansions per parent storage pool | | 1 | |
| **External Storage System Properties** | | | |
| Storage system WWNNs per system (cluster) | | 1024 | |
| Storage system WWPNs per system (cluster) | | 1024 | |
| WWNNs per storage system | | 16 | |
| WWPNs per WWNN | | 16 | |
| LUNs (managed disks) per storage system | | - | No limit is imposed beyond the managed disks per system limit |
| **System and User Management Properties** | | | |
| User accounts per system | V5100 | 400 | Includes the default user accounts |
| | V5010E/V5030E | 200 | |
| User groups per system | | 256 | Includes the default user groups |
| Authentication servers per system | | 1 | |
| NTP servers per system | | 1 | |
| iSNS servers per system | | 1 | |
| Concurrent OpenSSH sessions per system | | 32 | |
| **Event Notification Properties** | | | |
| SNMP servers per system | | 6 | |
| Syslog servers per system | | 6 | |
| Email (SMTP) servers per system | | 6 | Email servers are used in turn until the email is successfully sent |
| Email users (recipients) per system | | 12 | |
| LDAP servers per system | | 6 | |
| **REST API Properties** | | | |
| Maximum active connections per cluster | | 4 | RESTful API |
| Maximum requests/sec to auth endpoint | | 3 | RESTful API |
| Maximum requests/sec to command endpoints | | 10 | RESTful API |
| Number of simultaneous CLIs in progress | | 1 | System |
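As a worked example of the "Volume copies per data reduction pool" formula in the table above, the following minimal Python sketch evaluates it; the suggestion that the per-pool subtraction of 12 reflects copies reserved for the pool's internal volumes is an assumption, not stated in this document.

```python
# Worked example of the "Volume copies per data reduction pool" rows above.
# base is 8192 for V5030E/V5100 and 4096 for V5010E; the formula subtracts 12
# per Data Reduction Pool (assumed here to be copies reserved for internal use).

def drp_volume_copy_limit(base: int, num_data_reduction_pools: int) -> int:
    return base - num_data_reduction_pools * 12

print(drp_volume_copy_limit(8192, 4))  # V5030E/V5100 with the maximum 4 DRPs -> 8144
print(drp_volume_copy_limit(4096, 1))  # V5010E with a single DRP -> 4084
```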
Extents
The following table compares the maximum volume, MDisk, and system capacity for each extent size.
| Extent size (MB) | Maximum non-thin-provisioned volume capacity (GB) | Maximum thin-provisioned volume capacity in regular pools (GB) | Maximum compressed volume size in regular pools ** | Maximum thin-provisioned and compressed volume size in data reduction pools (GB) | Maximum total thin-provisioned and compressed capacity for all volumes in a single data reduction pool per I/O group (GB) | Maximum MDisk capacity (GB) | Maximum DRAID MDisk capacity (TB) | Total storage capacity manageable per system * |
|---|---|---|---|---|---|---|---|---|
| 16 | 2,048 (2 TB) | 2,000 | 2 TB | 2,048 (2 TB) | 8,192 (8 TB) | 2,048 (2 TB) | 32 | 64 TB |
| 32 | 4,096 (4 TB) | 4,000 | 4 TB | 4,096 (4 TB) | 16,384 (16 TB) | 4,096 (4 TB) | 64 | 128 TB |
| 64 | 8,192 (8 TB) | 8,000 | 8 TB | 8,192 (8 TB) | 32,768 (32 TB) | 8,192 (8 TB) | 128 | 256 TB |
| 128 | 16,384 (16 TB) | 16,000 | 16 TB | 16,384 (16 TB) | 65,536 (64 TB) | 16,384 (16 TB) | 256 | 512 TB |
| 256 | 32,768 (32 TB) | 32,000 | 32 TB | 32,768 (32 TB) | 131,072 (128 TB) | 32,768 (32 TB) | 512 | 1 PB |
| 512 | 65,536 (64 TB) | 65,000 | 64 TB | 65,536 (64 TB) | 262,144 (256 TB) | 65,536 (64 TB) | 1,024 (1 PB) | 2 PB |
| 1,024 | 131,072 (128 TB) | 130,000 | 96 TB ** | 131,072 (128 TB) | 524,288 (512 TB) | 131,072 (128 TB) | 2,048 (2 PB) | 4 PB |
| 2,048 | 262,144 (256 TB) | 260,000 | 96 TB ** | 262,144 (256 TB) | 1,048,576 (1 PB) | 262,144 (256 TB) | 4,096 (4 PB) | 8 PB |
| 4,096 | 262,144 (256 TB) | 260,000 | 96 TB ** | 262,144 (256 TB) | 2,097,152 (2 PB) | 524,288 (512 TB) | 8,192 (8 PB) | 16 PB |
| 8,192 | 262,144 (256 TB) | 260,000 | 96 TB ** | 262,144 (256 TB) | 4,194,304 (4 PB) | 1,048,576 (1 PB) | 16,384 (16 PB) | 32 PB |
* The total capacity values assume that all of the storage pools in the system use the same extent size.
** See the related IBM Flash alert.
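The "Total storage capacity manageable per system" column follows from the per-system maximum of 2^22 extents noted in the limits table: capacity scales linearly with extent size. A minimal Python sketch of that arithmetic (the table reports the results in TB/PB):

```python
# Total manageable capacity = (maximum extents per system) x (extent size).
# The limits table states a per-system maximum of 2^22 extents.

MAX_EXTENTS_PER_SYSTEM = 2 ** 22

def max_system_capacity_tib(extent_size_mib: int) -> float:
    return MAX_EXTENTS_PER_SYSTEM * extent_size_mib / (1024 * 1024)

for extent in (16, 1024, 8192):
    print(extent, max_system_capacity_tib(extent))
    # 16 -> 64.0 (64 TB), 1024 -> 4096.0 (4 PB), 8192 -> 32768.0 (32 PB)
```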
Document Information
Modified date:
16 June 2023
UID
ibm16539880