Preventive Service Planning
Abstract
This document lists the configuration limits and restrictions specific to IBM Spectrum Virtualize for Public Cloud software version 8.5.2.x.
Content
Only the Azure release of IBM Spectrum Virtualize for Public Cloud is supported by v8.5.2.
Note: v8.5.2 is a Non-Long Term Support (Non-LTS) release. Non-LTS code levels are not intended to receive any PTFs. If issues are encountered, the only resolution is likely to be an upgrade to a later LTS or Non-LTS release.
Features not supported
The following features are not supported by the v8.5.2 release:
- Real-Time Compression
- Encryption
- Stretched Cluster
- HyperSwap
- DRAID
- NPIV
- Hot Spare Node
Supported Host Operating Systems
The following host operating systems are supported:
- Microsoft Windows 2019 / 2016
- RHEL 8.4 / 8.2
- SUSE 15.3
- Ubuntu 20.04
Azure VMs
| Model | vCPU | Memory (GiB) | Max number of data disks | Network Bandwidth (Gbps) |
|---|---|---|---|---|
| Standard_D16s_v3 | 16 | 64 | 31 | 8 |
| Standard_D32s_v3 | 32 | 128 | 31 | 16 |
| Standard_D64s_v3 | 64 | 256 | 31 | 30 |
Data Reduction Pools
The following restrictions apply for Data Reduction Pools (DRP):
- A volume in a DRP cannot be shrunk.
- A volume cannot be moved between I/O groups while it is in a DRP (use FlashCopy or Metro Mirror / Global Mirror instead).
- A volume mirror cannot be split to a copy in a different I/O group.
- Real/used/free/tier capacity is not reported per volume, only per pool.
IPv6
IPv6 addresses are not supported. Only IPv4 addresses can be used.
Quorum
Only IP Quorum devices are supported.
Policy-based replication
The following restrictions apply for policy-based replication:
- The name of a volume group cannot be changed while a replication policy is assigned.
- The name of a volume cannot be changed while the volume is in a volume group with a replication policy assigned.
- Ownership groups are not supported by policy-based replication.
- Policy-based replication is not supported on HyperSwap topology systems.
- Policy-based replication cannot be used with volumes that are:
  - Image mode
  - HyperSwap
  - Part of a remote-copy relationship
  - Configured to use Transparent Cloud Tiering (TCT)
  - VMware vSphere Virtual Volumes (vVols)
- The following actions cannot be performed on a volume while it is in a volume group with a replication policy assigned:
  - Resize (expand or shrink)
  - Migrate to image mode, or add an image mode copy
  - Move to a different I/O group
Maximum Configurations
Configuration limits for Spectrum Virtualize for Public Cloud:
| Property | Maximum Limit | Comments |
|---|---|---|
| **System (Cluster) Properties** | | |
| Nodes per system (cluster) | 2 | |
| I/O groups / Control Enclosures per system (cluster) | 1 | |
| Inter-cluster partnerships per system | 3 | A system can be partnered with up to three remote systems. No more than four systems can be in the same connected set. |
| IP Quorum devices per system | 5 | |
| Portset objects per system | 72 | FC + Ethernet |
| IP address objects per system | 176 | Includes duplicated IP addresses |
| IP addresses per port | 2 | Upon node failover, Ethernet ports that have the same ID are configured with all the IP addresses of the partner node. Hence there can be a maximum of 128 IP addresses configured per Ethernet port during failover. |
| IP address objects per node | 22 | |
| Unique IP addresses per port | 2 | |
| Routable IP addresses per port | 1 | |
| IP addresses per node per portset (Host) | 2 | |
| IP addresses per node per portset (Remote Copy) | 1 | |
| **Node Properties** | | |
| iSCSI sessions per node | 1,024 | 2,048 in IP failover mode (when the partner node is unavailable). This limit includes both iSCSI Host Attach and iSCSI Initiator sessions. |
| **Managed Disk Properties** | | |
| Managed disks (MDisks) per system | 31 | The maximum number of logical units that can be managed by a cluster. MDisks are provisioned by Azure Storage. |
| Managed disks per storage pool (managed disk group) | 31 | |
| Storage pools per system | 1,024 | |
| Parent pools per system | 128 | |
| Child pools per system | 1,023 | |
| Managed disk extent size | 8,192 MB | |
| Capacity for an individual internal managed disk (Azure Managed Disk) | 32 TB | Azure Managed Disk limit |
| Total storage capacity manageable per system | 992 TB | |
| Maximum provisioning policies | 32 | |
| **Volume (Virtual Disk) Properties** | | |
| Basic volumes (VDisks) per system | 15,864 | Each basic volume uses one VDisk, each with one copy. |
| Volumes per I/O group (volumes per caching I/O group) | 15,864 | |
| Volumes accessible per I/O group | 15,864 | |
| Thin-provisioned (space-efficient) per-volume capacity for volumes in regular and data reduction pools | 992 TB | No limit is imposed here beyond the volume copies per system limit. |
| Volumes per storage pool | - | No limit is imposed beyond the volumes per system limit. |
| Fully allocated volume capacity | 256 TB | Maximum size for an individual fully allocated volume. Maximum size depends on the extent size of the storage pool; see the extent size comparison table below. |
| Thin-provisioned (space-efficient) volume capacity | 256 TB | Maximum size for an individual thin-provisioned volume. Maximum size depends on the extent size of the storage pool; see the extent size comparison table below. |
| Host mappings per system | 20,000 | See also volume mappings per host object below. |
| **Mirrored Volume (Virtual Disk) Properties** | | |
| Copies per volume | 2 | |
| Volume copies per system | 15,864 | The maximum number of volumes cannot all have the maximum number of copies. |
| Total mirrored volume capacity per I/O group | 496 TB | |
| **Host Properties** | | |
| Host objects (IDs) per system | 256 | |
| Host objects (IDs) per I/O group | 256 | |
| Volume mappings per host object | 2,048 | Although SV allows the mapping of up to 2,048 volumes per host object, not all hosts are capable of accessing or managing this number of volumes; the practical mapping limit is restricted by the host OS, not SV. Note: this limit does not apply to hosts of type adminlun (used to support VMware vVols). |
| **Host Cluster Properties** | | |
| Host clusters per system | 512 | |
| Hosts in a host cluster | 128 | |
| **iSCSI Host Properties** | | |
| iSCSI hosts per system | 256 | |
| iSCSI hosts per I/O group | 256 | |
| iSCSI names per host object | 4 | |
| iSCSI names per I/O group | 512 | |
| **Data Reduction Pools** | | |
| Data Reduction Pools per system | 4 | |
| MDisks per Data Reduction Pool | 128 | |
| Volume copies per Data Reduction Pool | 15,864 | |
| Extents per I/O group per Data Reduction Pool | 524,288 (512K) | |
| **Copy Services Properties** | | |
| Total Metro Mirror, Global Mirror, and HyperSwap capacity per I/O group | 1,024 TB | This limit is the total capacity for all master and auxiliary volumes in the I/O group (due to the Azure Managed Disk limit). |
| Remote Copy (Metro Mirror and Global Mirror) relationships per system | 10,000 | This can be any mix of Metro Mirror and Global Mirror relationships. |
| Remote Copy migration relationships per system | 256 | Note: the I/O pause time at the start of each cycle increases in proportion to the number of relationships in the consistency group. Refer to the Changes to support for Global Mirror with Change Volumes page for information relating to GMCV performance considerations and best practice. |
| Remote Copy relationships per consistency group, for Global Mirror cycling mode relationships (also known as GMCV) | 256 | Note: the I/O pause time at the start of each cycle increases in proportion to the number of relationships in the consistency group. |
| Remote Copy relationships per consistency group | - | No limit is imposed beyond the Remote Copy relationships per system limit. Refer to the Changes to support for Global Mirror with Change Volumes page for information relating to GMCV performance considerations and best practice. |
| Global Mirror cycling mode relationships (also known as GMCV) per system, with cycle times less than 300 seconds | 256 | |
| Global Mirror cycling mode relationships (also known as GMCV) per system, with cycle times of 300 seconds or more | 2,500 | |
| Remote Copy consistency groups per system | 256 | |
| Maximum round-trip latency for Metro Mirror, HyperSwap, and Migration relationships | 3 ms | |
| 3-site Remote Copy relationships per consistency group | 256 | |
| 3-site Remote Copy consistency groups per system | 16 | |
| 3-site Metro Mirror Remote Copy relationships per system | 2,500 | |
| 3-site HyperSwap Remote Copy relationships per system | 2,000 | |
| Total number of Global Mirror with Change Volumes relationships per system | 256 | Change volumes used for active-active relationships do not count toward this limit. |
| FlashCopy mappings per system | 15,864 | |
| FlashCopy targets per source | 256 | |
| FlashCopy mappings per consistency group | 512 | |
| FlashCopy consistency groups per system | 500 | |
| Total FlashCopy volume capacity per I/O group | 4 PB | |
| Snapshots per system | 15,863 | |
| Snapshots per volume copy | 15,863 | |
| Thin-clone, clone volumes per system | 15,862 | |
| Thin-clone volumes per source volume | 15,862 | |
| Safeguarded policies per system | 32 | |
| Snapshot policies per system | 32 | |
| **Policy-based replication** | | |
| Policy-based replication capacity per I/O group | 1,024 TiB | |
| Policy-based replication replicated volumes per system | 7,932 | |
| Volume groups per system that use policy-based replication | 1,024 | No limit is imposed beyond the volume groups per system limit. |
| Volumes per volume group that uses policy-based replication | 512 | No limit is imposed beyond the volumes per volume group limit. |
| Maximum round-trip latency for asynchronous policy-based replication that uses IP partnerships | 80 ms | |
| Maximum replication policies per system | 32 | |
| Maximum I/O groups that use policy-based replication | 1 | Limited by platform maximum. |
| **IP Partnership Properties** | | |
| Inter-cluster IP partnerships per system | 3 | A system can be partnered with up to three remote systems. A maximum of one of these can be IP. |
| Inter-site links per IP partnership | 2 | A maximum of two inter-site links can be used between two IP partnership sites. |
| Ports per node | 1 | A maximum of one port per node can be used for IP partnership. |
| **External Storage System Properties** | | |
| LUNs (managed disks) per storage system | - | No limit is imposed beyond the available maximum size. |
| **System and User Management Properties** | | |
| User accounts per system | 400 | Includes the default user accounts. |
| User groups per system | 256 | Includes the default user groups. |
| Authentication servers per system | 1 | |
| DNS servers per system | 2 | |
| NTP servers per system | 1 | |
| iSNS servers per system | 1 | |
| Concurrent OpenSSH sessions per system | 32 | |
| **Event Notification Properties** | | |
| SNMP servers per system | 6 | |
| Syslog servers per system | 6 | |
| Email (SMTP) servers per system | 6 | Email servers are used in turn until the email is successfully sent. |
| Email users (recipients) per system | 12 | |
| LDAP servers per system | 6 | |
| **REST API Properties** | | |
| Threads per session | 64 | |
| HTTP header size | 16 KB | |
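As a sanity check on how the per-system capacity limit relates to the MDisk limits, note that 31 MDisks of at most 32 TB each account exactly for the 992 TB system maximum. A minimal sketch of this arithmetic (the two input limits are taken from the table; the multiplication is the only step added here):

```python
# Limits taken from the configuration table above.
MAX_MDISKS_PER_SYSTEM = 31   # Managed disks (MDisks) per system
MAX_MDISK_CAPACITY_TB = 32   # Azure Managed Disk limit, in TB

# The system-wide capacity maximum is the product of the two.
max_system_capacity_tb = MAX_MDISKS_PER_SYSTEM * MAX_MDISK_CAPACITY_TB
print(max_system_capacity_tb)  # 992, matching "Total storage capacity manageable per system"
```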
Extents
The following table compares the maximum volume, MDisk, and system capacity for each extent size.
| Extent size (MB) | Maximum non thin-provisioned volume capacity in GB | Maximum thin-provisioned volume capacity in GB | Maximum MDisk capacity in GB | Maximum DRAID MDisk capacity in TB | Total storage capacity manageable per system* |
|---|---|---|---|---|---|
| 16 | 2,048 (2 TB) | 2,000 | 2,048 (2 TB) | 32 | 64 TB |
| 32 | 4,096 (4 TB) | 4,000 | 4,096 (4 TB) | 64 | 128 TB |
| 64 | 8,192 (8 TB) | 8,000 | 8,192 (8 TB) | 128 | 256 TB |
| 128 | 16,384 (16 TB) | 16,000 | 16,384 (16 TB) | 256 | 512 TB |
| 256 | 32,768 (32 TB) | 32,000 | 32,768 (32 TB) | 512 | 1 PB |
| 512 | 65,536 (64 TB) | 65,000 | 65,536 (64 TB) | 1,024 (1 PB) | 2 PB |
| 1,024 | 131,072 (128 TB) | 130,000 | 131,072 (128 TB) | 2,048 (2 PB) | 4 PB |
| 2,048 | 262,144 (256 TB) | 260,000 | 262,144 (256 TB) | 4,096 (4 PB) | 8 PB |
| 4,096 | 262,144 (256 TB) | 262,144 | 524,288 (512 TB) | 8,192 (8 PB) | 16 PB |
| 8,192 | 262,144 (256 TB) | 262,144 | 1,048,576 (1,024 TB) | 16,384 (16 PB) | 32 PB |
* The total capacity values assume that all of the storage pools in the system use the same extent size.
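The table values follow a simple pattern: volume and MDisk maximums scale linearly with extent size, and the fully allocated volume size is additionally capped at 256 TB. A minimal sketch of that arithmetic, assuming (inferred from the table values, not stated in this document) a maximum of 131,072 extents per volume or MDisk and 4,194,304 extents manageable per system:

```python
# Assumed constants, inferred by dividing the table values by the extent size.
EXTENTS_PER_VOLUME = 131_072     # 2**17 extents per volume or MDisk
EXTENTS_PER_SYSTEM = 4_194_304   # 2**22 extents per system
VOLUME_CAP_GB = 262_144          # individual fully allocated volumes cap at 256 TB

def max_volume_gb(extent_mb: int) -> int:
    """Maximum fully allocated volume capacity (GB) for a given extent size (MB)."""
    return min(extent_mb * EXTENTS_PER_VOLUME // 1024, VOLUME_CAP_GB)

def max_mdisk_gb(extent_mb: int) -> int:
    """Maximum MDisk capacity (GB); not subject to the 256 TB volume cap."""
    return extent_mb * EXTENTS_PER_VOLUME // 1024

def max_system_tb(extent_mb: int) -> int:
    """Total storage capacity manageable per system (TB)."""
    return extent_mb * EXTENTS_PER_SYSTEM // (1024 * 1024)

# Spot-check against the table rows:
print(max_volume_gb(16))     # 2,048 GB (2 TB)
print(max_volume_gb(8192))   # 262,144 GB: hits the 256 TB cap
print(max_mdisk_gb(8192))    # 1,048,576 GB (1,024 TB)
print(max_system_tb(16))     # 64 TB
print(max_system_tb(8192))   # 32,768 TB (32 PB)
```

This is only a model of the published numbers; the actual limits are those stated in the table.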
Document Information
Modified date:
11 April 2023
UID
ibm16620937