Preventive Service Planning
Abstract
This document lists the configuration limits and restrictions specific to IBM Storwize V7000 Unified software version 1.4.
Content
- Maximum Configurations
- Restrictions
  - Configuration Restrictions
  - Function Restrictions
    - IBM Real-time Compression for file workloads
    - Hierarchical Storage Management (HSM)
    - Asynchronous Replication Behavior with Filesets
    - Asynchronous Replication Across Code Levels
    - Home Cache Setup with HSM
    - Storwize V7000 Unified Software Upgrade
    - Storwize V7000 Block Function Restrictions for File System Volumes
    - DS4000 Maintenance
    - Host Limitations for File I/O Access
    - Host Limitations for Block I/O Access
Maximum Configurations
Configuration limits for Storwize V7000 Unified software version 1.4:
Property | Limit | Comments |
--- | --- | --- |
File Module Properties | ||
Maximum number of file modules | 2 | Must be attached to V7000 I/O group 0 |
Maximum size of a single shared file system | 8 PB | |
Maximum number of file systems within one system | 64 | |
Maximum size of a single file | 8 PB | |
Maximum number of files per file system | 4 x 10^9 |
Maximum number of volumes per file system | 511 | |
Maximum file system volumes per system | 511 | |
Maximum number of snapshots per file system | 256 | |
Maximum number of snapshots per file set | 256 | |
Maximum number of exports that can be created for CIFS service | 10000 | Supported with Storwize V7000 Unified version 1.4.2.0 and later. For earlier versions, the limit is 1000. |
Maximum number of exports that can be created per service (NFS, FTP, SCP, and HTTPS) | 1000 | |
Maximum number of administrative user groups | 128 | |
Maximum number of administrative users per administrative user group | 30 | |
Maximum number of administrative user groups per administrative user | 30 | |
Maximum number of different authentication server integrations | 1 | Supported authentication services are AD, LDAP, Samba PDC, and Local Authentication |
Maximum number of Local Authentication data access users | 1000 | |
Maximum number of Local Authentication data access user groups | 100 | |
Maximum number of Local Authentication data access group memberships per data access user | 16 | |
Maximum number of CIFS connections supporting Microsoft FSCT workloads | 2000 | Supported with Storwize V7000 Unified version 1.4.2.0 and later |
Maximum number of active but idle CIFS connections | 10000 | Supported with Storwize V7000 Unified version 1.4.2.0 and later |
Configuration limits for Storwize V7000 software version 7.2.0:
Property | Maximum Number | Comments |
--- | --- | --- |
System (Cluster) Properties | ||
Control enclosures per system (cluster) | 4 | Each control enclosure contains two node canisters |
Nodes per system | 8 | Arranged as four I/O groups |
Nodes per fabric | 64 | Maximum number of SVC and V7000 nodes that can be present on the same Fibre Channel fabric, with visibility of each other |
Fabrics per system | 6 | The number of counterpart Fibre Channel SANs supported: up to 4 fabrics using native Fibre Channel ports and up to 2 fabrics using FCoE ports |
Inter-cluster partnerships per system | 3 | A system may be partnered with up to three remote systems. No more than four systems may be in the same connected set |
Node Properties | ||
Logins per node Fibre Channel port | 512 | Includes logins from server HBAs, disk controller ports, node ports within the same system and node ports from remote systems |
iSCSI sessions per node | 256 | 512 in IP failover mode (when partner node is unavailable) |
Managed Disk Properties | ||
Managed disks (MDisks) per system | 4096 | The maximum number of logical units which can be managed by a system, including internal arrays. This number also includes external MDisks which have not been configured into storage pools (managed disk groups) |
Managed disks per storage pool (managed disk group) | 128 | |
Storage pools per system | 128 | |
Managed disk extent size | 8192 MB | |
Capacity for an individual internal managed disk (array) | 1 PB | No limit is imposed beyond the maximum drives-per-array limits. Maximum size depends on the extent size of the storage pool; see the Extents comparison table below. |
Capacity for an individual external managed disk | 1 PB | Note: External managed disks larger than 2 TB are only supported for certain types of storage systems. Refer to the supported hardware matrix for further details. Maximum size depends on the extent size of the storage pool; see the Extents comparison table below. |
Total storage capacity manageable per system | 32 PB | Maximum requires an extent size of 8192 MB. This limit represents the per-system maximum of 2^22 extents; see the Extents comparison table below. |
Volume (Virtual Disk) Properties | ||
Volumes (VDisks) per system | 8192 | Maximum requires a system containing four control enclosures; refer to the volumes per I/O group limit below |
Volumes per I/O group (volumes per caching I/O group) | 2048 | |
Volumes accessible per I/O group | 8192 | |
Thin-provisioned (space-efficient) volume copies per system | 8192 | No limit is imposed here beyond the volume copies per system limit. |
Compressed volume copies per system | 800 | Maximum requires a system containing four control enclosures; refer to the compressed volume copies per I/O group limit below |
Compressed volume copies per I/O group | 200 | |
Volumes per storage pool | - | No limit is imposed beyond the volumes per system limit |
Fully-allocated volume capacity | 256 TB | Maximum size for an individual fully-allocated volume. Maximum size depends on the extent size of the storage pool; see the Extents comparison table below. |
Thin-provisioned (space-efficient) volume capacity | 256 TB | Maximum size for an individual thin-provisioned volume. Maximum size depends on the extent size of the storage pool; see the Extents comparison table below. |
Host mappings per system | 20,000 | See also - volume mappings per host object below |
Mirrored Volume (Virtual Disk) Properties | ||
Copies per volume | 2 | |
Volume copies per system | 8192 | |
Total mirrored volume capacity per I/O group | 1024 TB | |
Generic Host Properties | ||
Host objects (IDs) per system | 2048* | A host object may contain both Fibre Channel ports and iSCSI names |
Host objects (IDs) per I/O group | 512 | Refer to the additional Fibre Channel and iSCSI host limits below |
Volume mappings per host object | 2048 | |
Total Fibre Channel ports and iSCSI names per system | 8192 | |
Total Fibre Channel ports and iSCSI names per I/O group | 2048 | |
Total Fibre Channel ports and iSCSI names per host object | 512 | |
Fibre Channel Host Properties (including hosts attached using FCoE) | ||
Fibre Channel hosts per system | 2048 | |
Fibre Channel host ports per system | 4096 | |
Fibre Channel hosts per I/O group | 512 | |
Fibre Channel host ports per I/O group | 1024 | |
Fibre Channel host ports per host object (ID) | 512 |
iSCSI Host Properties | ||
iSCSI hosts per system | 1024 | |
iSCSI hosts per I/O group | 256 | |
iSCSI names per host object (ID) | 256 | |
iSCSI names per I/O group | 256 | |
Copy Services Properties | ||
Remote Copy (Metro Mirror and Global Mirror) relationships per system | 4096 | This can be any mix of Metro Mirror and Global Mirror relationships. |
Remote Copy relationships per consistency group | - | No limit is imposed beyond the Remote Copy relationships per system limit |
Remote Copy consistency groups per system | 256 | |
Total Metro Mirror and Global Mirror volume capacity per I/O group | 1024 TB | This limit is the total capacity for all master and auxiliary volumes in the I/O group. |
Total number of Global Mirror with Change Volumes relationships per system | 256 | |
FlashCopy mappings per system | 4096 | |
FlashCopy targets per source | 256 | |
FlashCopy mappings per consistency group | 512 | |
FlashCopy consistency groups per system | 127 | |
Total FlashCopy volume capacity per I/O group | 1024 TB | |
IP Partnership Properties | ||
Inter-cluster IP partnerships per system | 1 | A system may be partnered with up to three remote systems; at most one of those partnerships can use IP, and the other two must use Fibre Channel. |
I/O groups per system | 2 | The nodes from a maximum of two I/O groups per system can be used for IP partnership. |
Inter-site links per IP partnership | 2 | A maximum of two inter-site links can be used between two IP partnership sites. |
Ports per node | 1 | A maximum of one port per node can be used for IP partnership. |
Internal Storage Properties | ||
SAS chains per control enclosure | 2 | |
Enclosures per SAS chain | 5 (see comments) | Up to 5 expansion enclosures on SAS port 1 and up to 4 expansion enclosures on SAS port 2 |
Expansion enclosures per control enclosure | 9 | |
Drives per I/O group | 240 | |
Drives per system | 960 | Maximum requires a system containing four control enclosures, each with the maximum number of expansion enclosures |
Min-Max drives per enclosure | 0-12 or 0-24 | Limit depends on the enclosure model |
RAID arrays per system | 128 | |
Min-Max member drives per RAID-0 array | 1-8 | |
Min-Max member drives per RAID-1 array | 2-2 | |
Min-Max member drives per RAID-5 array | 3-16 | |
Min-Max member drives per RAID-6 array | 5-16 | |
Min-Max member drives per RAID-10 array | 2-16 | |
Hot spare drives | - | No limit is imposed |
External Storage System Properties | ||
Storage system WWNNs per system (cluster) | 1024 | |
Storage system WWPNs per system (cluster) | 1024 | |
WWNNs per storage system | 16 | |
WWPNs per WWNN | 16 | |
LUNs (managed disks) per storage system | - | No limit is imposed beyond the managed disks per system limit |
System and User Management Properties | ||
User accounts per system | 400 | Includes the default user accounts |
User groups per system | 256 | Includes the default user groups |
Authentication servers per system | 1 | |
NTP servers per system | 1 | |
iSNS servers per system | 1 | |
Concurrent open SSH sessions per system | 10 | |
Event Notification Properties | ||
SNMP servers per system | 6 | |
Syslog servers per system | 6 | |
Email (SMTP) servers per system | 6 | Email servers are used in turn until the email is successfully sent |
Email users (recipients) per system | 12 | |
LDAP servers per system | 6 |
Extents
The following table compares the maximum volume, MDisk, and system capacity for each extent size.
Extent size (MB) | Maximum non thin-provisioned volume capacity in GB | Maximum thin-provisioned volume capacity in GB | Maximum MDisk capacity in GB | Total storage capacity manageable per system* |
--- | --- | --- | --- | --- |
16 | 2048 (2 TB) | 2000 | 2048 (2 TB) | 64 TB |
32 | 4096 (4 TB) | 4000 | 4096 (4 TB) | 128 TB |
64 | 8192 (8 TB) | 8000 | 8192 (8 TB) | 256 TB |
128 | 16,384 (16 TB) | 16,000 | 16,384 (16 TB) | 512 TB |
256 | 32,768 (32 TB) | 32,000 | 32,768 (32 TB) | 1 PB |
512 | 65,536 (64 TB) | 65,000 | 65,536 (64 TB) | 2 PB |
1024 | 131,072 (128 TB) | 130,000 | 131,072 (128 TB) | 4 PB |
2048 | 262,144 (256 TB) | 260,000 | 262,144 (256 TB) | 8 PB |
4096 | 262,144 (256 TB) | 262,144 | 524,288 (512 TB) | 16 PB |
8192 | 262,144 (256 TB) | 262,144 | 1,048,576 (1024 TB) | 32 PB |
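The capacities in this table scale linearly with extent size and are consistent with three constants: an MDisk holds at most 2^17 (131,072) extents, a system manages at most 2^22 (4,194,304) extents (per the note in the limits table above), and an individual volume is additionally capped at 256 TB. A worked example of the arithmetic for the 256 MB row, for illustration only:

```
# Illustrative arithmetic only: limits scale linearly with extent size.
extent_mb=256
echo "max MDisk capacity (GB):  $(( extent_mb * 131072 / 1024 ))"     # 32768 GB = 32 TB
echo "max system capacity (TB): $(( extent_mb * 4194304 / 1048576 ))" # 1024 TB = 1 PB
```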
Configuration limits for earlier code levels are documented separately:
- Configuration limits for Storwize V7000 software version 7.1
- Configuration limits for Storwize V7000 software version 6.4
Restrictions
Configuration Restrictions
IBM Storwize V7000 Unified does not support using Windows NTP servers to provide network time synchronization. For more details, refer to the Microsoft article: http://technet.microsoft.com/en-us/library/cc773013%28v=ws.10%29.aspx
Function Restrictions
IBM Real-time Compression for file workloads
IBM Real-time Compression is supported on Storwize V7000 Unified R1.4 systems for block workloads. If you are interested in Real-time Compression for file workloads, please send an email to newdisk@us.ibm.com; an IBM representative will contact you.
Storwize V7000 Unified systems running V1.4.0.X and utilizing compressed volumes cannot be upgraded to Storwize V7000 Unified V1.4.1.X.
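When planning an upgrade path, it can be useful to check whether any volume copies are compressed. A hedged sketch from the block CLI, assuming the compressed_copy_count field is reported by lsvdisk on this code level:

```
# Flag volumes that have at least one compressed copy (field name assumed).
svcinfo lsvdisk -delim : | awk -F: '
  NR == 1 { for (i = 1; i <= NF; i++) if ($i == "compressed_copy_count") c = i; next }
  c && $c + 0 > 0 { print $2 ": " $c " compressed copy/copies" }'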
Hierarchical Storage Management (HSM)
Once a file system on the Storwize V7000 Unified system is HSM-enabled, replacing or exchanging the TSM server must be done carefully to avoid any impact to the Storwize V7000 Unified file system(s). Additional information is available in the following technote: Guidance when Changing TSM Server for HSM Enabled File Systems
Asynchronous Replication Behavior with Filesets
File set definitions and associated information, such as quotas, are not replicated to the target system. The directory structure of the source file set, together with all files and extended attribute information it contains, is replicated to the target, but it is handled as a normal directory on the target.
When a file set within the source file system is unlinked, it is still held by the source file system but disappears from the source directory tree. If a replication process runs while a source file set is unlinked, the file tree appears as if those files have been removed, so they are removed from the target system to match the current state of the source. This may remove a significant amount of data from the target system and would leave that data unavailable in the event of a disaster at the source location.
Upon re-linking the file set on the source, the file tree reappears and the next replication cycle behaves as though the entire file tree was just created, resulting in those files being replicated to the target. This may cause a significant amount of data to be resent to the target to bring it back into synchronization with the source system. A quick pre-replication check is sketched below.
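Because replication mirrors the visible directory tree, it is worth verifying fileset link status on the source before each cycle. A minimal sketch using standard GPFS commands on the file modules; the file system name 'gpfs0', fileset name 'fset1', and junction path are placeholders:

```
# Confirm no source filesets are unlinked before starting a replication cycle.
mmlsfileset gpfs0                               # Status column shows Linked or Unlinked
mmlinkfileset gpfs0 fset1 -J /ibm/gpfs0/fset1   # relink an unlinked fileset first
```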
Refer to the Storwize V7000 Unified Information Center for additional information on managing asynchronous replication, including authentication and networking requirements.
Asynchronous Replication across Code Levels
Asynchronous replication is not supported across different code levels. The source machine and the target machine must run the same code level.
Home Cache Setup with HSM
Due to a known issue of potential data read corruption observed at the remote cache, it is recommended not to configure Tivoli Hierarchical Storage Management for home cache files.
Storwize V7000 Unified Software Upgrade
Storwize V7000 Unified software upgrades can be performed concurrently with the following methods of host access:
- Block I/O access:
- Fibre Channel
- iSCSI
- File I/O access:
- NFS (using hard mounts; see the example below)
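A hard mount causes NFS operations to retry until the server responds again, so client I/O rides through the brief failover that occurs while each file module is upgraded. A minimal Linux sketch; the server name, export path, and mount point are placeholders:

```
# Hard NFS mount: I/O blocks and retries during failover instead of failing.
mount -t nfs -o hard,intr storwize-unified.example:/ibm/gpfs0/shared /mnt/shared
```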
Host systems using CIFS, HTTPS, SCP, or FTP for file I/O access may experience brief interruptions during the upgrade process and subsequently need to reconnect after the upgrade has completed. It is recommended that any applications accessing file shares using these protocols be quiesced before starting a software upgrade.
Storwize V7000 Block Function Restrictions for File System Volumes
The table below shows which V7000 block functions can be used for file system volumes:
V7000 block functions | Supported usage |
--- | --- |
Shared storage pools for block and file volumes | Yes |
Easy Tier, when using solid-state drives (SSDs) | Yes |
External storage virtualization | Yes. Refer to CIFS Planning Guidance for IBM Storwize V7000 Unified for additional restrictions. |
Remote Copy (Metro Mirror, Global Mirror) | No; use V7000 Unified asynchronous file replication for file workloads |
FlashCopy | No; use GPFS snapshots for file workloads |
Volume mirroring | No; use GPFS file system replication for file workloads |
Thin provisioning (space-efficient volumes) | No; use GPFS file placement and migration features for file workloads |
Volume expand/shrink | No; use GPFS to expand/shrink file system capacity |
Volume migration between storage pools | No; use GPFS functionality to manage data placement. Note: volume migration within a storage pool, for example as a result of removing an MDisk or array, is supported. |
Image-mode volumes | No |
DS4000 Maintenance
Storwize V7000 Unified supports concurrent ESM firmware upgrades for those DS4000 models listed as such on the Storwize V7000 Supported Hardware List when they are running controller firmware 06.23.05.00 or later. Controllers running firmware levels earlier than 06.23.05.00 are not supported for concurrent ESM upgrades. Customers in this situation who wish to gain support for concurrent ESM upgrades must first upgrade the DS4000 controller firmware to 06.23.05.00. This action is a controller firmware upgrade, not an ESM upgrade, and concurrent controller firmware upgrades are already supported in conjunction with Storwize V7000. Once the controller firmware is at 06.23.05.00 or later, the ESM firmware can be upgraded concurrently.
Note: The DS4000 ESM firmware upgrade must be done on one disk expansion enclosure at a time. A 10-minute delay is required between completing the upgrade of one enclosure and starting the upgrade of the next. Confirm via the Storage Manager application's Recovery Guru that the DS4000 status is optimal before upgrading the next enclosure. If it is not, do not continue ESM firmware upgrades until the problem is resolved.
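A hedged sketch of the per-enclosure check, assuming the DS Storage Manager CLI (SMcli) is available on a management host; the controller address is a placeholder:

```
# After each enclosure's ESM upgrade: wait 10 minutes, then verify
# the subsystem is optimal before starting the next enclosure.
sleep 600
SMcli 192.0.2.10 -c "show storageSubsystem healthStatus;"
```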
Host Limitations for File I/O Access
CIFS access to file systems is supported only for volumes that are placed on Storwize V7000 internal storage.
Host Limitations for Block I/O Access
Windows NTP server
The Linux NTP client used by Storwize V7000 may not always function correctly with the Windows W32Time NTP server.
Windows SAN Boot Clusters (MSCS):
It is possible to SAN boot a Microsoft cluster, subject to the following restrictions imposed by Microsoft:
- On Windows 2003, clustered disks and boot disks can be presented on the same storage bus, but only if the Storport driver is being used.
These restrictions and more are described in the Microsoft White Paper: "Microsoft Windows Clustering: Storage Area Networks".
We have not tested, and therefore do not support, modifying the registry key as suggested on page 28 (which would allow boot disks and clustered disks on the same storage bus on Windows 2003 without the Storport driver).
Oracle
Oracle Version and OS | Restrictions that apply |
--- | --- |
Oracle RAC 10g on Windows | 1 |
Oracle RAC 10g on AIX | 1, 2 |
Oracle RAC 11g on AIX | 2 |
Oracle RAC 10g on HP-UX 11.31 | 1, 2 |
Oracle RAC 11g on HP-UX 11.31 | 1, 2 |
Oracle RAC 10g on HP-UX 11.23 | 1, 2 |
Oracle RAC 11g on HP-UX 11.23 | 1, 2 |
Oracle RAC 10g on Linux Host | 1, 3 |
Restriction 1: ASM cannot recognize a change in disk size when a Storwize V7000 disk is resized, unless the disk is removed from ASM and included again.
Restriction 2: After an ASM disk group has successfully dropped a disk, the disk cannot be deleted from the OS. The workaround for this OS restriction is to bring down the ASM instance, delete the disk from the OS, and bring up the ASM instance again; a hedged command sketch follows restriction 3.
Restriction 3: For RHEL4, increase the Oracle Clusterware 'misscount' parameter so that SDD can complete path failover first. The default misscount setting of 60 seconds is too short for SDD; a value of 90 or 120 seconds is recommended. Command to use: crsctl set css misscount 90
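A minimal sketch of the restriction-2 workaround; the disk group name 'data' and disk name 'data_0003' are placeholders, and 10g environments would connect 'as sysdba' rather than 'as sysasm':

```
# Drop the disk from the ASM disk group.
sqlplus / as sysasm <<'EOF'
ALTER DISKGROUP data DROP DISK data_0003;
EOF
# Wait for the rebalance to complete, then stop the ASM instance.
sqlplus / as sysasm <<'EOF'
SHUTDOWN IMMEDIATE;
EOF
# Delete the dropped disk at the OS level (platform-specific), then restart ASM.
sqlplus / as sysasm <<'EOF'
STARTUP;
EOF
```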