IBM®
FlashSystem 7300 systems use NVMe-attached
drives in the control enclosures to provide significant performance improvements as compared to
SAS-attached flash drives. The system also supports 2U and 5U SAS-attached expansion
enclosure options.
The FlashSystem 7300 control enclosure has
two models: 4657-924 and 4657-U7D. The FlashSystem 7300 is a 2U dual-controller enclosure that contains up to 24 NVMe-attached IBM FlashCore® Modules or other self-encrypting NVMe-attached SSDs or Storage Class Memory (SCM) drives. The drives are accessible from the front of the control enclosure,
as shown in Figure 1.
Figure 1. Front view of the control enclosure
Each control enclosure contains two identical node canisters. As Figure 2 shows, the top node canister is inverted above the
bottom one; each node canister is bounded on each side by a power supply unit.
Figure 2. Rear view of the control enclosure
Each FlashSystem 7300 control enclosure has
the following characteristics and features:
IBM Spectrum
Virtualize software with enclosure-based, all-inclusive software feature licensing
Three-year warranty. The system is customer-installed and maintained, with one FRU (system board) replacement
supported by IBM Service Support Representatives (SSRs).
Optional, priced service offerings are also available.
Each node canister has two processor sockets, each with a 10-core 2.4 GHz Intel Cascade Lake processor,
for a total of 20 cores per canister and 40 cores per enclosure.
Dual boot drive
Hardware compression assist of 40 Gbps per canister.
Six memory channels per CPU with 1 - 24 DIMMs, supporting 128 GB - 768 GB of cache per canister, which
is 256 GB - 1.5 TB per control enclosure.
NVMe transport protocol for high-performance 2.5-inch small form factor (SFF) NVMe-attached flash drives,
with support for the following:
Support for self-compressing, self-encrypting 2.5-inch PCIe Gen 4 NVMe IBM FlashCore Modules (FCM3) with the following storage
capacities: 4.8 TB, 9.6 TB, 19.2 TB, and 38.4 TB.
Support for industry-standard 2.5-inch NVMe-attached SSD drive options with 4.8 TB, 9.6 TB, 19.2 TB, and 38.4 TB IBM flash drive storage capacities.
Support for 2.5-inch NVMe-attached Storage Class Memory (SCM) drives with 1.6 TB of storage capacity.
Onboard ports per node canister:
Four 10 Gb Ethernet ports
Two USB ports
One 1 Gb Ethernet technician port
Using an optional Fibre Channel (FC) adapter, you can cluster up to four FlashSystem 7300 control enclosures in a scale-out clustered
system.
Three PCIe slots per node canister, each of which supports a four-port Fibre Channel adapter or a two-port Ethernet
adapter. The 12 Gbps SAS adapter is used for SAS expansion and is supported only in slot 3. With
this adapter, the control enclosure can have two SAS chains that attach to the following expansion
enclosures:
Support for 2.5-inch 12 Gbps SAS industry-standard flash drives in SAS expansion enclosures,
with the following capacities: 1.92 TB, 3.84 TB, 7.68 TB, and 15.36 TB.
Supports an intermix of FlashSystem 7300 2U and 5U expansion enclosures. Each SAS chain can have a maximum
chain weight of five, which is five 2U or two 5U expansion enclosures (a chain-weight sketch follows the adapter table below).
Note: For systems running version 8.5.0.5
or later, each SAS chain can support a chain weight of up to six.
The system supports a maximum of 1568 drives, including both SAS and NVMe drives.
Note: For systems running version 8.5.0.5 or later, the system supports a maximum
of 1760 drives, including both SAS and NVMe drives.
Three PCIe slots that optionally support any combination of the following network
adapters:
Table 2. Supported network adapters

Adapter                            | Notes
4-port 32 Gbps Fibre Channel       | Supports NVMe over FC. Note: FC adapters are required for adding control enclosures, up to a maximum of four per system.
2-port 25 Gbps Ethernet (iWARP)    | Supports iSCSI host attachment.
2-port 25 Gbps Ethernet (RoCE)     | Supports iSCSI host attachment. Supports NVMe over RDMA.
2-port 100 Gbps Ethernet (RoCEv2)  | Supports iSCSI host attachment. Supports NVMe over RDMA. See the note that follows.

Note:
iSCSI performance on 100 Gbps Ethernet ports is equivalent to 25 Gbps iSCSI host attachment.
The 100 Gbps Ethernet adapter is limited to PCIe Gen3 x16 bandwidth (128 Gbps) on this hardware.
Calculate performance for a primary or failover model to avoid port oversubscription, even when using NVMe over RDMA.
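The SAS chain-weight rule mentioned earlier can be illustrated with a short sketch. It assumes weights of 1.0 per 2U expansion enclosure and 2.5 per 5U expansion enclosure, which is consistent with the stated maxima of five 2U or two 5U enclosures per chain; the constants and function names are illustrative only, not part of the product.

```
# Sketch of the SAS chain-weight limit (assumed weights: 1.0 per 2U
# enclosure, 2.5 per 5U enclosure; names are illustrative only).
MAX_CHAIN_WEIGHT = 5.0          # 6.0 on systems running 8.5.0.5 or later
WEIGHT_2U = 1.0
WEIGHT_5U = 2.5

def chain_weight(num_2u: int, num_5u: int) -> float:
    """Total chain weight for a mix of 2U and 5U expansion enclosures."""
    return num_2u * WEIGHT_2U + num_5u * WEIGHT_5U

def chain_is_valid(num_2u: int, num_5u: int,
                   max_weight: float = MAX_CHAIN_WEIGHT) -> bool:
    """True if the proposed chain stays within the maximum chain weight."""
    return chain_weight(num_2u, num_5u) <= max_weight

# Five 2U or two 5U enclosures both reach, but do not exceed, the limit.
assert chain_is_valid(5, 0)      # weight 5.0
assert chain_is_valid(0, 2)      # weight 5.0
assert not chain_is_valid(2, 2)  # weight 7.0 exceeds the limit
```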
NVMe transport protocol
FlashSystem 7300 systems use the Non-Volatile
Memory express (NVMe) drive transport protocol.
FlashSystem 7300 supports the following transport protocols for host
attachments: NVMe over Fibre Channel and NVMe over RDMA.
NVMe is designed specifically for flash technologies. It is a faster, less complicated storage
drive transport protocol than SAS.
NVMe-attached drives support multiple queues so that each CPU core can communicate directly with
the drive. This support avoids extra latency and reduces core-to-core communication, which gives the best
performance.
NVMe offers better performance and lower latencies exclusively for solid-state drives through
multiple I/O queues and other enhancements.
In addition to supporting self-compressing, self-encrypting IBM FlashCore Modules, the NVMe transport protocol supports other
industry-standard NVMe flash drives.
Supported distributed RAID levels
FlashSystem 7300 uses one of the
following distributed RAID levels for best resiliency, depending on how many member drives
are in the storage system:
Distributed RAID 6, for systems with a minimum of six and a maximum of 128 SAS drives (including expansion
enclosures). If an array has more than six member drives, DRAID 6 is recommended by default.
Distributed RAID 6 supports up to 24 NVMe drives.
Distributed RAID 1 is supported in the following configurations:
Distributed RAID 1, for arrays with six or fewer member drives: three HDDs with
one rebuild area.
Distributed RAID 1, for arrays with six or fewer member drives: three to six HDDs with
one rebuild area and less than 8 TiB of physical capacity.
Distributed RAID 1, for arrays with six or fewer member drives: two SSDs with no
rebuild area and less than 20 TiB of physical capacity.
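The selection rules above can be summarized in a short sketch. The drive counts and capacity thresholds come from the preceding list; the function name, media labels, and return strings are illustrative only and are not part of the product CLI.

```
# Sketch of the distributed RAID selection rules listed above
# (illustrative helper; thresholds are taken from the preceding list).
def recommended_draid_level(member_drives: int, media: str,
                            physical_capacity_tib: float) -> str:
    """Suggest a distributed RAID level from drive count, media type, and capacity."""
    if member_drives > 6:
        return "DRAID 6"                       # default above six member drives
    if media == "ssd" and member_drives == 2 and physical_capacity_tib < 20:
        return "DRAID 1 (no rebuild area)"
    if media == "hdd" and 3 <= member_drives <= 6 and physical_capacity_tib < 8:
        return "DRAID 1 (one rebuild area)"
    if member_drives == 6:
        return "DRAID 6"                       # DRAID 6 needs at least six members
    return "no distributed RAID recommendation for this configuration"

print(recommended_draid_level(8, "ssd", 50.0))   # DRAID 6
print(recommended_draid_level(2, "ssd", 15.0))   # DRAID 1 (no rebuild area)
```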
IBM FlashCore Modules are NVMe-attached
drives
IBM FlashCore Modules have built-in performance
neutral hardware compression and encryption.
Up to 24 IBM FlashCore Modules per FlashSystem 7300 control enclosure are available as
4.8 TB, 9.6 TB, 19.2 TB, and 38.4 TB NVMe-attached flash drives with IBM FlashCore Technology, which offer up to 3:1 self-compression and
self-encryption.
With 24 of the 38.4 TB NVMe-attached FlashCore Modules, a control
enclosure provides a maximum of 921 TB of usable capacity and 2764 TB of effective capacity.
An intermix of IBM FlashCore Module
NVMe-attached flash drives of different sizes can be used in a control enclosure.
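A quick calculation shows how the per-enclosure maximum above follows from the largest drive capacity and the 3:1 compression ratio; the figures are simply the arithmetic behind the quoted values.

```
# Worked example of the maximum per-enclosure capacity quoted above.
drives = 24                     # FlashCore Modules per control enclosure
drive_capacity_tb = 38.4        # largest FCM capacity point
compression_ratio = 3           # up to 3:1 self-compression

usable_tb = drives * drive_capacity_tb        # 921.6 TB usable
effective_tb = usable_tb * compression_ratio  # 2764.8 TB effective

print(f"Usable: {usable_tb:.1f} TB, effective: {effective_tb:.1f} TB")
```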
IBM
Spectrum Virtualize
software
A FlashSystem 7300 control enclosure consists
of two node canisters that each run IBM Spectrum Virtualize software, which is part of the IBM
Spectrum Storage family. IBM Spectrum Virtualize
software provides the following functions for the host systems that attach to the system:
A single pool of storage
Logical unit virtualization
Management of logical volumes
Mirroring of logical volumes
The system also provides the following functions:
Large scalable cache
Copy Services:
IBM
FlashCopy® (point-in-time copy) function,
including thin-provisioned FlashCopy to make
multiple targets affordable
IBM
HyperSwap® (active-active copy) function
Metro Mirror
(synchronous copy)
Global Mirror
(asynchronous copy)
Data migration
Space management:
IBM Easy Tier® function to
migrate the most frequently used data to higher-performance storage
Metering of service quality when combined with IBM Spectrum® Connect. For information,
refer to the IBM Spectrum Connect
documentation.
Thin-provisioned logical volumes
Compressed volumes to consolidate storage using data reduction
pools
Data Reduction pools with deduplication
System hardware
The storage system consists of a set
of drive enclosures. Control enclosures contain NVMe flash drives and a pair of
node canisters. A collection of control enclosures that are managed as a single system
is called a clustered system or a
system. Expansion enclosures contain SAS drives and are attached to
control enclosures. Expansion canisters include the serial-attached SCSI (SAS)
interface hardware that enables the node canisters to use the SAS flash drives of the expansion
enclosures.
Figure 3 shows the system as a storage system. The internal drives are
configured into arrays and volumes are created from those arrays.
Figure 3. System as a storage system
The system can also be used to virtualize other storage systems, as shown in Figure 4.
Figure 4. System shown virtualizing other storage system
The two node canisters in each control enclosure are arranged into pairs that are known as
I/O groups. A single pair is responsible for serving I/O on a specific volume. Because
a volume is served by two node canisters, the volume continues to be available if one node canister
fails or is taken offline. The Asymmetric Logical Unit Access (ALUA) features of SCSI disable the
I/O for a node before it is taken offline or when a volume cannot be accessed through that node.
A system that does not contain any internal drives can be used as a storage virtualization
solution.
System topology
The system topology can be set up in the following way:
Standard topology, where all node canisters in the system are at the same site.
Figure 5. Example of a standard system topology
System management
The nodes in a system operate as a single system and present a single point of control for system
management and service. System management and error reporting are provided through an Ethernet
interface to one of the nodes in the system, which is called the configuration node.
The configuration node runs a web server and provides a command-line interface (CLI). The
configuration node is a role that any node can take. If the current configuration node fails, a new
configuration node is selected from the remaining nodes. Each node also provides a command-line
interface and web interface to enable some hardware service actions.
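As a minimal sketch of scripted access to the configuration node, the following assumes that SSH key authentication is already set up; the management IP address and user name are placeholders, and only read-only informational commands (lssystem, lsnodecanister) are shown.

```
# Minimal sketch: run read-only CLI commands on the configuration node over
# SSH (management IP address and user name below are placeholders).
import subprocess

MGMT_IP = "192.0.2.10"   # hypothetical cluster management address
USER = "superuser"       # administrative user; substitute your own account

def run_cli(command: str) -> str:
    """Run a CLI command on the configuration node and return its output."""
    result = subprocess.run(
        ["ssh", f"{USER}@{MGMT_IP}", command],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

if __name__ == "__main__":
    print(run_cli("lssystem"))         # overall system properties
    print(run_cli("lsnodecanister"))   # node canisters and their status
```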
Fabric types
I/O operations between hosts and nodes and between nodes and RAID storage systems use the SCSI
standard. The nodes communicate with each other by using private SCSI commands. Table 3 shows the fabric types that can be used
for communicating between hosts, nodes, and RAID storage systems. These fabric types can be used at
the same time.
Table 3. Communications types

Communications type                                                            | Host to node | Node to storage system | Node to node
Fibre Channel SAN                                                              | Yes          | Yes                    | Yes
iSCSI (10 Gbps Ethernet or 25 Gbps Ethernet)                                   | Yes          | Yes                    | No
iSCSI (100 Gbps Ethernet)                                                      | Yes          | No                     | No
iSER (iWARP), 25 Gbps Ethernet                                                 | No           | No                     | Yes
RDMA-capable Ethernet ports for node-to-node communication (25 Gbps Ethernet)  | No           | No                     | Yes
NVMe over Fibre Channel                                                        | Yes          | No                     | No
NVMe over RDMA (RoCE), 25 Gbps Ethernet or 100 Gbps Ethernet                   | Yes          | No                     | No
Note:
The 100 Gbps adapter supports iSCSI. However, the performance is limited to 25 Gbps per node.
The 25 Gbps Ethernet port iSER host attachment is only supported for node-to-node clustering
under RPQ.
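Table 3 can also be expressed as a small lookup structure, which may be convenient when checking a planned configuration; the dictionary below simply encodes the table rows and is not a product API.

```
# The fabric-type support matrix from Table 3, encoded as a lookup table.
FABRIC_SUPPORT = {
    # communications type: (host to node, node to storage, node to node)
    "Fibre Channel SAN":                    (True,  True,  True),
    "iSCSI (10/25 Gbps Ethernet)":          (True,  True,  False),
    "iSCSI (100 Gbps Ethernet)":            (True,  False, False),
    "iSER (iWARP, 25 Gbps Ethernet)":       (False, False, True),
    "RDMA node-to-node (25 Gbps Ethernet)": (False, False, True),
    "NVMe over Fibre Channel":              (True,  False, False),
    "NVMe over RDMA (RoCE, 25/100 Gbps)":   (True,  False, False),
}

def supports(fabric: str, path: str) -> bool:
    """Check whether a fabric type supports a given communication path."""
    host, storage, node = FABRIC_SUPPORT[fabric]
    return {"host-to-node": host,
            "node-to-storage": storage,
            "node-to-node": node}[path]

print(supports("NVMe over Fibre Channel", "host-to-node"))  # True
```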