Each FlashSystem 5000 system is
a virtualizing RAID storage
system. FlashSystem 5010/H and FlashSystem 5030/H system models are
available.
IBM Spectrum Virtualize software
IBM® FlashSystem 5000 systems are built with IBM Spectrum Virtualize software, which is part of the IBM Spectrum Storage™ family.
IBM Spectrum Virtualize is a key member of the IBM Spectrum Storage portfolio. It is a highly flexible storage solution that enables rapid deployment of block storage services for new and traditional workloads, on premises, off premises, or in a combination of both. Designed to help enable cloud environments, it is based on proven technology. For more information, see the IBM Spectrum Storage portfolio website.
The software provides the following functions for the host systems that attach to the system:
- Creates a single pool of storage
- Provides logical unit virtualization
- Manages logical volumes
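The functions above can be pictured as pooling backend capacity into fixed-size extents and carving logical volumes from that single pool. The following is an illustrative sketch only; the class, extent size, and method names are assumptions for explanation, not the product's implementation.

```python
# Illustrative model (not product code): backend disks are split into
# fixed-size extents, pooled, and logical volumes are carved from the pool.

EXTENT_MB = 1024  # hypothetical extent size


class StoragePool:
    def __init__(self):
        self.free_extents = 0
        self.volumes = {}

    def add_managed_disk(self, capacity_mb):
        # Backend capacity is split into extents and added to one pool.
        self.free_extents += capacity_mb // EXTENT_MB

    def create_volume(self, name, size_mb):
        # A logical volume is a set of extents drawn from the pool;
        # hosts see only the volume, never the individual backend disks.
        needed = -(-size_mb // EXTENT_MB)  # round up to whole extents
        if needed > self.free_extents:
            raise ValueError("pool exhausted")
        self.free_extents -= needed
        self.volumes[name] = needed


pool = StoragePool()
pool.add_managed_disk(10 * 1024)  # two backend disks...
pool.add_managed_disk(10 * 1024)  # ...become one pool of 20 extents
pool.create_volume("vol0", 5 * 1024)  # volume uses 5 extents, 15 remain free
```

The point of the model is that hosts are decoupled from physical disks: capacity management happens at the pool level.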
System hardware
Each FlashSystem 5000 storage system consists of a set of drive enclosures. The control enclosure contains disk drives and two node canisters. Expansion enclosures contain disk drives and two expansion canisters.
Figure 1 shows the front view of the large form factor (LFF) and small form factor (SFF) FlashSystem 5000 systems.
Table 1 summarizes the features and
machine type and model (MTM) of the FlashSystem 5000 control enclosures.
Table 1. Control enclosure models

IBM FlashSystem 5010/H
MTM: 2072-2H2 (up to 12 LFF 3.5-inch drives) or 2072-2H4 (up to 24 SFF 2.5-inch drives)
Features:
- Two node canisters, each with a 2-core, HyperThreaded processor and 8 GB memory per canister.
- Node canister memory can be expanded to 16 GB or 32 GB, for a maximum of 64 GB per I/O group.
- Two 1 Gbps Ethernet ports per canister. Ethernet port 2 is also the technician port, which is used for system setup.
- One 12 Gbps SAS port per node canister to attach expansion enclosures.

IBM FlashSystem 5030/H
MTM: 2072-3H2 (up to 12 LFF 3.5-inch drives) or 2072-3H4 (up to 24 SFF 2.5-inch drives)
Features:
- Two node canisters, each with a 6-core, 12-thread processor and 16 GB memory per canister.
- Node canister memory can be expanded to 32 GB, for a maximum of 64 GB per I/O group.
- One 1 Gbps Ethernet technician port per node canister, which is used for system setup and the node service interface.
- Two 10 Gbps Ethernet ports per canister, both of which are used for iSCSI and system management.
- Two 12 Gbps SAS ports per canister for expansion enclosure attachment.
The system also provides the following functions:
- Large scalable cache
- Copy services:
  - IBM FlashCopy® (point-in-time copy) function, including thin-provisioned FlashCopy to make multiple targets affordable
  - IBM HyperSwap® (active-active copy) function (for FlashSystem 5030/H systems only)
  - Metro Mirror (synchronous copy)
  - Global Mirror (asynchronous copy)
- Data migration
- Space management:
  - IBM Easy Tier® function to migrate the most frequently used data to higher-performance storage
  - Metering of service quality when combined with IBM Spectrum® Connect. For more information, refer to the IBM Spectrum Connect documentation.
  - Thin-provisioned logical volumes
  - Compressed volumes to consolidate storage by using data reduction pools
  - Data reduction pools with deduplication
In addition, FlashSystem 5010/H and FlashSystem 5030/H systems support the expansion
enclosures that are listed in Table 2.
Table 2. Supported expansion enclosures
- 2072-12G / 2072-F12: 12-slot expansion enclosure for 3.5-inch drives (2U)
- 2072-24G / 2072-F24: 24-slot expansion enclosure for 2.5-inch drives (2U)
- 2072-92G / 2072-F92: 92-slot expansion enclosure for 2.5-inch or 3.5-inch drives (5U)
Figure 2 shows an example of a FlashSystem 5000 system as a traditional RAID
storage
system. The internal drives are
configured into arrays. Volumes are created from those arrays.
The two node canisters are known as an I/O group. The node canisters are responsible
for serving I/O on the volumes. Because a volume is served by both node canisters, no availability
is lost if one node canister fails or is taken offline. The Asymmetric Logical Unit Access (ALUA)
features of SCSI are used to disable the I/O for a node before it is taken offline or when a volume
cannot be accessed through that node.
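The ALUA behavior described above can be sketched as a simple path-state model. This is an assumed, illustrative model of the mechanism, not product code; the state names follow SCSI ALUA conventions.

```python
# Minimal sketch (assumed behavior, not product code) of how ALUA path
# states let a host keep accessing a volume when one node canister is
# taken offline.

ACTIVE = "active/optimized"
UNAVAILABLE = "unavailable"


class IOGroup:
    def __init__(self):
        # Both node canisters in the I/O group serve the volume's paths.
        self.path_state = {"node1": ACTIVE, "node2": ACTIVE}

    def take_node_offline(self, node):
        # Before a node goes offline, its paths are reported as
        # unavailable so the host multipath driver stops using them.
        self.path_state[node] = UNAVAILABLE

    def usable_paths(self):
        return [n for n, s in self.path_state.items() if s == ACTIVE]


iogrp = IOGroup()
iogrp.take_node_offline("node1")
print(iogrp.usable_paths())  # → ['node2']: the volume stays available
```

Because the host learns the new path states through standard SCSI reporting, no host-side reconfiguration is needed when a node canister fails.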
System topology
The
system topology can be set up in several different ways.
Standard topology, where
all node canisters in the system are at the same site.
HyperSwap topology, where
the system consists of at least two I/O groups. Each I/O group is at a different site. Both nodes of an
I/O group are at the same site. A volume can be active on two I/O groups so that it can immediately be
accessed by the other site when a site is not available. The HyperSwap
topology is supported on FlashSystem 5030/H
systems.
Volume types
You can create the following types of volumes on the system.
Basic volumes, where a single copy of the volume is cached in one I/O group. Basic
volumes can be established in any system topology; however, Figure 5 shows a standard system topology.
Mirrored volumes, where copies of the volume can either be in the same storage pool
or in different storage pools. The volume is cached in a single I/O group, as Figure 6 shows. Typically, mirrored volumes are
established in a standard system topology.
HyperSwap volumes, where copies of a
single volume are in different storage pools that are on different sites. As Figure 7 shows, the volume is cached in two I/O
groups that are on different sites. These volumes can be created only on FlashSystem 5030/H systems when the system topology is
HyperSwap.
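The three volume types above differ only in where copies live and where they are cached. The following placement table is a hedged, illustrative summary in code form; the field names are assumptions made for this sketch.

```python
# Illustrative summary (not product code) of the three volume types:
# how many copies exist, how many I/O groups cache the volume, and how
# many sites hold a copy.

def volume(kind):
    if kind == "basic":
        # Single copy, cached in one I/O group, any topology.
        return {"copies": 1, "caching_io_groups": 1, "sites": 1}
    if kind == "mirrored":
        # Two copies (same or different storage pools), one caching I/O group.
        return {"copies": 2, "caching_io_groups": 1, "sites": 1}
    if kind == "hyperswap":
        # Two copies in pools at different sites, cached in two I/O groups,
        # so the surviving site can take over access immediately.
        return {"copies": 2, "caching_io_groups": 2, "sites": 2}
    raise ValueError(kind)
```

Only the HyperSwap type spans two sites, which is why it requires the HyperSwap topology and a FlashSystem 5030/H system.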
System management
Each control enclosure contains two node canisters. Together, the node canisters operate as a single system.
System management and error reporting are provided through an Ethernet interface to one of the nodes
in the system, which is called the configuration node. The configuration node runs a
web server and provides a command-line interface (CLI). The configuration node is a role that either
node can take. If the current configuration node fails, the other node becomes the configuration
node. Each node also provides a command-line interface and web interface for servicing hardware.
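The configuration-node role described above can be sketched as a simple failover model. This is an assumed illustration of the logic, not product code.

```python
# Sketch (assumed logic, not product code) of the configuration-node role:
# either node can hold it, and it moves if the current holder fails.

class System:
    def __init__(self, nodes):
        self.online = set(nodes)
        self.config_node = nodes[0]  # one node takes the role initially

    def fail_node(self, node):
        self.online.discard(node)
        # If the configuration node fails, a surviving node takes over
        # the role, and with it the web server and CLI endpoint.
        if node == self.config_node and self.online:
            self.config_node = next(iter(self.online))


system = System(["node1", "node2"])
system.fail_node("node1")
print(system.config_node)  # → node2 now serves the management interface
```

Because the role (rather than a fixed node) owns the management interface, administrators keep a single management address across node failures.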
Fabric types
I/O operations between hosts and system nodes and between the system nodes and RAID storage systems use the SCSI standard. The
system nodes communicate with each other by using private SCSI commands.
Table 3 shows the fabric types
that can be used for communicating between hosts, nodes, and RAID storage systems. All installed fabric types
can be used at the same time.
Table 3. System communications types

Fibre Channel SAN
- Host to system node: Yes, by using an optional 4-port 16 Gbps Fibre Channel host interface adapter.
- System node to storage system: Yes, by using an optional Fibre Channel host interface adapter.

iSCSI (10 Gbps Ethernet)
- Host to system node: Yes. FlashSystem 5010/H systems must have an optional 4-port 10 Gbps Ethernet host interface adapter installed. FlashSystem 5030/H systems have two onboard 10 Gbps Ethernet ports that can be used for host attachment; the optional 4-port 10 Gbps Ethernet host interface adapter can also be installed.
- System node to storage system: Not supported.

iSCSI (25 Gbps Ethernet)
- Host to system node: Yes, by using an optional 2-port 25 Gbps host interface adapter.
- System node to storage system: Not supported.

Serial-attached SCSI (SAS)
- Host to system node: Yes, by using an optional 4-port 12 Gbps SAS host interface adapter.
The
system supports a one-time migration of external storage data to the system. In the management GUI,
select Pools > Storage Migration > New Migration to start the storage migration wizard.