Storwize V5000 Gen2 overview

Each IBM® Storwize® V5000 Gen2 system is a virtualizing RAID storage system. Storwize V5010, Storwize V5020, Storwize V5030, and Storwize V5030F systems are Storwize V5000 Gen2 systems.

Storwize V5000 Gen2 software

IBM Storwize V5000 Gen2 systems are built with IBM Spectrum Virtualize software, which is part of the IBM Spectrum Storage™ family.

The system also provides the following functions:
  • Large scalable cache
  • Copy Services:
    • IBM FlashCopy® (point-in-time copy) function, including thin-provisioned FlashCopy to make multiple targets affordable (a CLI sketch follows this list)
    • IBM HyperSwap® (active-active copy) function (for Storwize V5030 and Storwize V5030F systems)
    • Metro Mirror (synchronous copy)
    • Global Mirror (asynchronous copy)
    • Data migration
  • Space management:
    • IBM Easy Tier® function to migrate the most frequently used data to higher-performance storage
    • Metering of service quality when combined with IBM Spectrum® Connect. For information, refer to the IBM Spectrum Connect documentation.
    • Thin-provisioned logical volumes
    • Compressed volumes to consolidate storage using data reduction pools
    • Data reduction pools with deduplication
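
For illustration, the following is a minimal sketch of how a point-in-time copy with the FlashCopy function listed above might be driven from the system CLI over SSH. The management address, credentials, volume names, mapping name, and copy rate are placeholder assumptions, and the command names (mkfcmap, startfcmap) should be confirmed against the CLI reference for your code level.

```python
import subprocess

# Placeholder management address; the CLI is reached over SSH on the
# configuration node. Adjust the user and address for your system.
SYSTEM = "superuser@cluster-mgmt-ip"

def run_cli(command: str) -> str:
    """Run one CLI command on the system over SSH and return its output."""
    result = subprocess.run(["ssh", SYSTEM, command],
                            capture_output=True, text=True, check=True)
    return result.stdout

# Create a FlashCopy mapping from an existing source volume to an existing
# target volume, then prepare and start the point-in-time copy.
# Volume names, mapping name, and the background copy rate are illustrative.
print(run_cli("mkfcmap -source db_vol -target db_vol_copy -name db_fcmap -copyrate 50"))
print(run_cli("startfcmap -prep db_fcmap"))
```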

Storwize V5000 Gen2 hardware

The Storwize V5000 Gen2 storage system consists of a set of drive enclosures. The control enclosure contains disk drives and two node canisters. Expansion enclosures contain disk drives and two expansion canisters.

Table 1 summarizes the machine types and models of the Storwize V5000 Gen2 control enclosures.
Table 1. Storwize V5000 Gen2 control enclosures
IBM Storwize V5000 Gen2 model | Warranty | Machine type / model | Description
IBM Storwize V5010 | 1 year | 2077-112 | 12-slot control enclosure for 3.5-inch drives
IBM Storwize V5010 | 1 year | 2077-124 | 24-slot control enclosure for 2.5-inch drives
IBM Storwize V5010 | 3 years | 2078-112 | 12-slot control enclosure for 3.5-inch drives
IBM Storwize V5010 | 3 years | 2078-124 | 24-slot control enclosure for 2.5-inch drives
IBM Storwize V5020 | 1 year | 2077-212 | 12-slot control enclosure for 3.5-inch drives
IBM Storwize V5020 | 1 year | 2077-224 | 24-slot control enclosure for 2.5-inch drives
IBM Storwize V5020 | 3 years | 2078-212 | 12-slot control enclosure for 3.5-inch drives
IBM Storwize V5020 | 3 years | 2078-224 | 24-slot control enclosure for 2.5-inch drives
IBM Storwize V5030 | 1 year | 2077-312 | 12-slot control enclosure for 3.5-inch drives
IBM Storwize V5030 | 1 year | 2077-324 | 24-slot control enclosure for 2.5-inch drives
IBM Storwize V5030 | 3 years | 2078-312 | 12-slot control enclosure for 3.5-inch drives
IBM Storwize V5030 | 3 years | 2078-324 | 24-slot control enclosure for 2.5-inch drives
IBM Storwize V5030 | 3 years | 2078-U5A | 24-slot control enclosure for 2.5-inch drives (Utility mode)
IBM Storwize V5030F | 1 year | 2077-AF3 | 24-slot control enclosure for 2.5-inch flash drives
IBM Storwize V5030F | 3 years | 2078-AF3 | 24-slot control enclosure for 2.5-inch flash drives
The Storwize V5000 Gen2 systems support the expansion enclosures that are listed in Table 2.
Table 2. Storwize V5000 Gen2 expansion enclosures
IBM Storwize V5000 Gen2 model | Warranty | Machine type / model | Description
Storwize V5010, V5020, and V5030 | 1 year | 2077-12F | 12-slot expansion enclosure for 3.5-inch drives
Storwize V5010, V5020, and V5030 | 1 year | 2077-24F | 24-slot expansion enclosure for 2.5-inch drives
Storwize V5010, V5020, and V5030 | 1 year | 2077-92F | 92-slot expansion enclosure for 2.5-inch or 3.5-inch drives and two secondary expander modules
Storwize V5010, V5020, and V5030 | 3 years | 2078-12F | 12-slot expansion enclosure for 3.5-inch drives
Storwize V5010, V5020, and V5030 | 3 years | 2078-24F | 24-slot expansion enclosure for 2.5-inch drives
Storwize V5010, V5020, and V5030 | 3 years | 2078-92F | 92-slot expansion enclosure for 2.5-inch or 3.5-inch drives and two secondary expander modules
Storwize V5030F | 1 year | 2077-AFF | 24-slot expansion enclosure for 2.5-inch flash drives
Storwize V5030F | 1 year | 2077-A9F | 92-slot expansion enclosure for flash drives and two secondary expander modules
Storwize V5030F | 3 years | 2078-AFF | 24-slot expansion enclosure for 2.5-inch flash drives
Storwize V5030F | 3 years | 2078-A9F | 92-slot expansion enclosure for flash drives and two secondary expander modules

Figure 1 shows a Storwize V5000 Gen2 system as a traditional RAID storage system. The internal drives are configured into arrays. Volumes are created from those arrays.

Figure 1. Example of a system as a RAID storage system
This figure shows an overview of a RAID storage system.

The two node canisters are known as an I/O group. The node canisters are responsible for serving I/O on the volumes. Because a volume is served by both node canisters, no availability is lost if one node canister fails or is taken offline. The Asymmetric Logical Unit Access (ALUA) features of SCSI are used to disable the I/O for a node before it is taken offline or when a volume cannot be accessed through that node.

System topology

The system topology can be set up in several different ways.
  • Standard topology, where all node canisters in the system are at the same site.
    Figure 2. Example of a standard system topology
    This figure shows an example of a standard system topology.
  • HyperSwap topology, where the system consists of at least two I/O groups. Each I/O group is at a different site. Both nodes of an I/O group are at the same site. A volume can be active on two I/O groups so that it can immediately be accessed by the other site when a site is not available. The HyperSwap topology is supported on Storwize V5030 and Storwize V5030F systems (a configuration sketch follows this list).
    Figure 3. Example of a HyperSwap system topology
    This figure shows an example of a HyperSwap system topology.
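
As referenced in the list above, the following is a minimal sketch of how the system topology might be inspected and changed from the CLI over SSH. The management address and credentials are placeholders, the command names (lssystem, chsystem -topology) follow IBM Spectrum Virtualize CLI conventions but should be verified for your release, and the site and node assignments that HyperSwap also requires are omitted here.

```python
import subprocess

SYSTEM = "superuser@cluster-mgmt-ip"  # placeholder management address

def run_cli(command: str) -> str:
    """Run one CLI command on the configuration node over SSH."""
    result = subprocess.run(["ssh", SYSTEM, command],
                            capture_output=True, text=True, check=True)
    return result.stdout

# Show system-wide properties, including the current topology field.
print(run_cli("lssystem"))

# Switch a Storwize V5030 or V5030F system to the HyperSwap topology.
# Sites must already be defined and assigned before this command succeeds.
run_cli("chsystem -topology hyperswap")
```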

Volume types

You can create the following types of volumes on the system.
  • Basic volumes, where a single copy of the volume is cached in one I/O group. Basic volumes can be established in any system topology; however, Figure 4 shows a standard system topology (a CLI sketch for creating volumes follows this list).
    Figure 4. Example of a basic volume
    This figure shows an example of a basic volume in a standard system configuration.
  • Mirrored volumes, where copies of the volume can either be in the same storage pool or in different storage pools. The volume is cached in a single I/O group, as Figure 5 shows. Typically, mirrored volumes are established in a standard system topology.
    Figure 5. Example of mirrored volumes
    This figure shows an example of a mirrored volume in a standard system configuration.
  • HyperSwap volumes, where copies of a single volume are in different storage pools that are on different sites. As Figure 6 shows, the volume is cached in two I/O groups that are on different sites. These volumes can be created only on Storwize V5030 and Storwize V5030F systems when the system topology is HyperSwap.
    Figure 6. Example of HyperSwap volumes
    This figure shows an example of HyperSwap volumes in a HyperSwap system configuration.
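
Continuing the SSH pattern above, the following sketch shows how the basic and mirrored volumes described in this list might be created from the CLI. Pool names, sizes, and volume names are placeholder assumptions, and the mkvdisk syntax should be checked against the CLI reference for your code level.

```python
import subprocess

SYSTEM = "superuser@cluster-mgmt-ip"  # placeholder management address

def run_cli(command: str) -> str:
    """Run one CLI command on the configuration node over SSH."""
    result = subprocess.run(["ssh", SYSTEM, command],
                            capture_output=True, text=True, check=True)
    return result.stdout

# Basic volume: a single copy in one storage pool, cached in one I/O group.
run_cli("mkvdisk -mdiskgrp Pool0 -size 100 -unit gb -name basic_vol")

# Mirrored volume: two copies, here placed in two different storage pools,
# still cached in a single I/O group.
run_cli("mkvdisk -mdiskgrp Pool0:Pool1 -copies 2 -size 100 -unit gb -name mirrored_vol")
```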

System management

The Storwize V5000 Gen2 nodes operate as a single system and present a single point of control for system management and service. System management and error reporting are provided through an Ethernet interface to one of the nodes in the system, which is called the configuration node. The configuration node runs a web server and provides a command-line interface (CLI). The configuration node is a role that either node can take. If the current configuration node fails, the other node becomes the configuration node. Each node also provides a command-line interface and web interface for servicing hardware.
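
As an illustration of the management interfaces described above, here is a minimal sketch that connects to the system's management address (answered by the current configuration node) over SSH and runs two informational CLI commands. It assumes the third-party paramiko library, placeholder credentials, and that the command names (lssystem, lsnodecanister) match your release; verify them against the CLI documentation.

```python
import paramiko

# Placeholder address and credentials for the system's management IP,
# which is served by whichever node currently holds the configuration role.
client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("cluster-mgmt-ip", username="superuser", password="passw0rd")

# lssystem reports system-wide properties; lsnodecanister lists the node
# canisters, including which one is currently the configuration node.
for command in ("lssystem", "lsnodecanister"):
    _, stdout, _ = client.exec_command(command)
    print(stdout.read().decode())

client.close()
```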

Fabric types

I/O operations between hosts and Storwize V5000 Gen2 nodes and between Storwize V5000 Gen2 nodes and RAID storage systems use the SCSI standard. The Storwize V5000 Gen2 nodes communicate with each other by using private SCSI commands.

Table 3 shows the fabric types that can be used for communicating between hosts, nodes, and RAID storage systems. All installed fabric types can be used at the same time.

Table 3. Storwize V5000 Gen2 communications types
Communications type | Host to Storwize V5000 Gen2 | Storwize V5000 Gen2 to storage system
iSCSI (1 Gbps Ethernet) | Yes | No
iSCSI (10 Gbps Ethernet) | Yes (see note) | No
iSCSI (25 Gbps Ethernet) | Yes | No
Fibre Channel over Ethernet | Yes, with the 10 Gbps Ethernet host interface adapter | Yes, with the 10 Gbps Ethernet host interface adapter
Note: Storwize V5010 and Storwize V5020 systems must have the optional 10 Gbps Ethernet host interface adapter to use Ethernet for host attachment. Storwize V5030 and Storwize V5030F systems have a 10 Gbps Ethernet port that is preinstalled on the system for host attachment.
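
As a small host-attachment example for the fabric types in Table 3, the following sketch defines a host object for an iSCSI-attached server and maps a volume to it. The host name, initiator IQN, and volume name are placeholders, and the mkhost and mkvdiskhostmap syntax should be verified against the CLI reference for your code level.

```python
import subprocess

SYSTEM = "superuser@cluster-mgmt-ip"  # placeholder management address

def run_cli(command: str) -> str:
    """Run one CLI command on the configuration node over SSH."""
    result = subprocess.run(["ssh", SYSTEM, command],
                            capture_output=True, text=True, check=True)
    return result.stdout

# Define a host object for a server that attaches over iSCSI, identified
# by its initiator IQN (placeholder value shown here).
run_cli("mkhost -name appserver1 -iscsiname iqn.1994-05.com.example:appserver1")

# Map an existing volume to the new host so it is presented over the fabric.
run_cli("mkvdiskhostmap -host appserver1 basic_vol")
```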