Storwize V5000 overview

Each IBM® Storwize® V5000 system is a virtualizing RAID storage system.

Storwize V5000 software

IBM Storwize V5000 is built with IBM Spectrum Virtualize™ software, which is part of the IBM Spectrum Storage™ family.

The system also provides the following functions:
  • Large scalable cache
  • Copy Services
    • IBM FlashCopy® (point-in-time copy) function, including thin-provisioned FlashCopy to make multiple targets affordable
    • IBM HyperSwap® (active-active copy) function
    • Metro Mirror (synchronous copy)
    • Global Mirror (asynchronous copy)
    • Data migration
  • Space management
    • IBM Easy Tier® function to migrate the most frequently used data to higher-performance storage
    • Metering of service quality when combined with IBM Spectrum Control Base Edition. For information, refer to the IBM Spectrum Control Base Edition documentation.
    • Thin-provisioned logical volumes
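Point-in-time copy functions such as FlashCopy are commonly implemented with copy-on-write, where the target stores only the blocks that change on the source after the copy is taken, which is what makes thin-provisioned targets affordable. The following is an illustrative sketch of that general technique, not IBM code; all class and method names are hypothetical.

```python
# Illustrative sketch (not IBM code) of a thin-provisioned point-in-time
# copy using copy-on-write. All names here are hypothetical.
class Volume:
    def __init__(self, blocks):
        self.blocks = list(blocks)

class PointInTimeCopy:
    """Thin-provisioned target: stores only blocks later changed on the source."""
    def __init__(self, source):
        self.source = source
        self.saved = {}  # block index -> original data at copy time

    def write_source(self, index, data):
        # Preserve the original block the first time it is overwritten.
        if index not in self.saved:
            self.saved[index] = self.source.blocks[index]
        self.source.blocks[index] = data

    def read_target(self, index):
        # The target sees the preserved block if one exists, else the source.
        return self.saved.get(index, self.source.blocks[index])

vol = Volume(["a", "b", "c"])
snap = PointInTimeCopy(vol)
snap.write_source(1, "B")  # the source changes after the copy is taken
print(vol.blocks)                               # ['a', 'B', 'c']
print([snap.read_target(i) for i in range(3)])  # ['a', 'b', 'c']
```

Only one block is stored on the target here, which is why multiple thin-provisioned targets of the same source remain space-efficient.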

Storwize V5000 hardware

The Storwize V5000 storage system consists of a set of drive enclosures. A control enclosure contains disk drives and two node canisters. Expansion enclosures contain disk drives and two expansion canisters.

Figure 1 shows a Storwize V5000 system as a traditional RAID storage system. The internal drives are configured into arrays. Volumes are created from those arrays.

Figure 1. Storwize V5000 system as a RAID storage system
This figure shows an overview of a RAID storage system.

The two node canisters are known as an I/O group. The node canisters are responsible for serving I/O on the volumes. Because a volume is served by both node canisters, there is no loss of availability if one node canister fails or is taken offline. The Asymmetric Logical Unit Access (ALUA) features of SCSI are used to disable the I/O for a node before it is taken offline or when a volume cannot be accessed via that node.
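A host's multipath layer uses the ALUA state reported for each path to decide where to send I/O: it prefers paths in the active/optimized state and falls back to active/non-optimized paths, so setting a node's paths to an unavailable state moves I/O to the partner node. The following is an illustrative sketch of that path-selection logic, not IBM or host multipath code; the names and state labels are hypothetical simplifications.

```python
# Illustrative sketch (not IBM code): how a host multipath layer can use
# SCSI ALUA target port group states to pick a path to a volume.
# State labels are hypothetical simplifications of the ALUA states.
ACTIVE_OPTIMIZED, ACTIVE_NONOPTIMIZED, UNAVAILABLE = "ao", "an", "u"

def choose_path(paths):
    """paths: list of (node_name, alua_state); prefer optimized paths."""
    for wanted in (ACTIVE_OPTIMIZED, ACTIVE_NONOPTIMIZED):
        for node, state in paths:
            if state == wanted:
                return node
    raise IOError("no usable path to volume")

paths = [("node1", ACTIVE_OPTIMIZED), ("node2", ACTIVE_NONOPTIMIZED)]
print(choose_path(paths))   # node1

# Before node1 is taken offline, its paths are marked unavailable,
# so the host transparently moves I/O to node2.
paths = [("node1", UNAVAILABLE), ("node2", ACTIVE_NONOPTIMIZED)]
print(choose_path(paths))   # node2
```

Because both node canisters in the I/O group can serve the volume, this redirection happens without loss of availability.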

Volume types

You can create the following types of volumes on the system:
  • Basic volumes, where a single copy of the volume is cached in one I/O group. Basic volumes can be established in any system topology; Figure 2 shows one in a standard system topology.
    Figure 2. Example of a basic volume
    This figure shows an example of a basic volume in a standard system configuration.
  • Mirrored volumes, where copies of the volume can either be in the same storage pool or in different storage pools. The volume is cached in a single I/O group. Typically, mirrored volumes are established in a standard system topology.
    Figure 3. Example of mirrored volumes
    This figure shows an example of a mirrored volume in a standard system configuration.
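The essential behavior of a mirrored volume is that writes are applied to both copies while reads can be served from any synchronized copy, so the volume survives the loss of one copy (for example, one storage pool going offline). The following is an illustrative sketch of that behavior, not IBM code; the class and attribute names are hypothetical.

```python
# Illustrative sketch (not IBM code): a mirrored volume keeps two copies
# (e.g. in two storage pools), writes to both, and reads from any online copy.
class MirroredVolume:
    def __init__(self, size):
        self.copies = [bytearray(size), bytearray(size)]
        self.online = [True, True]

    def write(self, offset, data):
        # A write must reach every online copy to keep them synchronized.
        for i, copy in enumerate(self.copies):
            if self.online[i]:
                copy[offset:offset + len(data)] = data

    def read(self, offset, length):
        # A read can be satisfied by any copy that is still online.
        for i, copy in enumerate(self.copies):
            if self.online[i]:
                return bytes(copy[offset:offset + length])
        raise IOError("no online copy of the volume")

v = MirroredVolume(8)
v.write(0, b"data")
v.online[0] = False          # one storage pool goes offline
print(v.read(0, 4))          # b'data' -- still served from the second copy
```

Placing the two copies in different storage pools is what lets the volume tolerate the failure of an entire pool.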

System topology

The Storwize V5000 node canisters can be arranged in different topologies.
Note: You cannot mix I/O groups of different topologies in the same system.
  • Standard topology, where all node canisters in the system are at the same site.
    Figure 4. Example of a standard system topology
    This figure shows an example of a standard system topology.
  • HyperSwap topology, where the system consists of at least two I/O groups, each at a different site; both nodes of an I/O group are at the same site. A volume can be active on two I/O groups, so that if one site becomes unavailable, the volume can immediately be accessed from the other site.
    Figure 5. Example of a HyperSwap system topology
    This figure shows an example of a HyperSwap system topology.

System management

The Storwize V5000 nodes operate as a single system and present a single point of control for system management and service. System management and error reporting are provided through an Ethernet interface to one of the nodes in the system, which is called the configuration node. The configuration node runs a web server and provides a command-line interface (CLI). The configuration node is a role that either node can take. If the current configuration node fails, the other node becomes the configuration node. Each node also provides a command-line interface and web interface for performing hardware service actions.
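The configuration node is a role rather than a fixed piece of hardware: one node holds it at a time, and if that node fails, the surviving node takes the role over so management access continues. The following is an illustrative sketch of that failover behavior, not IBM code; the class and method names are hypothetical.

```python
# Illustrative sketch (not IBM code): the configuration node is a role that
# either node can hold; if the holder fails, a surviving node takes it over.
class Cluster:
    def __init__(self, nodes):
        self.nodes = set(nodes)
        self.config_node = next(iter(self.nodes))  # one node holds the role

    def node_failed(self, node):
        self.nodes.discard(node)
        if node == self.config_node:
            if not self.nodes:
                raise RuntimeError("no nodes left to take the role")
            # The role moves to a surviving node; management access continues.
            self.config_node = next(iter(self.nodes))

c = Cluster(["node1", "node2"])
failed = c.config_node
c.node_failed(failed)
print(c.config_node)   # the other node now answers management requests
```

From the administrator's point of view nothing changes: the same Ethernet interface, web server, and CLI remain available through whichever node currently holds the role.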

Fabric types

I/O operations between hosts and Storwize V5000 nodes and between Storwize V5000 nodes and RAID storage systems are performed by using the SCSI standard. The Storwize V5000 nodes communicate with each other by using private SCSI commands.

Fibre Channel and Fibre Channel over Ethernet (FCoE) connectivity is supported on Storwize V5000 with the optional FCoE feature installed.

Table 1 shows the fabric types that can be used for communicating between hosts, nodes, and RAID storage systems. All installed fabric types can be used at the same time.

Table 1. Storwize V5000 communications types

  Communications type         | Host to Storwize V5000                            | Storwize V5000 to storage system
  Fibre Channel SAN           | Yes, with Fibre Channel host interface adapter    | Yes, with Fibre Channel host interface adapter
  iSCSI (1 Gbps Ethernet)     | Yes                                               | No
  iSCSI (10 Gbps Ethernet)    | Yes (see note 1)                                  | No
  Fibre Channel over Ethernet | Yes, with 10 Gbps Ethernet host interface adapter | Yes, with 10 Gbps Ethernet host interface adapter
  Serial-attached SCSI (SAS)  | Yes                                               | No (see note 2)

  Note 1: Storwize V5000 systems must have the optional 10 Gbps Ethernet host interface adapter to use Ethernet for host attachment.
  Note 2: The system does not support SAS-attached storage systems; however, the system does support a one-time migration of external storage data to the system. In the management GUI, select Pools > Storage Migration > New Migration to launch the storage migration wizard.