RAID properties

A Redundant Array of Independent Disks (RAID) is a method of configuring drives for high availability and high performance.

RAID is an ordered collection, or group, of physical devices (disk drive or flash drive modules) that are used to define logical volumes or devices. An array is a type of MDisk that is made up of disk drives. These drives are members of the array. Each array has a RAID level. RAID levels provide different degrees of redundancy and performance, and they have different restrictions on the number of members in the array.

Storwize® V7000 Unified supports hot-spare drives. When a RAID member drive fails, the system automatically replaces the failed member with a hot-spare drive and resynchronizes the array to restore its redundancy.

Figure 1 shows the relationships of the RAID components on the system.

Figure 1. RAID objects

Supported RAID levels are RAID 0, RAID 1, RAID 5, RAID 6, and RAID 10.

RAID 0
RAID 0 arrays have no redundancy and do not support hot-spare takeover.
RAID 1
RAID 1 provides disk mirroring, which duplicates data between two drives. A RAID 1 array is internally identical to a two-member RAID 10 array.
RAID 5
RAID 5 arrays stripe data over the member drives with one parity strip on every stripe. RAID 5 arrays have single redundancy, with higher usable capacity than RAID 10 arrays but with some performance penalty. RAID 5 arrays can tolerate no more than one member drive failure.
RAID 6
RAID 6 arrays stripe data over the member drives with two parity strips on every stripe. A RAID 6 array can tolerate any two concurrent member drive failures.
RAID 10
RAID 10 arrays stripe data over mirrored pairs of drives. RAID 10 arrays have single redundancy. The mirrored pairs rebuild independently. One member out of every pair can be rebuilding or missing at the same time. RAID 10 combines the features of RAID 0 and RAID 1.

Table 1 compares the characteristics of the RAID levels.

Table 1. RAID level comparison
Level     Drive count (DC) [1]        Approximate array capacity    Redundancy [2]
RAID 0    1 - 8                       DC * DS [3]                   None
RAID 1    2                           DS                            1
RAID 5    3 - 16                      (DC - 1) * DS                 1
RAID 6    5 - 16                      Less than (DC - 2) * DS       2
RAID 10   2 - 16, even numbers only   (DC/2) * DS                   1 [4]
  1. In the management GUI, you cannot create arrays of all sizes because the available sizes depend on how the drives are configured.
  2. Redundancy is the number of drive failures that the array can tolerate. In some circumstances, an array can tolerate more than one drive failure. For more details, see "Drive failures and redundancy."
  3. DS is the drive size.
  4. Between 1 and DC/2.
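
As an illustration of the capacity formulas, assume a hypothetical set of 900 GB drives: eight drives yield approximately 8 * 900 GB = 7200 GB at RAID 0, (8 - 1) * 900 GB = 6300 GB at RAID 5, slightly less than (8 - 2) * 900 GB = 5400 GB at RAID 6, and (8/2) * 900 GB = 3600 GB at RAID 10.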

Array initialization

When an array is created, the array members are synchronized with each other by a background initialization process. The array is available for I/O during this process, and initialization does not affect the array's ability to tolerate member drive failures.
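
For example, you can monitor the background initialization from the CLI. The following command is a sketch only; the array name mdisk3 is illustrative, and you should verify the command against the CLI reference for your code level:

  lsarrayinitprogress mdisk3

The command reports the initialization progress for the specified array; run without an argument, it typically lists the progress for all arrays.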

Drive failures and redundancy

If an array has the necessary redundancy, a member drive is removed from the array when it fails or when access to it is lost. If a suitable spare drive is available, the spare is taken into the array and starts to synchronize.

Each array has a set of goals that describe the preferred location and performance of each array member. A sequence of drive failures and hot-spare takeovers can leave an array unbalanced, that is, with members that do not match these goals. When appropriate drives are available, the system automatically rebalances such arrays.

Rebalancing is achieved by using concurrent exchange, which migrates data between drives without impacting redundancy.

You can manually start an exchange, and the array goals can also be updated to facilitate configuration changes.
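
For example, you can review the member goals and start an exchange from the CLI. This is a sketch only; the array name mdisk2, member index 3, and drive ID 21 are illustrative, and the exact charraymember syntax should be verified against the CLI reference for your code level:

  lsarraymembergoals mdisk2
  charraymember -member 3 -newdrive 21 mdisk2

The first command lists the location and performance goals of each member of the array; the second starts a concurrent exchange of member 3 onto drive 21.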

RAID configuration guidelines

RAID can be configured through the System Setup wizard when you first install your system, or later through the Configure Internal Storage wizard. You can either use the recommended configuration, which is the fully automatic configuration, or you can set up a different configuration.

If you use the GUI-recommended storage configuration, all available drives are configured based on recommended values for the RAID level and drive class. The recommended configuration uses all the drives to build arrays that are protected with the appropriate number of spare drives and, where possible, takes advantage of IBM® Easy Tier® to create hybrid pools. The recommended configuration avoids mixing drives of different sizes and speeds in the same array.

The management GUI also provides a set of presets to help you configure for different RAID types. You can tune RAID configurations slightly, based on best practices. The presets vary according to how the drives are configured. Selections include the drive class, the preset from the list that is shown, whether to configure spares, and the number of drives to provision. If you choose to optimize for performance, the system creates RAID arrays of uniform size and can leave some drives unused. If you choose to optimize for capacity, all candidate drives are used, but some RAID arrays can be larger than others.

For greatest control and flexibility, you can use the mkarray command-line interface (CLI) command to configure RAID on your system.
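
For example, the following mkarray command is a sketch that creates a RAID 6 array from eight drives and adds it as an MDisk to a storage pool. The drive IDs and the pool name Pool0 are illustrative, and you should confirm the supported parameters for your code level:

  mkarray -level raid6 -drive 0:1:2:3:4:5:6:7 Pool0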

If your system has a mixture of flash, enterprise, and nearline drives, you can use the Easy Tier function to move the most frequently used data to high-performance storage.

Spare drive protection and goals

Each array member is protected by a set of spare drives that are valid matches. Some of these spare drives are more suitable than others. For example, some spare drives could degrade the array performance, availability, or both. For a given array member, a good spare drive is online and is on the same chain as the array member. A good spare drive has one of the following characteristics:
  • An exact match of member goal capacity, performance, and location.
  • A performance match: the spare drive has a capacity that is the same or larger and has the same or better performance.
A good spare drive is also one of the following:
  • A drive with a use of spare.
  • A concurrent-exchange old drive that is destined to become a hot-spare drive when the exchange completes.

In the CLI, the array-member attribute spare_protection is the number of good spares for that member. The array attribute spare_protection_min is the minimum of the spare protection of the members of the array.

The array attribute spare_goal is the number of good spares that are needed to protect each array member. This attribute is set when the array is created and can be changed with the charray command.

If the number of good spares that protect an array member is below the array spare goal, you receive event error 084300.
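
For example, the following CLI commands are a sketch of how these attributes can be displayed and how the spare goal can be changed. The array name mdisk3 and the goal value are illustrative; verify the output fields and parameters for your code level:

  lsarray mdisk3
  lsarraymember mdisk3
  charray -sparegoal 2 mdisk3

In this sketch, lsarray and lsarraymember display the array and array-member attributes, and charray -sparegoal sets a new spare goal of 2 for the array.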

Slow write priority settings

When a redundant array is processing read and write I/O, the performance of the array is bound by the performance of the slowest member drive. Member drive performance can be far worse than usual when drives run internal error recovery procedures (ERPs), when the SAS network is unstable, or when too much work is driven to the RAID array. In these situations, arrays that offer redundancy can accept a short interruption to redundancy so that they can avoid writing to, or reading from, the slow member. Writes that map to a drive that is performing badly are committed to the other copy or parity, and then completed with good status (assuming no other failures). When the member drive recovers, redundancy is restored by a background process that writes the strips that were marked out of sync while the member was slow.

The use of this technique is governed by the array's slow_write_priority attribute, which defaults to latency. When set to latency, the array is allowed to become out of sync to smooth out poor member performance. You can change this attribute to redundancy by using the charray command. When set to redundancy, the array is not allowed to become out of sync, but it can still avoid read performance loss by satisfying reads for the slow member from redundant paths.
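
For example, the following command is a sketch of changing the setting for an array named mdisk5 to redundancy; the array name is illustrative, and the parameter name should be verified against the charray documentation for your code level:

  charray -slowwritepriority redundancy mdisk5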

When the RAID array uses latency mode, or avoids reading from a member in redundancy mode, the system evaluates the drive regularly to assess when it becomes a reliable part of the system again. If the drive never offers good performance, or causes too many performance failures in the RAID array, the system fails the drive to prevent ongoing exposure to the poorly performing hardware. The system does this only if there is no other detectable explanation for the drive's poor performance.

Drive offline incremental rebuild

When a drive goes offline in an internal RAID array, the system attempts to avoid a hot-spare takeover. For a 60-second period, the array instead records where new writes occur. If the drive comes back online, the system completes an incremental rebuild of only the regions that were written, rather than a full component rebuild. This technique is used regardless of the array's slow_write_priority setting because avoiding a spare takeover maintains the highest system availability.

Drive replacement

A lit fault LED on a drive indicates that the drive is marked as failed and is no longer in use by the system. When the system detects that such a failed drive is replaced, it configures the replacement drive as a spare, and the failed drive is automatically removed from the configuration. The new spare drive is then used to fulfill the array membership goals of the system.
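
For example, you can check the state of a replacement drive from the CLI. The following commands are a sketch; drive ID 10 is illustrative, and the second command is needed only if the drive does not become a spare automatically on your code level:

  lsdrive 10
  chdrive -use spare 10

The first command shows the drive's status and use; the second explicitly sets the drive's use to spare.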