Introducing the Elastic Storage Server for Power

This section describes the components of the IBM Elastic Storage Server (ESS) for Power®.

The Elastic Storage Server is a high-performance, GPFS™ network shared disk (NSD) solution that is made up of one or more building blocks. A building block is a pair of servers with shared disk enclosures attached. See Building-block configurations for more information.

An Elastic Storage Server for Power system is available in these models:
  • 5146-GL2
  • 5146-GL4
  • 5146-GL6
  • 5146-GS1
  • 5146-GS2
  • 5146-GS4
  • 5146-GS6
Throughout this document, these models are referred to as: GL2, GL4, GL6, GS1, GS2, GS4, and GS6.

GL2 and GL4 systems must be installed in a rack with a front door, rear door, and side panels for electromagnetic interference (EMI) compliance.

An ESS system consists of the following components:
  • IBM® Power System S822L servers: 8247-22L (default) or 8284-22A (alternative).

    These servers are called I/O server nodes. Two I/O server nodes are required for each building block.

  • An IBM Power System S812L server (8247-21L) for xCAT.

    This server is called the management server. An xCAT server is required to discover the I/O server nodes (working with the HMC), provision the operating system (OS) on the I/O server nodes, and deploy the ESS software on the management node and I/O server nodes. One management server is required for each ESS system composed of one or more building blocks.

    A management server is a required part of your ESS system. Typically, the management server is ordered with the initial building block, though you can use an existing customer-supplied system. Building blocks ordered to expand an existing ESS system do not require an additional management server; a single management server can support multiple building blocks in the same GPFS cluster.

    Typically, the ESS GUI is installed on the management server. The GUI uses the management server to access hardware-related information about the I/O server nodes. The management server also serves as a third GPFS quorum node in a configuration with one building block.
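    For illustration, the following minimal Python sketch shows how common xCAT queries might be scripted from the management server. It is an assumption-laden example, not part of ESS: it assumes a configured xCAT environment, uses the standard xCAT commands lsdef and rpower, and the node names essio1 and essio2 are hypothetical.

        import subprocess

        def xcat(cmd):
            """Run an xCAT CLI command on the management server and return its output."""
            return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

        # List the node definitions known to xCAT (HMC, management server, I/O server nodes).
        print(xcat(["lsdef", "-t", "node"]))

        # Query the power state of the two I/O server nodes of a building block.
        # "essio1" and "essio2" are hypothetical node names.
        print(xcat(["rpower", "essio1,essio2", "stat"]))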

  • One or more client nodes running any operating system and architecture that IBM Spectrum Scale™ supports.
  • An IBM 7042-CR8 Rack-mounted Hardware Management Console (HMC).

    An HMC is required to manage the hardware. The HMC manages POWER8® resources such as processors, memory, and I/O slots. It also provides access to a console window.

    The management server works closely with the HMC to discover hardware, provide a hardware inventory, and manage such hardware-related tasks as rebooting and power-cycling of the nodes.

    An HMC can optionally be included in your order. If an HMC is not ordered with the ESS, you must provide one.

  • Storage interface: three LSI 9206-16e quad-port 6 Gbps SAS adapters (A3F2) per I/O server node.
  • I/O networking options:
    • 2-port 10GbE Mellanox ConnectX-2 adapter (EC27/EC29)
    • 2-port 40GbE Mellanox ConnectX-3 adapter (EC3A)
    • 2-port FDR Mellanox ConnectX-3 Pro adapter
    • 2-port Mellanox MT27600 Connect-IB adapter (up to three per server)
  • Supported I/O adapter configurations: (3 x SAS) plus any combination of up to three of the following adapters per server: InfiniBand (EL3D), 10 GbE (EL27/EL2Z/EL3X/EL40), 40 GbE (EC3A).
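    Because the three network adapter slots can hold any mix of those three adapter families, the set of valid configurations is small enough to enumerate. The following illustrative Python sketch (not part of ESS) lists every unordered combination of three adapters, with repetition allowed:

        import itertools

        # The three network adapter families named above.
        adapters = ["InfiniBand (EL3D)", "10GbE (EL27/EL2Z/EL3X/EL40)", "40GbE (EC3A)"]

        # Any combination of three, order irrelevant, repetition allowed:
        for combo in itertools.combinations_with_replacement(adapters, 3):
            print(combo)

        # 10 combinations in total: C(3 + 3 - 1, 3) = 10.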

  • MPT SAS SCSI controller cards: SAS2308 PCI-Express (three per server).
  • RAID controllers: IBM PCI-E IPR SAS Adapter. One IPR adapter is installed per server. This adapter provides RAID 10 capability for the OS boot drive. The management server and all I/O server nodes are configured with a RAID 10 boot drive.
  • Switches:
    ESS is compatible with industry-standard InfiniBand and Ethernet switches. The following switches can be included with your ESS order.
    • One or more 1 Gigabit Ethernet (GbE) switches or virtual local-area networks (VLANs) providing two isolated subnets: IBM RackSwitch G7028 (7120-24L) or IBM RackSwitch G8052 (7120-48E).

      These networks are used for the xCAT network and the service network. The xCAT network is required for the management server to communicate with the HMC and target I/O server nodes for installation and management. The service network is required by the HMC to communicate with the I/O server nodes and the management server's flexible service processor (FSP).

    • A high-speed 10GbE or 40GbE switch for the cluster network: IBM 10/40 GbE RackSwitch G8264 (7120-64C).
    • A high-speed InfiniBand switch.
  • Rack console: IBM 7316-TF4
  • Enterprise rack: IBM 7014 Rack Model T42 (7014-T42)
  • Building block rack: IBM 7042 Rack Model T42 (7042-T42)
  • SAS cables for attaching the I/O server nodes to the storage enclosures: 4 to 12 per I/O server node, or 8 to 24 per ESS building block.
  • One to six DCS3700 JBOD 60-drive enclosures or EXP24S JBOD 24-drive enclosures:
    • DCS3700 disk enclosures (1818-80E, 60 drive slots)
      • GL2 (2 enclosures): (116 x 2TB 7.2K NL-SAS HDDs) + (2 x 400GB SSDs)
      • GL2 (2 enclosures): (116 x 4TB 7.2K NL-SAS HDDs) + (2 x 400GB SSDs)
      • GL2 (2 enclosures): (116 x 6TB 7.2K NL-SAS HDDs) + (2 x 400GB SSDs)
      • GL4 (4 enclosures): (232 x 2TB 7.2K NL-SAS HDDs) + (2 x 400GB SSDs)
      • GL4 (4 enclosures): (232 x 4TB 7.2K NL-SAS HDDs) + (2 x 400GB SSDs)
      • GL4 (4 enclosures): (232 x 6TB 7.2K NL-SAS HDDs) + (2 x 400GB SSDs)
      • GL6 (6 enclosures): (348 x 2TB 7.2K NL-SAS HDDs) + (2 x 400GB SSDs)
      • GL6 (6 enclosures): (348 x 4TB 7.2K NL-SAS HDDs) + (2 x 400GB SSDs)
      • GL6 (6 enclosures): (348 x 6TB 7.2K NL-SAS HDDs) + (2 x 400GB SSDs)
    • IBM Power Systems™ EXP24S I/O Drawers (FC 5887, 24 drive slots)
      • GS1: (24 x 400GB 2.5-inch SSDs)
      • GS1: (24 x 800GB 2.5-inch SSDs)
      • GS2: (48 x 400GB 2.5-inch SSDs)
      • GS2: (48 x 800GB 2.5-inch SSDs)
      • GS2: (46 x 1.2TB 10K SAS 2.5-inch HDDs) + (2 x 200GB 2.5-inch SSDs)
      • GS4: (96 x 400GB 2.5-inch SSDs)
      • GS4: (96 x 800GB 2.5-inch SSDs)
      • GS4: (94 x 1.2TB 10K SAS 2.5-inch HDDs) + (2 x 200GB 2.5-inch SSDs)
      • GS6: (142 x 1.2TB 10K SAS 2.5-inch HDDs) + (2 x 400GB 2.5-inch SSDs)

    The available space per disk varies, depending on the disk size. For example, a 4TB disk provides 3.63 TiB of available space.
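    The gap between the marketed size and the available space is the usual decimal-versus-binary unit difference: drives are sold in terabytes (10^12 bytes), while space is reported in tebibytes (2^40 bytes). A short illustrative Python calculation, using the per-model drive counts from the list above:

        def tb_to_tib(tb):
            """Convert marketed terabytes (10**12 bytes) to tebibytes (2**40 bytes)."""
            return tb * 10**12 / 2**40

        print(f"4TB disk = {tb_to_tib(4):.3f} TiB")  # 3.638, i.e. the 3.63 TiB cited above

        # Approximate raw HDD capacity per GL model with 6TB drives:
        for model, drives in [("GL2", 116), ("GL4", 232), ("GL6", 348)]:
            print(f"{model}: {tb_to_tib(drives * 6):,.0f} TiB raw")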

    The type and number of enclosures supported depend on the model, as do the type and capacity of the individual disks. See The ESS storage enclosures for more information.

  • Operating system: Red Hat Enterprise Linux 7.1 (installed on the management server and the I/O server nodes)
  • Storage management software: Advanced Edition or Standard Edition of IBM Spectrum Scale 4.2.0, with the most current fixes (see the release notes for the fix levels). Includes IBM Spectrum Scale RAID. See IBM Spectrum Scale RAID: Administration for more information.