A tour of the hardware in IBM PureApplication System

IBM® PureApplication™ System is a cloud computing system in a box, complete with both hardware and software to deploy and execute workloads in a cloud - everything you need to add a private cloud environment to an enterprise data center. This article provides an overview of the hardware included in PureApplication System, using the system console to view the various components.

Bobby Woolf, Certified Consulting IT Specialist, IBM

Bobby Woolf is a Consultant for IBM Software Services for WebSphere (ISSW), focusing on IBM PureApplication System, service-oriented architecture, event-driven architecture, and application integration. He is the author of Exploring IBM SOA Technology and Practice, and a co-author of Enterprise Integration Patterns and The Design Patterns Smalltalk Companion.



October 2013 (First published 20 February 2013)

Also available in Chinese and Russian

Introduction

An IBM PureApplication System (W1500 and W1700 v1.0 and v1.1) is a cloud computing system in a box, complete with both hardware and software to deploy and execute workloads in a cloud: everything you need to add a private cloud environment to an enterprise data center. This article gives an overview of the hardware included in PureApplication System, using the system console to view the various components.

This article is the first of three articles that explain the hardware and software foundation that PureApplication System provides for hosting application runtime environments.

Each article builds on its predecessor to explain this foundation fully.


Classes of PureApplication System

There are currently four classes of PureApplication System:

  • W1500 Small Rack: The W1500 in a short rack with 32, 64, 96, or 128 Intel® CPU cores.
  • W1500 Large Rack: The W1500 in a tall rack with 64, 96, 128, 160, 192, 224, 384, or 608 Intel CPU cores.
  • W1700 Small Rack: The W1700 in a short rack with 32, 64, 96, or 128 Power® CPU cores.
  • W1700 Large Rack: The W1700 in a tall rack with 64, 96, 128, 160, 192, 224, 384, or 608 Power CPU cores.

Table 1 shows a quick comparison of the hardware in those classes. The management node abbreviations in Table 1 are shown below (see the Management nodes section for details):

  • PSM: PureSystems Manager
  • VSM: Virtualization System Manager
  • FSM: PureFlex System Manager
Table 1. PureApplication System classes hardware
|  | W1500 Small Rack | W1700 Small Rack | W1500 Large Rack | W1700 Large Rack |
| --- | --- | --- | --- | --- |
| Rack | 25U, 1.3 m, 19" | 25U, 1.3 m, 19" | 42U, 2.0 m, 19" | 42U, 2.0 m, 19" |
| Node chassis | 1 Flex chassis | 1 Flex chassis | 3 Flex chassis | 3 Flex chassis |
| Processor | Intel Xeon E5-2670, 8 core | POWER7+, 8 core | Intel Xeon E5-2670, 8 core | POWER7+, 8 core |
| Compute nodes | 2, 4, 6, or 8 | 2, 3, or 4 | 4, 6, 8, 10, 12, 14, 24, or 38 | 2, 3, 4, 5, 6, 7, 12, or 19 |
| CPU cores | 32, 64, 96, or 128 | 32, 64, 96, or 128 | 64, 96, 128, 160, 192, 224, 384, or 608 | 64, 96, 128, 160, 192, 224, 384, or 608 |
| Memory | 0.5, 1.0, 1.5, or 2.0 TB RAM | 0.5, 1.0, 1.5, or 2.0 TB RAM | 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 6.0, or 9.5 TB RAM | 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 6.0, or 9.5 TB RAM |
| Storage nodes | 1 V7000 controller, 1 V7000 expansion | 1 V7000 controller, 1 V7000 expansion | 2 V7000 controllers, 2 V7000 expansions | 2 V7000 controllers, 2 V7000 expansions |
| Storage drives | 6 x 400 GB SSDs, 40 x 600 GB HDDs | 6 x 400 GB SSDs, 40 x 600 GB HDDs | 16 x 400 GB SSDs, 80 x 600 GB HDDs | 16 x 400 GB SSDs, 80 x 600 GB HDDs |
| Storage capacity | 2.4 TB SSD, 24.0 TB HDD | 2.4 TB SSD, 24.0 TB HDD | 6.4 TB SSD, 48.0 TB HDD | 6.4 TB SSD, 48.0 TB HDD |
| Management nodes | 2 PSMs, 2 VSMs | 2 PSMs, 2 FSMs | 2 PSMs, 2 VSMs | 2 PSMs, 2 FSMs |
| Network | 2 IBM RackSwitch 64-port 10 Gb Ethernet switches | 2 IBM RackSwitch 64-port 10 Gb Ethernet switches | 2 IBM RackSwitch 64-port 10 Gb Ethernet switches | 2 IBM RackSwitch 64-port 10 Gb Ethernet switches |
| Power | 4 Power Distribution Units (PDUs) | 4 Power Distribution Units (PDUs) | 4 Power Distribution Units (PDUs) | 4 Power Distribution Units (PDUs) |

Each model is upgradeable within its class to a maximum of W1500-128, W1700-128, W1500-608, and W1700-608, respectively. Upgrades can be performed without taking an outage.
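
The core and memory totals in Table 1 follow directly from the per-node specifications described later in this article (16 cores and 256 GB of RAM per Intel compute node, 32 cores and 512 GB per Power compute node). The short Python sketch below is illustrative only and simply reproduces that arithmetic:

```python
# Illustrative only: derive rack-level totals from the per-node figures
# given in the W1500 and W1700 compute node sections of this article.

NODE_SPECS = {
    "W1500": {"cores": 16, "memory_gb": 256},  # Intel x240 compute node
    "W1700": {"cores": 32, "memory_gb": 512},  # Power p460 compute node
}

def rack_capacity(model: str, compute_nodes: int) -> dict:
    """Return total physical cores and RAM (TB) for a given node count."""
    spec = NODE_SPECS[model]
    return {
        "cores": compute_nodes * spec["cores"],
        "memory_tb": compute_nodes * spec["memory_gb"] / 1024,
    }

# A W1500 Large Rack fully populated with 38 Intel nodes
print(rack_capacity("W1500", 38))   # {'cores': 608, 'memory_tb': 9.5}
# A W1700 Large Rack fully populated with 19 Power nodes
print(rack_capacity("W1700", 19))   # {'cores': 608, 'memory_tb': 9.5}
```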


System hardware

Let's explore the hardware in the overall PureApplication System rack for each of the system's classes.

Infrastructure map

You can see a live view of the hardware in your particular PureApplication System in the integrated console. To do so, open the Infrastructure Map. Select System Console > Hardware > Infrastructure Map, as shown in Figure 1. To access the Hardware menu, your user needs to be granted the Hardware administration role, as described in Managing administrative access in IBM PureApplication System.

Figure 1. System hardware menu
System hardware menu

The Infrastructure Map can be viewed as an interactive picture, as shown in Figure 2, or as a hierarchical tree of components. For more information, see the Viewing the hardware infrastructure topic in the PureApplication System Information Center.

Figure 2. W1500-96 Infrastructure Map - graphical view
W1500-96 Infrastructure Map - graphical view

Hardware infrastructure

Similar to the Infrastructure Map graphical view shown in Figure 2, Figure 3 illustrates the layout of the hardware components in a W1500 Large Rack system rack.

Figure 3. IBM PureApplication System W1500-608 hardware
IBM PureApplication System W1500-608 hardware

As shown in the Infrastructure Map and Figure 3, a W1500 Large Rack system is a large rack (a 42U cabinet that is 2.015 m tall by 644 mm wide by 1100 mm deep, weighing 1088 kg fully loaded) that contains these main components from the top down:

  • Top of rack switches (ToRs): This is a pair of IBM System Networking RackSwitch™ G8264 64-port 10 Gb Ethernet switches.
    • To view these, go to System Console > Hardware > Network Devices. The network devices page also lists each chassis' network and SAN switches.
    • To see details about the network configuration, go to System Console > System > Customer Network Configuration.
  • Storage nodes: This is a pair of IBM Storwize® V7000 storage units, each of which is a controller node paired with an expansion node, clustered with one controller managing both units as a single SAN.
    • To view these, go to System Console > Hardware > Storage Devices.
  • Flex chassis: The system contains three IBM Flex System™ Enterprise Chassis Type 7893 chassis, each 10U high (numbered 3, 2, and 1, with 1 at the bottom). A chassis is like a docking station for compute nodes. Compute nodes fit into the chassis like drawers in a filing cabinet. When a node is inserted into a bay, corresponding connectors in the node and the bay snap together. This design facilitates replacing a compute node while the system is running.
    • To view these, go to System Console > Hardware > Flex Chassis.
  • Service laptop: A laptop computer connected to the system is stored in a 1U drawer in the rack between chassis 2 and 3. IBM uses it to administer the system.
  • Power distribution units (PDUs): The rack contains four PDUs, each of which plugs into external power separately. They in turn distribute power to the chassis' power modules, the switches, and the storage nodes.

Each Flex chassis contains several components:

  • Compute nodes: Each chassis contains fourteen compute node bays, arranged in seven rows and two columns. Each bay can hold one Intel compute node; in a W1700, two bays side-by-side hold a single Power compute node. See Figure 4.
    • To view the system's compute nodes, go to System Console > Hardware > Compute Nodes.
  • Management nodes: Chassis 1 and Chassis 2 each use two bays to host the management nodes:
    • Virtualization System Manager (VSM): Hosted in Node bay 1, it manages the compute nodes' hypervisors. In a W1700, this node is instead the PureFlex System Manager (FSM). See Figure 4 and the Management nodes section.
    • PureSystems Manager (PSM): Hosted in Node bay 2, it hosts IBM Workload Deployer (IWD).
    • To view the system's management nodes, go to System Console > Hardware > Management Nodes.
  • Network switches: Each chassis contains a pair of 66-port IBM Flex System Fabric EN4093 10Gb Scalable Switch Ethernet switches to connect its compute nodes. The chassis switches connect to the top of rack switches via a 40 Gbps Ethernet trunk (four 10 Gb Ethernet cables).
  • SAN switches: Each chassis contains a pair of 48-port IBM Flex System FC5022 16Gb SAN Scalable Switch fiber channel switches to connect its compute nodes to the system's shared storage.
    • To view the chassis' network and SAN switches, go to System Console > Hardware > Network Devices.
  • Power modules: Each chassis contains six power supplies, arranged three per side. Power is supplied redundantly so that the chassis and its compute nodes keep working even if one power module fails.
  • Chassis cooling devices: These are ten fans to control the hardware's temperature.

Power model

The hardware in W1700 Large Rack is very similar to that in the corresponding W1500 class. Figure 4 shows the layout of the hardware components in a W1700 Large Rack system rack.

Figure 4. IBM PureApplication System W1700-608 hardware
IBM PureApplication System W1700-608 hardware

This hardware is much like that in a W1500 Large Rack, the main difference being that a W1700 contains Power compute nodes instead of Intel compute nodes. Compared to an Intel compute node, a Power compute node contains twice as many cores and twice as much memory. The case that houses it is twice as wide, so it fills two horizontally adjacent bays in the chassis and each chassis holds half as many Power compute nodes. Other than the compute nodes, the storage and network are the same.

Another difference is that the virtualization management node is a PureFlex System Manager (FSM), not a Virtualization System Manager (VSM). See the Management nodes section.

Smaller rack

The hardware in the W1500 Small Rack is a subset of that in the W1500 Large Rack. Figure 5 illustrates the layout of the hardware components in a W1500 Small Rack system rack.

Figure 5. IBM PureApplication System W1500-64 hardware
IBM PureApplication System W1500-64 hardware

As shown in Figure 5, a W1500 Small Rack is a small rack (a 25U cabinet, 1.267 m tall by 605 mm wide by 997 mm deep, weighing 385 kg, fully loaded) that contains the same main component types as its larger sibling:

  • Two top of rack switches
  • One service laptop
  • Four power modules
  • One storage unit (a controller/expansion pair)
  • One Flex chassis:
    • Four management nodes
    • Bays for up to ten compute nodes

This hardware is very much like that in a W1500 Large Rack, with some differences:

  • A shorter, slightly narrower cabinet (25U vs. 42U)
  • One chassis instead of three
  • A maximum of ten compute nodes
  • One storage unit instead of two
  • The four power modules are stacked horizontally between the storage and the service laptop

The compute nodes in all of the W1500s are the same, and the storage and network work the same.

The hardware in the W1700 Small Rack (that is, the Power small rack) is very similar to that in the W1500 Small Rack (that is, the Intel small rack). The main difference is that rather than containing Intel compute nodes, the single chassis has room for up to five Power compute nodes.

Hardware resiliency

A general theme within the system hardware is that components are redundant for resiliency to avoid a single point of failure. The system not only contains multiple compute nodes, but it also contains two pairs of management nodes, two system network switches, two storage units, and four PDUs. The Large Rack class systems contain three Flex chassis. Each includes a pair of network switches, a pair of SAN switches, and six power supplies. The network and SAN adapters in the compute nodes have multiple ports to increase bandwidth and resiliency.

The hardware also isolates the management of the system from the user workloads. The management nodes (the PureSystems™ Manager and the Virtualization System Manager or PureFlex™ System Manager) are hosted in their own compute nodes. This isolates them from user workloads so that the system management functions run on their own dedicated hardware. It also removes most management overhead from the standard compute nodes so that their resources are dedicated to user workloads. In case of failure, there are two pairs of management nodes, with one node of each pair on standby as a backup for the other.


System components

Let's look at the individual hardware components in greater detail.

W1500 compute node

A compute node, also known more generally as an integrated technology element (ITE) and sometimes referred to as a "blade", is a very compact computer; specifically, it is an ITE that has not been specialized as a management node. The W1500 system contains Intel compute nodes, specifically the IBM Flex System x240 Compute Node, each of which contains these components:

  • CPU: An Intel compute node contains a dual processor, 16 core chip set. The chips are two 8 core 2.6 GHz Intel® Xeon® E5-2670 115W processors, for a total of 16 physical cores that the hypervisor uses as 32 logical cores (that is, the hypervisor can run 32 concurrent threads in these 16 cores). Of the 32 logical cores, 28 are available for use by the user workloads.
  • Memory: An Intel compute node contains 256 GB of RAM: eight 2x16 GB, 1333 MHz, DDR3, LP RDIMMs (1.35 V).
  • Storage: A compute node's SAN interface card is an IBM Flex System FC3172 2-port 8Gb FC Adapter fiber channel adapter. The node also contains two 250 GB 2.5 inch hard disk drives that are unused and ignored by the system.
  • Networking: A compute node's network interface card is a 4-port IBM Flex System CN4054 10Gb Virtual Fabric Adapter Ethernet adapter.
  • Housing: An Intel compute node case is half wide, meaning that each compute node fills a single chassis bay and two can sit side-by-side in adjacent bays (see Figure 3 and Figure 5).

Figure 6 illustrates these components in a compute node.

Figure 6. Compute node components
Compute node components

W1700 compute node

The W1700 system contains Power compute nodes, specifically the IBM Flex System p460 Compute Node, which contains these components:

  • CPU: A Power compute node contains a quad processor, 32 core chip set. The chips are four 8 core 3.61 GHz POWER7+ processors, for a total of 32 physical cores that the hypervisor uses as 128 logical cores (that is, 128 concurrent threads). Of the 128 logical cores, 116 are available for use by the user workloads.
  • Memory: A Power compute node contains 512 GB of RAM: sixteen 2x16 GB, 1066 MHz, DDR3, LP RDIMMs (1.35 V).
  • Storage: A Power compute node uses the same SAN adapter as an Intel compute node, but contains two of them: two IBM Flex System FC3172 2-port 8Gb FC Adapter fiber channel adapters. The node also contains two 250 GB 2.5 inch hard disk drives that are unused and ignored by the system.
  • Networking: A Power compute node's network adapter is similar to the Intel compute node's, and it likewise contains two of them: two IBM Flex System EN4054 4-port 10Gb Ethernet Adapters.
  • Housing: A Power compute node case is full wide, meaning it is twice as wide as an Intel compute node. Each Power compute node fills a horizontal pair of chassis bays (see Figure 4).

A W1700 compute node, compared to a W1500 compute node, contains twice as many cores and twice as much memory. Because it is also twice as big, the rack holds half as many.
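
The logical core counts quoted above follow from each node's simultaneous multithreading (SMT) factor: 2 threads per Intel core and 4 threads per POWER7+ core, with a number of logical cores held back from user workloads (4 on Intel and 12 on Power, inferred from the 32/28 and 128/116 figures above). The Python sketch below is illustrative only:

```python
# Illustrative calculation of logical cores per compute node, using the
# figures from the W1500 and W1700 compute node descriptions above.

def logical_cores(physical_cores: int, smt: int, reserved: int) -> dict:
    """SMT multiplies physical cores into logical cores; some are held back."""
    logical = physical_cores * smt
    return {"logical": logical, "available_for_workloads": logical - reserved}

# Intel x240 node: 16 physical cores, 2 threads per core, 4 logical cores held back
print(logical_cores(16, smt=2, reserved=4))    # {'logical': 32, 'available_for_workloads': 28}

# Power p460 node: 32 physical cores, 4 threads per core, 12 logical cores held back
print(logical_cores(32, smt=4, reserved=12))   # {'logical': 128, 'available_for_workloads': 116}
```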

Each compute node has access to system resources shared by all compute nodes: storage and networking.

Shared resource: Storage

A PureApplication System tall rack provides 6.4 TB of solid-state drive (SSD) storage and 48 TB of hard disk drive (HDD) storage; of that, 4.8 TB and 43.2 TB, respectively, are usable (a worked calculation follows the list below):

  • This storage is housed in a cluster of two IBM Storwize V7000 storage units. Each unit contains a controller node (also known as an enclosure) paired with an expansion node.
  • Each node contains two node canisters, configured as active/standby. The active canister controls access to the node's storage.
  • The disks in the four nodes combined are 16 * 400 GB 2.5" SSD and 80 * 600 GB 2.5" HDD.
  • The controllers include IBM System Storage® Easy Tier® storage management system software.
  • The storage is organized into redundant array of independent disks (RAID) 5 arrays for redundancy. Each storage unit contains 40 HDDs and 8 SSDs.
    • Of the 40 HDDs, one is set aside as a hot-spare, with the 39 remaining organized into three 13-disk arrays with stripes of 12 data segments and 1 parity segment.
    • The 8 SSDs are organized as a hot-spare and a 7-disk array with stripes of 6 data segments and 1 parity segment.
  • The compute nodes access the storage as a SAN via 2-port 8 Gb fiber channel adapters.
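
As a rough check of the usable capacity figures quoted at the start of this section, the sketch below reproduces the RAID 5 arithmetic: only the data segments of each array contribute to usable space, while hot spares and parity segments do not. The sketch is illustrative only and uses decimal terabytes, matching the article's figures:

```python
# Illustrative only: how the RAID 5 layout described above yields the
# usable capacities quoted for a Large Rack (two V7000 storage units).

def usable_tb(units, arrays_per_unit, data_disks_per_array, disk_gb):
    """Usable capacity: only the data segments of each RAID 5 array count."""
    return units * arrays_per_unit * data_disks_per_array * disk_gb / 1000

# HDD: per unit, 40 disks = 1 hot spare + 3 arrays of (12 data + 1 parity) x 600 GB
print(usable_tb(units=2, arrays_per_unit=3, data_disks_per_array=12, disk_gb=600))  # 43.2
# SSD: per unit, 8 disks = 1 hot spare + 1 array of (6 data + 1 parity) x 400 GB
print(usable_tb(units=2, arrays_per_unit=1, data_disks_per_array=6, disk_gb=400))   # 4.8
```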

Shared resource: Networking

The PureApplication System's internal physical network is accessed via the two top of rack switches (ToRs), two IBM System Networking RackSwitch G8264 64-port 10 Gb Ethernet switches, with a maximum bandwidth of 320 Gbps between the switches and the external network. Their configuration is shown on the Customer Network Configuration page (go to System Console > System > Customer Network Configuration). Here is how the ports are used (summarized in a short sketch after this list):

  • Ports 41-56 (16 ports total) on each switch are open for connecting to the data center network:
    • Each port is 10/1 Gb Ethernet. The built-in connector type is copper, but each port can also be wired with a connector for fiber optic or direct attach cable (DAC).
    • Pairs of ports in the two switches should be link aggregated for high availability.
  • Port 63 (on either switch) connects the service laptop, which IBM uses to bootstrap and administer the system.
  • Port 64 (link aggregated) is the management LAN port.
  • The other ports on the system switches provide Ethernet connectivity between the three chassis' network switches for the application and management networks and also connect the two ToR switches to each other.
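
The port assignments above can be captured in a small lookup table. The sketch below is illustrative only and simply restates the list:

```python
# Summary of the top of rack (ToR) switch port usage described above,
# expressed as a simple lookup. Illustrative only.

TOR_PORT_USAGE = {
    range(41, 57): "Data center uplinks (10/1 Gb, link-aggregate pairs across the two switches)",
    range(63, 64): "Service laptop (IBM bootstrap and administration)",
    range(64, 65): "Management LAN (link aggregated across both switches)",
}

def describe_port(port: int) -> str:
    for ports, purpose in TOR_PORT_USAGE.items():
        if port in ports:
            return purpose
    return "Inter-chassis / inter-switch connectivity (application and management networks)"

print(describe_port(42))   # Data center uplinks ...
print(describe_port(10))   # Inter-chassis / inter-switch connectivity ...
```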

Management nodes

The PureSystems Manager (PSM) not only hosts the workload deployer that deploys patterns, but also hosts the administration services for the system. These services are accessible through three interfaces:

  • Console: The integrated console, a web GUI.
  • REST API: The representational state transfer application programming interface.
  • CLI: The command-line interface.

These interfaces access the PSM via its IP address, the floating management IP address shown on the Customer Network Configuration page (System Console > System > Customer Network Configuration). Pointing a web browser at this IP address opens the system's integrated console.
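
As a rough illustration of scripted access, the Python sketch below calls the PSM over HTTPS with basic authentication. The management IP address, credentials, and resource path are placeholders, not taken from this article; consult the PureApplication System REST API documentation for the actual endpoints and authentication scheme:

```python
# Hypothetical sketch of scripted access to the PSM's REST interface.
# The host address, credentials, and resource path below are placeholders;
# check the PureApplication System REST API documentation for real endpoints.

import requests

PSM_HOST = "https://192.0.2.10"      # floating management IP (example value)
AUTH = ("admin", "password")         # placeholder credentials

# Placeholder path: list hardware resources (the actual path may differ)
resp = requests.get(f"{PSM_HOST}/resources/hardware/computenodes",
                    auth=AUTH, verify=False)  # verify=False only for a self-signed certificate
resp.raise_for_status()
for node in resp.json():
    print(node.get("name"), node.get("status"))
```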

The virtualization management nodes (the Virtualization System Manager, or VSM, on the W1500 and the PureFlex System Manager, or FSM, on the W1700) manage the hypervisors. Hypervisor management on the W1500 and W1700 is functionally equivalent, but it works somewhat differently on the two models. The two models' different chip sets, Intel® vs. Power, run different hypervisor software: VMware ESXi vs. IBM PowerVM. The VSM and FSM run on the same hardware, but run different hypervisor management software: VMware vCenter Server and PowerVM VMControl, respectively.

Table 2 summarizes the differences in the two types of virtualization management nodes.

Table 2. Virtualization management node comparison
|  | W1500 (Intel) | W1700 (Power) |
| --- | --- | --- |
| Processor | Intel Xeon | POWER7+ |
| Hypervisor software | VMware vSphere Hypervisor (ESXi) | IBM PowerVM |
| Virtualization management node | Virtualization System Manager (VSM) | PureFlex System Manager (FSM) |
| Hypervisor management software | VMware vCenter Server | PowerVM VMControl |

Despite these differences, the PureSystems Manager (PSM) uses this hypervisor management software in the same way in both models.

Management LAN port

The management LAN port enables administrators to connect to PureSystems Manager (PSM), including the integrated console. The Customer Network Configuration page specifies the management port, which is always port 64 (link aggregated) on the top of rack switches. That management port is a member of the customer management network, which is listed in Table 3. The top of rack switch is configured with the VLAN ID for the customer management network in the VLAN field of the Aggregate Port 64 configuration.

Table 3. Customer management network
| Name | Network name | VLAN ID |
| --- | --- | --- |
| Customer management | CUSTMGMT | Customer specified |

This customer management network can use any available VLAN ID as specified by the network administrator. The VLAN needs to be defined on the external network to enable administrators to access the PSM.

Management networks

The system requires three VLANs that it uses internally to manage its components. The Customer Network Configuration page lists these as the Internal Network VLANs. Each of these management VLANs is also listed among the other VLANs on the Virtual Networks page (System Console > Hardware > Virtual Network). Table 4 lists these management VLANs.

Table 4. Management networks
| Name | Network name | VLAN ID |
| --- | --- | --- |
| Mobility | VMOTION | 1358 |
| Console | CONSOLE | 3201 |
| Management | MERION | 4091 |

To make sure these three VLAN IDs remain unique, reserve them on the data center network so that they are not used for any other VLANs. At the very least, external VLANs with these IDs will not be able to connect to the system, because the top of rack switches block their traffic.

In addition to these system-wide management networks, each cloud group (a virtualization feature of PureApplication System) requires its own management VLAN. Those VLAN IDs should also be reserved, or at least will be blocked by the top of rack switches.
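
A simple way to avoid such collisions is to validate planned data center VLAN IDs against the reserved set before configuring them. The sketch below is an illustrative helper, not part of the product; the internal VLAN IDs come from Table 4, and any cloud group management VLANs are supplied by the administrator:

```python
# Illustrative helper: check that VLAN IDs planned for the data center network
# do not collide with the system's internal management VLANs (Table 4) or with
# the per-cloud-group management VLANs chosen by the administrator.

INTERNAL_MGMT_VLANS = {1358, 3201, 4091}   # VMOTION, CONSOLE, MERION

def check_vlans(proposed, cloud_group_vlans=()):
    reserved = INTERNAL_MGMT_VLANS | set(cloud_group_vlans)
    conflicts = sorted(set(proposed) & reserved)
    if conflicts:
        raise ValueError(f"VLAN IDs {conflicts} are reserved by PureApplication System")
    return True

check_vlans([100, 200, 300])     # OK
# check_vlans([100, 3201])       # raises: 3201 is the internal CONSOLE VLAN
```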

Application networks

The Customer Network Configuration page also enables the administrator to define the VLANs that the customer workloads will use to intercommunicate, and to associate each application VLAN with ports or link aggregation in the top of rack switches. When you add or remove VLANs, the system takes a few minutes to reconfigure its top of rack and chassis network switches before it recognizes the changes.

These application VLANs must be defined on the network to enable communication between parts of the application that are not running on the system (such as the client GUIs and enterprise databases) and the parts running as workloads on the system.


Conclusion

This article reviewed the hardware included in PureApplication System. It explained the main hardware components, described their details and relationships, and showed how to find them in the integrated console, giving you a better understanding of the hardware in your own PureApplication System.

Acknowledgements

The author would like to thank the following IBMers for their help with this article: Hendrik van Run, Jose Altuve, Jim Robbins, Ajay Apte, and James Kochuba.
