IBM PureApplication System W1500 v1.0 and W1700 v1.0 (hereafter called PureApplication System) is a cloud computing system in a box: it includes both the hardware and the software to deploy and execute workloads in a cloud, everything you need to add a private cloud environment to an enterprise data center. This article gives an overview of the hardware included in PureApplication System and shows how to use the system console to view the various components.
This article is the first of three articles that explain the hardware and software foundation that PureApplication System provides for hosting application runtime environments:
- Hardware: The article that you are reading now explains the hardware that makes up PureApplication System.
- Virtualized hardware: Best practices using infrastructure as a service in IBM PureApplication System explains how PureApplication System virtualizes its hardware to implement infrastructure as a service (IaaS).
- Runtime environments: Managing application runtime environments in IBM PureApplication System explains how the virtualized hardware in PureApplication System is used to implement application runtime environments that workloads are deployed into.
Each article builds on its predecessor to explain this foundation fully.
There are currently three classes of PureApplication System:
- W1500-32 and -64: The W1500 in a short rack with either 32 or 64 Intel® CPU cores.
- W1500-96 thru -608: The W1500 in a tall rack with 96, 192, 384, or 608 Intel CPU cores.
- W1700-96 thru -608: The W1700 in a tall rack with 96, 192, 384, or 608 Power® CPU cores.
Table 1 shows a quick comparison of the hardware in those classes.
Table 1. PureApplication System classes hardware
| | W1500-32 and -64 | W1500-96 thru -608 | W1700-96 thru -608 |
| --- | --- | --- | --- |
| Rack | 25U, 1.3 m, 19" | 42U, 2.0 m, 19" | 42U, 2.0 m, 19" |
| Node chassis | 1 Flex chassis | 3 Flex chassis | 3 Flex chassis |
| Processor | Intel Xeon E5-2670, 8 core | Intel Xeon E5-2670, 8 core | POWER7+, 8 core |
| Compute nodes | 2 or 4 | 6, 12, 24, or 38 | 3, 6, 12, or 19 |
| CPU cores | 32 or 64 | 96, 192, 384, or 608 | 96, 192, 384, or 608 |
| Memory | 0.5 or 1.0 TB RAM | 1.5, 3.1, 6.1, or 9.7 TB RAM | 1.5, 3.1, 6.1, or 9.7 TB RAM |
| Storage nodes | 1 V7000, 1 V7000 expansion | 2 V7000, 2 V7000 expansions | 2 V7000, 2 V7000 expansions |
| Storage drives | 6 × 400 GB SSDs, 40 × 600 GB HDDs | 16 × 400 GB SSDs, 80 × 600 GB HDDs | 16 × 400 GB SSDs, 80 × 600 GB HDDs |
| Storage capacity | 2.4 TB SSD, 24.0 TB HDD | 6.4 TB SSD, 48.0 TB HDD | 6.4 TB SSD, 48.0 TB HDD |
| Management nodes | 2 PureSystems Managers (PSMs), 2 Virtualization System Managers (VSMs) | 2 PSMs, 2 VSMs | 2 PSMs, 2 PureFlex System Managers (FSMs) |
| Network | 2 IBM RackSwitch 64-port 10 Gb Ethernet switches | 2 IBM RackSwitch 64-port 10 Gb Ethernet switches | 2 IBM RackSwitch 64-port 10 Gb Ethernet switches |
| Power | 4 Power Distribution Units (PDUs) | 4 PDUs | 4 PDUs |
Each model is upgradeable within its class to a maximum of W1500-64, W1500-608, and W1700-608, respectively.
This section describes the hardware in the overall PureApplication System rack, in a compute node, and the system's shared resources.
You can see a live view of the hardware in your particular PureApplication System in the administration console. To do so, open the Infrastructure Map by selecting System Console > Hardware > Infrastructure Map, as shown in Figure 1. To access the Hardware menu, your user needs to be granted the Hardware administration role, as described in Managing administrative access in IBM PureApplication System.
Figure 1. System hardware menu
The Infrastructure Map can be viewed as an interactive picture, as shown in Figure 2, and as a hierarchical tree of components. For more information, see the Viewing the hardware Infrastructure Map and Infrastructure Map data topics in the PureApplication System Information Center.
Figure 2. W1500-96 infrastructure map - graphical view
Similar to the Infrastructure Map graphical view, Figure 3 illustrates the layout of the hardware components in a W1500-96 thru -608 system rack.
Figure 3. IBM PureApplication System W1500-608 hardware
As shown in the Infrastructure Map and Figure 3, a PureApplication System is a rack (a 42U cabinet that is 2.015 m tall by 644 mm wide by 1100 mm deep, weighing 1088 kg fully loaded) that contains these main components from the top down:
- Top of rack switches: This is a pair of IBM System Networking RackSwitch™ G8264 64-port 10 Gb Ethernet switches.
- To view these, go to System Console > Hardware > Network Devices. The network devices page also lists each chassis' network and SAN switches.
- To see details about the network configuration, go to System Console > System > Customer Network Configuration.
- Storage nodes: This is a pair of IBM Storwize® V7000 storage units, each of which is a controller node paired with an expansion node, clustered and managed as a single SAN.
- To view these, go to System Console > Hardware > Storage Devices.
- Flex chassis: The system contains three IBM Flex System™ Enterprise Chassis Type 7893 chassis, each 10U high (numbered 3, 2, and 1, with 1 at the bottom). A chassis is like a docking station for compute nodes. Compute nodes fit into the chassis like drawers in a filing cabinet. When a node is inserted into a bay, corresponding connectors in the node and the bay snap together. This design facilitates replacing a compute node while the system is running.
- To view these, go to System Console > Hardware > Flex Chassis.
- Service laptop: A laptop computer connected to the system is stored in a 1U drawer in the rack between chassis 2 and 3. IBM uses it to administer the system.
- Power distribution units (PDUs): The rack contains four PDUs, each of which plugs into external power separately. They in turn distribute power to the chassis' power modules, the switches, and the storage nodes.
Each Flex chassis contains several components:
- Compute nodes: Each chassis contains fourteen compute node bays, arranged in seven rows and two columns. Each bay can hold one Intel compute node. In a W1700, two side-by-side bays hold a single Power compute node. See Figure 5.
- To view the system's compute nodes, go to System Console > Hardware > Compute Nodes.
- Management nodes: Chassis 1 and Chassis 2 each use two bays to host the management nodes:
- Virtualization System Manager (VSM): Hosted in Node bay 1, it manages the compute nodes' hypervisors. In a W1700, this node is instead a PureFlex System Manager (FSM). See Figure 5 and the Management nodes section.
- PureSystems Manager (PSM): Hosted in Node bay 2, it hosts IBM Workload Deployer.
- To view the system's management nodes, go to System Console > Hardware > Management Nodes.
- Network switches: Each chassis contains a pair of 66-port IBM Flex System Fabric EN4093 10Gb Scalable Switch Ethernet switches to connect its compute nodes. The chassis switches connect to the system switches via a 40 Gbps Ethernet trunk (four 10 Gb Ethernet cables).
- SAN switches: Each chassis contains a pair of 48-port IBM Flex System FC5022 16Gb SAN Scalable Switch Fibre Channel switches to connect its compute nodes to the system's shared storage.
- To view the chassis' network and SAN switches, go to System Console > Hardware > Network Devices.
- Power modules: Each chassis contains six power supplies, arranged three per side. Power is supplied redundantly so that the chassis and its compute nodes keep working even if one power module fails.
- Chassis cooling devices: These are ten fans to control the hardware's temperature.
The hardware in the racks for the W1500-32 and -64 and the W1700-96 thru -608 is very similar to that in the W1500-96 thru -608. Figure 4 shows the layout of the hardware components in a W1500-32 and -64 system rack.
Figure 4. IBM PureApplication System W1500-64 hardware
This hardware is much like that in a W1500-96 thru -608, with some differences:
- A shorter cabinet (25U vs. 42U)
- One chassis instead of three
- A maximum of four compute nodes
- One storage unit instead of two
The compute nodes in all of the W1500s are the same, and the storage and network work the same.
Figure 5 illustrates the layout of the hardware components in a W1700-96 thru -608 system rack.
Figure 5. IBM PureApplication System W1700-608 hardware
This hardware is much like that in a W1500-96 thru -608; the main difference is that a W1700 contains Power compute nodes instead of Intel compute nodes. Compared to an Intel compute node, a Power compute node contains twice as many cores and twice as much memory. Its case is twice as wide, so it fills two horizontal bays in the chassis, and each chassis therefore holds half as many Power compute nodes. Other than the compute nodes, the storage and network are the same.
Another difference is that the virtualization management node is a PureFlex System Manager (FSM), not a Virtualization System Manager (VSM). See the Management nodes section.
A general theme within the system hardware is that components are redundant so that no single point of failure exists. The system not only contains multiple compute nodes, but it also contains two pairs of management nodes, two system network switches, two storage units, and four PDUs. It contains three Flex chassis that each includes a pair of network switches, a pair of SAN switches, and six power supplies. The network and SAN adapters in the compute nodes have multiple ports to increase bandwidth and resiliency.
The hardware also isolates the management of the system from the user workloads. The management nodes, the PureSystems™ Manager and the Virtualization System Manager, are hosted in their own compute nodes. This isolates them from user workloads so that the system management functions run on their own dedicated hardware. It also removes most management overhead from the standard compute nodes so that their resources are dedicated to user workloads. For resilience against failure, there are two pairs of management nodes, with one node in each pair on standby as a backup for the other.
The PureSystems Manager (PSM) not only hosts the workload deployer that deploys patterns, but also hosts the administration services for the system. These services are accessible through three interfaces:
- Console: The administration console, a web GUI.
- REST API: The representational state transfer application programming interface.
- CLI: The command-line interface.
These interfaces access the PSM via its IP address, the floating management IP address shown on the Customer Network Configuration page (System Console > System > Customer Network Configuration). Pointing a web browser at this IP address opens the system's administration console.
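For example, the REST API can be scripted against the floating management IP address. The following Python sketch is illustrative only, not the documented API: the resource path /resources/compute_nodes, the JSON field names, and the credentials are assumptions; consult the REST API reference in the PureApplication System Information Center for the actual endpoints.

```python
# Minimal sketch: query the PSM REST API for the system's compute nodes.
# The resource path, JSON field names, and credentials below are assumptions
# for illustration; see the REST API reference for the real endpoints.
import requests

PSM_IP = "198.51.100.10"  # floating management IP address (example value)

session = requests.Session()
session.auth = ("admin", "password")  # an account with the Hardware administration role
session.verify = False                # management consoles often use self-signed certificates

response = session.get(f"https://{PSM_IP}/resources/compute_nodes")
response.raise_for_status()

# Print each compute node's name and status (hypothetical field names).
for node in response.json():
    print(node["name"], node["status"])
```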
Hypervisor management on the W1500 and W1700 is functionally equivalent, but it works somewhat differently on the two models. The two models' different chip sets, Intel® vs. Power, run different hypervisor software: VMware vs. PowerVM. The virtualization management nodes, called the Virtualization System Manager (VSM) and the PureFlex System Manager (FSM), respectively, are similar hardware running different hypervisor management software: VMware vCenter and VMControl, respectively.
Table 2 summarizes the differences in the two types of virtualization management nodes. Despite these differences, the PureSystems Manager (PSM) uses this hypervisor management software in the same way in both models.
Table 2. Hypervisor management comparison
| | W1500 (Intel) | W1700 (Power) |
| --- | --- | --- |
| Hypervisor software | VMware vSphere Hypervisor (ESXi) | IBM PowerVM |
| Virtualization management node | Virtualization System Manager (VSM) | PureFlex System Manager (FSM) |
| Hypervisor management software | VMware vCenter Server | IBM Systems Director running the VMControl plug-in |
A compute node, known more generally as an integrated technology element (ITE), specifically one that has not been specialized as a management node, is a very compact computer. The W1500 system contains Intel compute nodes, specifically the IBM Flex System x240 Compute Node, each of which contains these components:
- CPU: An Intel compute node contains a dual-processor, 16-core chip set. The chips are two 8-core 2.6 GHz Intel® Xeon® E5-2670 115 W processors, for a total of 16 physical cores that the hypervisor uses as 32 logical cores (that is, the hypervisor can run 32 concurrent threads on these 16 cores). Of the 32 logical cores, 28 are available for use by user workloads.
- Memory: An Intel compute node contains 256 GB of RAM (8 × 2 × 16 GB 1333 MHz DDR3 LP RDIMMs, 1.35 V).
- Storage: A compute node's SAN interface card is an IBM Flex System FC3172 2-port 8 Gb FC Adapter Fibre Channel adapter. The node also contains two 250 GB 2.5-inch hard disk drives that are unused and ignored by the system.
- Networking: A compute node's network interface card is a 4-port IBM Flex System CN4054 10Gb Virtual Fabric Adapter Ethernet adapter.
- Housing: An Intel compute node case is half-wide, meaning that each compute node fills a single chassis bay and two can sit side-by-side in adjacent bays (see Figure 1 and Figure 2).
Figure 6 illustrates these components in a compute node.
Figure 6. Compute node components
The W1700 system contains Power compute nodes, specifically the IBM Flex System p460 Compute Node, each of which contains these components:
- CPU: A Power compute node contains a quad-processor, 32-core chip set. The chips are four 8-core 3.61 GHz POWER7+ processors, for a total of 32 physical cores that the hypervisor uses as 128 logical cores (that is, 128 concurrent threads). Of the 128 logical cores, 116 are available for use by user workloads.
- Memory: A Power compute node contains 512 GB of RAM (16 × 2 × 16 GB 1066 MHz DDR3 LP RDIMMs, 1.35 V).
- Storage: A Power compute node uses the same SAN adapter as an Intel compute node but contains two of them: two IBM Flex System FC3172 2-port 8 Gb FC Adapter Fibre Channel adapters. The node also contains two 250 GB 2.5-inch hard disk drives that are unused and ignored by the system.
- Networking: A Power compute node uses a similar network adapter to an Intel compute node's, and again contains two of them: two IBM Flex System EN4054 4-port 10Gb Ethernet Adapters.
- Housing: A Power compute node case is full-wide, meaning it is twice as wide as an Intel compute node. Each Power compute node fills a horizontal pair of chassis bays (see Figure 5).
A W1700 compute node, compared to a W1500 compute node, contains twice as many cores and twice as much memory. Because it is also twice as big, the rack holds half as many.
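As a quick check on Table 1, the per-class core counts follow directly from these per-node figures. A minimal Python sketch that recomputes them:

```python
# Recompute Table 1's per-class CPU core counts from the per-node specs.
INTEL_CORES_PER_NODE = 16  # two 8-core Xeon E5-2670 chips per Intel node
POWER_CORES_PER_NODE = 32  # four 8-core POWER7+ chips per Power node

w1500_nodes = [6, 12, 24, 38]  # compute nodes in W1500-96 thru -608
w1700_nodes = [3, 6, 12, 19]   # compute nodes in W1700-96 thru -608

print([n * INTEL_CORES_PER_NODE for n in w1500_nodes])  # [96, 192, 384, 608]
print([n * POWER_CORES_PER_NODE for n in w1700_nodes])  # [96, 192, 384, 608]
```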
Each compute node has access to system resources shared by all compute nodes: storage and networking.
Storage: A PureApplication System tall rack provides 6.4 TB of solid-state drive (SSD) storage and 48 TB of hard disk drive (HDD) storage; of that, 4.8 TB and 43.2 TB, respectively, are usable (the sketch after this list recomputes these figures):
- This storage is housed in a cluster of two IBM Storwize V7000 storage units. Each unit contains a controller node paired with an expansion node.
- The disks in the four nodes combined are 16 * 400 GB 2.5" SSD and 80 * 600 GB 2.5" HDD.
- The controllers include IBM System Storage® Easy Tier® storage management system software.
- The storage is organized into redundant array of independent disks (RAID) 5 arrays for redundancy. Each storage unit contains 40 HDDs and 8 SSDs.
- Of the 40 HDDs, one is set aside as a hot spare, with the 39 remaining organized into three 13-disk arrays with stripes of 12 data segments and 1 parity segment.
- The 8 SSDs are a hot spare and a 7-disk array with stripes of 6 data segments and 1 parity segment.
- The compute nodes access the storage as a SAN via 2-port 8 Gb Fibre Channel adapters.
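The raw and usable capacities quoted above follow from this RAID 5 layout, as this minimal Python sketch shows:

```python
# Recompute the tall rack's raw and usable storage from the RAID 5 layout.
UNITS = 2                  # two clustered Storwize V7000 units (controller + expansion each)
HDD_GB, SSD_GB = 600, 400  # drive sizes

raw_hdd_tb = UNITS * 40 * HDD_GB / 1000  # 48.0 TB raw HDD
raw_ssd_tb = UNITS * 8 * SSD_GB / 1000   # 6.4 TB raw SSD

# Per unit: 1 HDD hot spare + 3 arrays of 13 disks (12 data + 1 parity each);
#           1 SSD hot spare + 1 array of 7 disks (6 data + 1 parity).
usable_hdd_tb = UNITS * 3 * 12 * HDD_GB / 1000  # 43.2 TB usable HDD
usable_ssd_tb = UNITS * 6 * SSD_GB / 1000       # 4.8 TB usable SSD

print(raw_hdd_tb, raw_ssd_tb, usable_hdd_tb, usable_ssd_tb)
```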
Networking: A PureApplication System includes two IBM System Networking RackSwitch G8264 64-port Ethernet switches, with a maximum bandwidth of 320 Gbps between the switches and the external network (the sketch after this list recomputes that figure). Their configuration is shown on the Customer Network Configuration page (go to System Console > System > Customer Network Configuration). Here is how the ports are used:
- Ports 41-56 (16 ports total) on each switch are open for connecting to the data center network:
- Each port is 10/1 Gb Ethernet. The built-in connector type is copper, but each port can also be wired with a connector for fiber optic or direct attach copper (DAC).
- Pairs of ports in the two switches should be link aggregated for high availability.
- Port 63 (on either switch) is used by the service laptop, which IBM uses to bootstrap and administer the system.
- Port 64 (link aggregated on both switches) is used by the customer to access the PureSystems Manager (PSM) and its administration console.
- The other ports on the system switches provide Ethernet connectivity between the three chassis' switches and also connect the two RackSwitch switches to each other.
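The 320 Gbps maximum external bandwidth quoted above is just the open uplink ports multiplied out, as this small Python sketch shows:

```python
# Recompute the maximum external bandwidth from the open uplink ports.
SWITCHES = 2     # two top-of-rack RackSwitch G8264 switches
OPEN_PORTS = 16  # ports 41-56 on each switch face the data center network
PORT_GBPS = 10   # each port runs 10 Gb Ethernet

print(SWITCHES * OPEN_PORTS * PORT_GBPS, "Gbps")  # 320 Gbps
```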
The system itself requires three VLANs to manage its components. The Customer Network Configuration page (System Console > System > Customer Network Configuration) lists these as the internal network VLANs. Each of these management VLANs is also listed among the other VLANs on the Virtual Networks page (System Console > Hardware > Virtual Network). Table 3 lists these management VLANs.
Table 3. Internal network VLANs
| Name | Network name | VLAN ID |
| --- | --- | --- |
This article reviewed the hardware included in PureApplication System. It explained the main hardware components, described their details and relationships, and showed how to find them in the administration console. With this information, you now have a better understanding of the hardware included in your PureApplication System.
The author would like to thank the following IBMers for their help with this article: Hendrik van Run, Jose Altuve, Jim Robbins, and Ajay Apte.
- Product information:
- IBM PureApplication System
- IBM PureApplication System Version 1.0 Information Center
- IBM PureSystems Centre
- IBM PureSystems resources on developerWorks
- Best practices using infrastructure as a service in IBM PureApplication System
- Managing application runtime environments in IBM PureApplication System
- Managing administrative access in IBM PureApplication System
- Expert integrated systems blog: Consolidate and optimize diverse workloads using IBM PureApplication System
- Expert integrated systems blog: IBM PureApplication System - Backup and high-availability configuration
- Expert integrated systems blog: Compute node diagnostics from the IBM PureApplication System infrastructure map
Bobby Woolf is a Consultant for IBM Software Services for WebSphere (ISSW), focusing on IBM PureApplication System, service-oriented architecture, event-driven architecture, and application integration. He is the author of Exploring IBM SOA Technology and Practice, and a co-author of Enterprise Integration Patterns and The Design Patterns Smalltalk Companion.