Preview: IBM System p 575 IBM POWER6 supercomputing node
IBM United States Hardware Announcement 107-675
November 6, 2007
At a glance
IBM plans to offer the System p 575 with:
- 4.7 GHz dual-core IBM POWER6 processors in a 32-core configuration
- 2U, 24-inch rack-mounted, water-cooled node design
- 32 MB of Level 3 cache and 8 MB of Level 2 cache per microprocessor dual-core chip
- Up to 256 GB of DDR2 memory per node
- Up to 5093 GB of internal disk storage
- Up to 18 hot-swap disk bays and 22 hot-swap PCI-X slots
- Up to four hot-swap PCI-E slots
- Redundant rack power subsystem
- Dynamic logical partitioning (DLPAR)
- Optional Advanced POWER Virtualization
In the System p 575 (9125-F2A), IBM plans to deliver a 32-core, 4.7 GHz POWER6 high-bandwidth supercomputing node, ideal for many high-performance computing applications.
The p575 will be packaged in a super-dense 2U form factor with up to 14 nodes installed in a 42U-tall, 24-inch water-cooled rack. Multiple racks of p575 nodes can be combined to provide a broad range of powerful cluster solutions.
The symmetric multiprocessor (SMP) node is planned to use innovative, 64-bit, dual-core POWER6 microprocessors in a 4.7 GHz, 32-core configuration. Each microprocessor dual-core chip is planned to be supported by 32 MB of Level 3 cache, 8 MB of Level 2 cache, and up to four DDR2 memory DIMMs. Each 32-core node will include 64 slots for memory DIMMs, with memory sizes offered from 4 GB up to 256 GB. With the optional I/O Assembly with PCI and I/O drawer attachment capability, along with an I/O drawer, up to 24 PCI cards and up to 18 disk drives per node will be available.
Other planned features include:
- Two integrated SAS controllers
- Integrated service processor
- Four 10/100/1000 Ethernet ports
- Optional Dual 10 Gb Optical Ethernet
- Optional Advanced POWER Virtualization
- Optional dual 2-port 4x Host Channel Adapter
The p575 is planned to support AIX® and Linux operating systems. These operating systems are planned to run simultaneously in different partitions within the p575 node.
Previews provide insight into IBM plans and direction. Availability, prices, ordering information, and terms and conditions will be provided when the product is announced.
All statements regarding IBM's plans, directions, and intent are subject to change or withdrawal without notice, and represent goals and objectives only.
The p575 is planned to be characterized by innovative, elegant conceptual design and packaging. Mounted in a sleek 2U enclosure, the modular p575 is planned to allow users to deploy up to 14 water-cooled nodes in a single, 24-inch system frame.
The heart of the supercomputing node is planned to contain 32 POWER6 processors on 16 dual-core microprocessor chips. Each microprocessor dual-core chip is planned to be supported by 32 MB of Level 3 cache, 8 MB of Level 2 cache, and up to four DDR2 memory DIMMs. Each 32-core node will include 64 slots for memory.
To increase the rack density of the p575 supercomputing node, the microprocessors are planned to be cooled with an innovative modular water cooling design and distribution system. Included in the system side water cooling loop is an integrated rear door heat exchanger that is planned to significantly reduce the heat load to air from the system.
At the back of the node enclosure, selectable I/O assemblies are planned to deliver two small form factor disk bays, four 10/100/1000 integrated Ethernet ports, and optional 10 Gb optical Ethernet and optional 12X ports for attaching an I/O expansion drawer. Up to two of the optional dual 2-port 4x Host Channel Adapters are planned for the high-performance interconnect.
The p575 is planned to be capable of supporting AIX and Linux operating systems. These operating systems are planned to run simultaneously in different partitions within the p575 node.
The p575 is planned to be positioned as a substantially enhanced POWER6 follow-on to the p5-575 POWER5+ cluster node. Like the p5-575, the p575 is positioned as an extremely effective solution to the requirements of the most demanding, memory bandwidth-intensive HPC applications.
Planned hardware requirements
Minimum system configuration: The planned minimum configuration for a System p 575 (9125-F2A) includes the following items (for the first node in a rack). Features identified with an "*" are planned to allow alternate features to be selected for minimum configuration requirements.
- 1X System p 575 (9125-F2A)
- 1X 0/32-Core 4.7 GHz POWER6 CPU (#7298)
- 32X One CPU activation for #7298 (#7299)
- 1X I/O Assembly without PCI capability (#6319)*
- 1X Power and Ethernet cables EIA 05 (#6338)*
- 1X 0/4 GB DDR2 Memory (4 x 1.0 GB) DIMMs (#5693)*
- 2X Activation of 1 GB DDR2 POWER6 Memory (#5680)*
- 2X 73 GB 10K RPM SAS SFF Disk Drive (#1881)*
- 1X Language specify (#93XX)*
- 1X System Rack with base power (#5770)
- 1X Rack Content Specify 9125-F2A (#0271)
- 1X Water cooling option (#6872)
- 1X Slim Doors with integrated rear door heat exchanger (#6871)*
- 1X Ethernet Cable, 6M, HMC to System (#7801)*
- 2X Line Cord, 4AWG, 14 ft, 100A Plug (#8696)*
Planned software requirements
One or more of the following operating systems:
Some of the following features are planned to be announced with the initial p575 product announcement; others may be announced at a later time. They are presented here to give a better understanding of the previewed product.
System p 575 (9125-F2A): The p575 is planned to consist of the following major components:
- A 2U-tall central electronics complex (CEC) housing the system backplane, power supply, cooling components (blowers and water-cooled coldplates and manifolds), and system electronic components. A 0/32-core POWER6 processor with 32 processor activations and 64 DDR2 memory DIMM slots are planned to be on the system backplane.
- Selectable I/O assemblies: One assembly is planned to have PCI and I/O drawer attachment capability (#6399), and one will not have PCI capability (#6319). Both I/O assemblies are planned to have four 10/100/1000 Ethernet ports per node and two small form factor disk bays.
- The I/O assembly with PCI capability is planned to support up to two I/O riser features. The I/O riser features have either PCI-E (8x and 16x) slots, or have one PCI-X slot and one PCI-E-16x slot.
- Up to one 12X attached I/O drawer containing 20 PCI-X slots and 16 hot-swap disk bays, when connected to the I/O assembly (#6399) is planned. A p575 may be connected to one half of an I/O drawer in dual-loop mode. The second half of an I/O drawer may be connected to another p575 node. Attached I/O drawers must be in the same rack as the node.
- A planned 42U-tall, 24-inch System Rack (#5770) ordered on the first p575 (9125-F2A) node in the system.
- A planned redundant power subsystem housed in the top 10U of the rack.
Planned CEC: The p575 CEC is planned to be a 2U-tall, water-cooled, 24-inch rack-mounted device. It will house the system processors, memory, system support processor, power supply, blowers, water cooling hardware, and selectable I/O assemblies with disk bays and (HMC) connectivity.
- The p575 is planned to use DDR2 memory DIMMs. All memory must be fully activated with the corresponding quantity of 1 GB memory activation features.
- Supported DDR2 memory configurations on the 575 are as follows:
Feature number   Description                Quantity   Total memory
5693             0/4 GB (4 x 1 GB) DIMMs    1          4 GB
5693             0/4 GB (4 x 1 GB) DIMMs    8          32 GB
5693             0/4 GB (4 x 1 GB) DIMMs    16         64 GB
5694             0/8 GB (4 x 2 GB) DIMMs    8          64 GB
5694             0/8 GB (4 x 2 GB) DIMMs    16         128 GB
5698             0/16 GB (4 x 4 GB) DIMMs   8          128 GB
5698             0/16 GB (4 x 4 GB) DIMMs   16         256 GB
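The memory configurations above follow directly from the DIMM sizes and feature quantities. As a minimal sketch (the feature numbers and per-feature capacities are taken from the table above; the helper function itself is hypothetical, not an IBM tool):

```python
# Sketch of the planned p575 memory arithmetic: each memory feature
# supplies 4 DIMMs, and a node has 64 DIMM slots (16 features maximum).
# Feature capacities (GB per feature) are from the configuration table.

DIMM_FEATURES_GB = {
    "5693": 4,   # 0/4 GB feature: 4 x 1 GB DIMMs
    "5694": 8,   # 0/8 GB feature: 4 x 2 GB DIMMs
    "5698": 16,  # 0/16 GB feature: 4 x 4 GB DIMMs
}

DIMM_SLOTS_PER_NODE = 64
DIMMS_PER_FEATURE = 4

def total_memory_gb(feature: str, quantity: int) -> int:
    """Total installed memory for `quantity` of one DIMM feature."""
    if quantity * DIMMS_PER_FEATURE > DIMM_SLOTS_PER_NODE:
        raise ValueError("configuration exceeds the 64 DIMM slots per node")
    return DIMM_FEATURES_GB[feature] * quantity

print(total_memory_gb("5693", 1))   # 4 (planned minimum)
print(total_memory_gb("5698", 16))  # 256 (planned maximum)
```

Note that all installed memory must also be fully activated with the corresponding quantity of 1 GB memory activation features, which this sketch does not model.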
Planned racks, power, and cooling
- The System Rack with base power (#5770) is planned to be a 24-inch rack with an integrated power subsystem to support the p575 nodes. It will provide 42U of rack space and will house the nodes, the water cooling option, I/O drawers, and the power subsystem.
- The water cooling option (#6872) is planned to include the modular water units, system side supply and return water manifolds, and associated hoses.
- The System Rack with base power (#5770) must have door assemblies. Door kits containing front and rear doors will be available in either slim line or acoustic styles. The door kits include a rear door with an integrated rear door heat exchanger, which is on the system side cooling loop.
- The Slim Line Door Kit will provide a minimized footprint for use where conservation of space is desired.
- The Acoustic Door Kit will provide additional acoustic dampening for use where a quieter environment is desired.
- The height and weight of the System Rack with base power (#5770) may require special handling when shipping by air, when moving under a low doorway, or when using certain elevators. The Compact Handling Option (#7960), Shipping Depth Reduction Option (#7964), and Shipping Weight Reduction Option (#6850) are planned to be available if deemed required during the Systems Assurance Review.
- The System Rack with base power (#5770) is planned to utilize redundant power throughout its design. It will implement redundant bulk power assemblies, Bulk Power Regulators, power controllers, Power Distribution Assemblies, and associated cabling. The power subsystem will provide redundant 350 V dc power to the 575 nodes and I/O drawers. These bulk power assemblies are mounted in front and rear positions and occupy the top 10U of the rack. The feature 5770 includes the system rack, the two bulk power controller assemblies, the first two bulk power regulators, and the bulk power Ethernet hubs.
- Power and Ethernet cable features will be used to connect the dc power converters to the bulk power assembly and to the Ethernet hubs in the System Rack. These cable groups contain two UPICs (universal power input cables) and two Ethernet cables.
- The initial order racking sequence is planned to have the water cooling option (#6872) in the bottom 4U, followed by any I/O drawer features in the configuration, with the p575 nodes placed above any I/O drawers.
- Bulk Power Regulators (#6333) are planned to interface to the bulk power assemblies to help ensure proper power is supplied to the system components. Bulk Power Regulators are always installed in pairs in the front and rear bulk power assemblies to provide redundancy. The number of Bulk Power Regulators required is configuration-dependent based on the number of p575 nodes and I/O drawers installed. Two bulk power regulators are included in the System Rack with base power (#5770). Additional bulk power regulators are ordered as feature 6333.
- Up to four line cords are planned to be required per System Rack with base power (#5770), and these line cords must be identical features.
- The p575 is planned to be capable of operating voltages (3-phase V ac at 50/60 Hz) of 200 to 240 V, 380 to 415 V, or 480 V.
- Up to four optional Bulk Power Distribution Assemblies (#6328) are planned to be required for system racks with a maximum configuration.
Planned logical partitioning
- Logical partitioning (LPAR) is planned to allow the p575 node resources to be allocated and for multiple instances of the supported operating systems to be run simultaneously on a single node.
- LPAR allocation, monitoring, and control is planned to be provided by the HMC.
- Each LPAR is planned to function under its own instance of the operating system.
Planned Optional Advanced POWER Virtualization and Partition Load Manager
- Advanced POWER Virtualization allows partitions to be created that are in units of less than one CPU (sub-CPU LPARs) and allows the same system I/O to be virtually allocated to these partitions.
- With Advanced POWER Virtualization, the processors on the system can be partitioned into as many as 10 LPARs per processor.
- Optional Advanced POWER Virtualization includes Partition Load Manager, which will provide cross-partition workload management across the system LPARs.
- An encrypted key is planned to be supplied to the customer and installed on the system, authorizing the partitioning at the subprocessor level.
- Using Advanced POWER Virtualization, the p575 is planned to be divided into as many as 320 LPARs. System resources can be dedicated to each LPAR.
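The 320-LPAR figure follows from the per-processor limit stated above. A minimal sketch of that arithmetic (variable names are illustrative, not IBM terminology):

```python
# Sketch of the planned p575 partitioning limits: Advanced POWER
# Virtualization allows up to 10 sub-CPU LPARs per processor, so a
# 32-core node can be divided into as many as 320 LPARs, each with a
# minimum of one tenth of a processor.

CORES_PER_NODE = 32
MAX_LPARS_PER_CORE = 10

max_lpars = CORES_PER_NODE * MAX_LPARS_PER_CORE
min_processing_units = 1 / MAX_LPARS_PER_CORE

print(max_lpars)             # 320
print(min_processing_units)  # 0.1
```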
Planned system control
- Each p575 node must be connected to an HMC for system control, LPAR, and service functions. The HMC is planned to be capable of supporting multiple POWER6 nodes.
- Each p575 node is planned to be connected to two HMCs for redundancy, if desired.
- Each p575 connects to the integrated Ethernet hubs within the System Rack (#5770). The hubs are planned to be connected to the HMC through two Ethernet Cable, HMC Attach features (#7801 or #7802).
Planned I/O drawers
- The p575 is planned to utilize optional 4U-tall remote I/O drawers (#5798) for additional directly attached PCI-X adapters and SCSI disk capabilities.
- Each I/O drawer is planned to be divided into halves. Each half contains 10 blind-swap PCI-X slots and two Ultra3 SCSI 4-pack backplanes for a total of 20 PCI slots and up to 16 hot-swap disk bays per drawer.
- A maximum of one I/O drawer can be connected to a p575 node.
- One single-wide, blind-swap cassette (equivalent to those in #4599) will be provided in each PCI-X slot of the I/O drawer. Cassettes not containing an adapter will be shipped with a filler card installed to ensure proper environmental characteristics for the drawer. If additional single-wide, blind-swap cassettes are needed, feature number 4599 should be ordered.
- Each I/O drawer planar is planned to provide 10 PCI-X slots, each capable of supporting 64-bit or 32-bit, 3.3 V signaling PCI or PCI-X adapters operating at speeds up to 133 MHz.
- Each I/O drawer planar incorporates two integrated Ultra3 SCSI adapters for direct attachment of the two 4-pack hot-swap backplanes in that half of the drawer. These adapters do not support external SCSI device attachments.
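The per-drawer totals follow from the half-drawer layout described above. As a minimal sketch of that arithmetic (constant names are illustrative):

```python
# Sketch of the planned I/O drawer capacity: each of the drawer's two
# halves has 10 blind-swap PCI-X slots and two Ultra3 SCSI 4-pack
# backplanes, giving 20 PCI-X slots and 16 hot-swap disk bays per drawer.

HALVES_PER_DRAWER = 2
PCI_X_SLOTS_PER_HALF = 10
SCSI_4PACKS_PER_HALF = 2
BAYS_PER_4PACK = 4

slots_per_drawer = HALVES_PER_DRAWER * PCI_X_SLOTS_PER_HALF
bays_per_drawer = HALVES_PER_DRAWER * SCSI_4PACKS_PER_HALF * BAYS_PER_4PACK

print(slots_per_drawer, bays_per_drawer)  # 20 16
```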
Planned I/O drawer attachment
- A system I/O drawer will always be connected to the model 575 CEC via the 575 I/O assembly (#6399). Drawer connections are always made in loops to help protect against a single point-of-failure resulting from an open, missing, or disconnected cable. Systems with non-looped configurations could experience degraded performance and serviceability.
- I/O drawers are planned to be connected to the CEC in either single-loop or dual-loop mode. Dual-loop mode is recommended whenever possible as it will provide the maximum bandwidth between the I/O drawer and the CEC.
- Single-loop mode connects an entire I/O drawer to the CEC via one 12X loop (two ports). The two I/O planars in the I/O drawer are connected via a short 12X cable (#1829). Single-loop connection requires one loop (two ports) per I/O drawer.
- Dual-loop mode connects a single I/O planar in the drawer to the CEC.
Planned disks, boot devices, and media devices: A minimum of two identical internal small form factor SAS hard disk drives are planned to be required per p575 node. It is highly recommended that these disks be used as mirrored boot devices. This configuration will provide service personnel the maximum amount of diagnostic information if the system encounters errors in the boot sequence.
The products in this announcement are not available for ordering at this time. General Availability (GA) announcements will be made at a later date.