Modified on by Gabor_Samu
It's been ages since I've posted to this blog. I've not forgotten about it - I've been figuratively stirring the technical computing goulash pot over on the IBM Systems In the Making blog site.
Having recently moved house, all of the old classic and newer ARM based systems that I've written about previously are still mostly packed away. My hands have been more focused on home improvement than tinkering. As those in HPC circles will know, the annual Supercomputing (SC16) event starts this coming Sunday in Salt Lake City, UT. Interestingly, if memory serves, the last time we were in Salt Lake City, for SC12, I was a newbie at IBM, having come over with the acquisition of Platform Computing.
The HPC landscape has changed quite a bit since then, including the divestiture of the IBM x86 server business to Lenovo and the birth of the OpenPOWER Foundation. The OpenPOWER Foundation has gone from baby steps to sprinting with a huge and diverse group of members from accelerators, interconnects, research organizations and more - all united on a common goal - to drive innovation and change in enterprise computing and HPC via the OpenPOWER platform. It's like somebody has taken a big wooden spoon and stirred the goulash in the pot - because we all know that if things stand still for too long in the pot, it's going to burn.
As I've banged on about in previous blogs, I'm more pleased than ever to see this explosion of diversity in HPC from A(RM), P(OWER) to X(86). When you throw accelerators such as FPGAs, GPUs into the mix, what is needed more than ever to address this diversity is a software defined approach - which hides this complexity from the users and allows them to leverage the power of today's environments.
IBM Spectrum LSF (formerly Platform LSF) has been making this possible for over 20 years. A glance at the OS and platform support list illustrates the breadth and depth of support. Not only does IBM Spectrum LSF make tying together heterogeneous resources easy, its proven technology allows organizations to share resources on a global scale. In fact, the latest IBM Spectrum LSF V10 release from June 2016 contained numerous enhancements, all focused on improving the productivity of HPC users and controlling costs. Read more in this top 10 cool things about IBM Spectrum LSF blog. And looking beyond HPC, the IBM Spectrum Computing family of products provides advanced resource management capabilities for diverse workloads, including Hadoop and Spark.
Yours truly will be in Salt Lake City for SC16. Drop by booth 1018 to talk about how IBM software defined computing can help your organization. IBM will be holding a number of user groups, and seminars covering the broad spectrum of IBM solutions for HPC. And for IBM Spectrum LSF users, we'll be holding our annual user group, where you can hear how your peers are using IBM Spectrum LSF to get an advantage, and learn about the latest developments in IBM Spectrum LSF from our experts.
Come on and stir it up! You'll like it!
As we enter a new year, 2016 seems to have been tarnished in its closing month by events around the world. Far be it from me to talk about world events here; I’d like to focus on the good - at least from my perspective. 2016 was a great year for me. It was the year in which I:
- Moved house
- Upgraded from a late 1980’s to a late 1990’s German station wagon (“estate” for those who speak real English)
- Moved from Blackberry 10 to Android - *blech* - but I’ll admit my HTC 10 is a fantastic piece of hardware
- Decided that I no longer revere Apple products as I once did - before any harsh words, I am writing this on a Macbook Pro Retina…and I have a veritable museum of Apple kit at home
- Learned that you can actually “train” a computer using machine learning frameworks like Caffe and TensorFlow - yes, I’ve been tinkering with Caffe on one of my ARM developer boards
- Stuck with Linux for my work laptop even with the tantalizing choice of a shiny new Macbook with OS X
- Entrusted the security of my home internet to the Turris Omnia - because using a router that hasn’t been patched in years is well - silly, to put it politely
- Finally got myself an OpenPOWER t-shirt at ISC High-Performance - which I wear proudly because OpenPOWER rocks!
- Understood that getting the future generations interested in technology is key - and did my part by giving an intro to High-Performance Computing talk at a local school
- Successfully launched IBM Spectrum LSF 10.1 with the help of my many great peers. And yes, it does run on Linux on ARM v7&v8 and Linux on POWER8 :)
And that’s just what I can think of as I write this blog…so for me, 2016 has an aura rather than a tarnish to it.
So as we enter the year of Canada’s 150th birthday with a full head of steam, I’m looking forward to hitching my wagon to some of the cool things coming up, including:
- Exploring the wonderful national parks of Canada at no charge with my Parks Canada pass
- OpenPOWER and IBM POWER9
- Building up my home ARMy with a pre-ordered Armada 8040 Community Board, which should help to speed up the machine learning I’ve been tinkering with
And that’s just for starters. What’s your plan?
Sifting through boxes of 3.5 inch floppy diskettes - some of questionable provenance - in a dusty basement. Gingerly packing up what I consider to be the holy trinity of Commodore Amiga computers - A1000, A2000, A3000 - all in some state of working condition. Of course, back in the day, the Amiga made it all possible - awesome graphics demos, games, word processing and ray tracing, right up to Amiga Unix (AMIX), which was one of the first ports of SVR4 to the MC68000 series of processors (yes, I do have AMIX installed also).
The frustration of watching the "Death Bed Vigil" movie, in which Dave Haynie of Commodore Amiga fame gives us a tour through Commodore engineering headquarters - and, of course, the fire sale which happened at Commodore Canada on Pharmacy Avenue in Toronto.
Once upon a time, we all carried the respective flags of our favorite platforms - and they were varied. It was this rivalry, I think, which led to the respective communities squeezing tremendous performance out of these systems.
Then it all seemed to change. Suddenly we were all forced to march to the same clock rhythm - and boredom set in. With this course seemingly set in stone, how to escape this Sturm und Drang?
Well, for me this hope appeared in 2013 with the announcement of the OpenPOWER Consortium - an open technical community built around
the IBM POWER architecture to grow solutions to serve the evolving computing needs of today and the future.
Next week the second annual OpenPOWER Summit takes place in San Jose, United States and if the first event was any indication, this should
be a very exciting event. So, Power Up and strap on your accelerators as we're in for a very interesting ride!
I've just returned from International Supercomputing 2014, which took place in Leipzig, Germany. As was the case in 2013, I greatly enjoyed my time at the conference, and the hospitality in Leipzig. It's a wonderful city to visit.
You will have read in my previous blogs about my experiences with ARM based developer systems, and running IBM Platform LSF. For me, ISC 2014 was a very interesting event for one big reason - variety! Variety is the spice of life, as they say. And the variety in this case came from the displays of OpenPOWER Foundation members Mellanox and NVIDIA, as well as servers based on the newly unveiled Applied Micro X-Gene 64-bit ARM processors.
Although small in size, the Tyan POWER8 motherboard with an NVIDIA Tesla K40 installed made a strong statement. OpenPOWER was only founded in 2013, yet we are already seeing the benefits of the foundation - with a varied member base including education, interconnect and accelerator vendors, all with an HPC pedigree. With a rich set of members that keeps growing, these look to be exciting times for the IBM POWER8 processor and the OpenPOWER Foundation.
For those of you who did not attend, the IBM booth had a number of live demos including the IBM Platform Computing Cloud Service, which is built on top of IBM SoftLayer infrastructure. This service can provide both hybrid and stand-alone clouds and is ideally suited for HPC workloads - as it's non-virtualized.
So we say Auf Wiedersehen to Leipzig for now and look forward to the spice that New Orleans will provide this autumn - where there will surely be more exciting things emerging from the OpenPOWER Foundation!
These days we often hear about CPUs based upon ARM cores. They can be found in mobile phones, embedded systems, laptops and even servers. Indeed, projects such as Mont Blanc are investigating the use of ARM based systems for High-Performance Computing.
Back in the late 1980’s, I was a high-school student and a budding computing scientist. In those days, my view of the personal computer market was North American centric - until one day I read about a new desktop computer from the UK known as the Acorn Archimedes. This system was based upon a RISC CPU which was given the name ARM (Acorn RISC Machine). The writeup in the local Toronto Computes! newspaper indicated that Olivetti Canada was bringing the Acorn Archimedes range to North America. As luck would have it, Olivetti was just down the road from me. After a few phone calls, I was invited to their offices for some hands-on time with a top of the line Acorn Archimedes 440. This was the start of my journey with ARM based systems.
The folks at Olivetti were kind enough to let me use the “Archie” over a number of days. During that time, I had a chance to try out a number of different software products, including games and productivity software. Overall, I was greatly impressed by the Archie and its WIMP-interface OS, RISC OS. Luckily, there were a few games to boot, including Zarch.
The only catch for me was the list price of the system. As I recall it was around $2,500 CAD, which for me at the time was prohibitive.
Moving forward to 2014, I’ve recently been tinkering with the ARM-based mini PC UDOO Quad running Debian Wheezy EABI (hard-float). This happens to intersect with another area of interest, Technical Computing.
I’ll share more of my experiences with Udoo Quad in the coming weeks.
People who know me, know that I like to tinker. Whether it’s with cars, computers or other mechanical gizmos, I’ve always enjoyed dismantling and reassembling things. Maintaining classic computers is also a passion, and as you’ve seen in my previous blogs on that topic, I’ve always tried to add an element of High-Performance Computing to the mix. Whether on a classic SPARC based laptop, MIPS smartbook or a modern ARM developer board, there is a sense of achievement in getting such systems installed in 2015 and running a benchmark for example. Even when running a simple home network, in this case with a wild mix of machines, the importance of monitoring is apparent.
For organizations that take the leap and invest in High-Performance Computing infrastructure in support of business needs, monitoring this infrastructure and understanding how it’s being used is of paramount importance. IBM Platform RTM is comprehensive monitoring, reporting and alerting software for HPC. It takes the guesswork out of HPC infrastructure monitoring by aggregating system, workload and license consumption information, all in a single tool.
Whether you’re a system admin or a line of business manager, this Technical Brief provides an in-depth look at the importance of comprehensive HPC infrastructure monitoring - which allows organizations to correlate in a single tool workload, system and license consumption metrics.
OpenPOWER continues to put the power down and accelerate strongly in 2015. Earlier this year, the First Annual OpenPOWER Summit took place, and more recently Cabot Partners published the paper Crossing the Performance Chasm with OpenPOWER, outlining the benefits of OpenPOWER HPC offerings. Reading through that paper, one important point stuck out at me: the considerations when choosing an HPC system. It suggests that rather than using point benchmarks, one must consider the performance of workflows across the HPC Data Life Cycle. This seems a very sensible approach. Would you choose a car strictly on its 0-100 km/h time? Well, when I was 16 years old, probably yes. But what about braking, cornering, economy, safety? You need strong performance in all categories. The OpenPOWER Foundation achieves just this - by bringing together organizations with broad expertise, from accelerators to interconnects, around IBM POWER server technology, which has been made open through the foundation.
IBM Software Defined Infrastructure helps to wield the sword of OpenPOWER for High-Performance Computing workloads. Featuring broad OS/platform support including Linux on POWER (Little Endian), IBM Platform Computing software products provide broad capabilities including application management, infrastructure management, job scheduling as well as monitoring and reporting.
Learn more about the IBM Software Defined Infrastructure for High-Performance Computing on OpenPOWER in this presentation from the OpenPOWER Summit.
Put the POWER down and jump the chasm!
This past June at ISC 2013, the IBM booth featured a live demonstration of IBM Platform HPC V3.2 managing an IBM iDataPlex cluster equipped with Intel Xeon Phi coprocessors.
As part of the demonstration, the potential performance gains of running an application on Intel Xeon Phi coprocessors were shown by running the visually stunning Intel Embree crown rendering on Intel Xeon and Intel Xeon Phi simultaneously.
IBM Platform HPC provides a unified web-based interface for deployment and management of the cluster. Additionally, it includes application submission templates, giving administrators the flexibility to create templates that greatly simplify the submission of jobs for their users. A number of templates for well known ISV and open source applications are also included as standard. For ISC, a template was created to allow Intel Embree to be easily launched through the built-in workload manager for execution on Intel Xeon or Intel Xeon Phi coprocessors.
Finally, when the processor intensive Intel Embree application was running, the monitoring and reporting capabilities of IBM Platform HPC provided both real time and historical reporting on the health of each node in the cluster - including metrics specific to the Intel Xeon Phi coprocessor such as temperature, power consumption and utilization - all through a consistent web-based interface.
Enjoy the short video of the demo here.
I've always enjoyed a good road trip. There's just something fun about jumping in the car, and heading to a far off location. As they say, half of the fun is just getting to your destination. My latest road trip brought me to Frankfurt for ISC High-Performance 2015.
Crossing all of Austria as well as the southern part of Germany, this trip proved to be no less exciting than the rest. Breaking down about 50 km from Frankfurt due to a dead battery, I was fortunate enough to meet a local family who helped to boost my car so that I could make it in time for the show. Luckily I had some craft beer to reward them for their help. Of course, part of the excitement this time was the fabled Autobahns of Germany. Here I could get up to some decent speeds - legally :)
Refreshments are always needed on long trips...
Frankfurt too had some interesting surprises in store - including the interesting culinary treat Handkäse mit Musik, which is a sour milk cheese served with onions. I'll let you read what the Musik part is all about. There too is the Apfelsaftschorle which I constantly mistook for beer at the ISC High-Performance venue. Such is life :)
For me, where the rubber hit the road was the ISC High-Performance event. The IBM booth (928) featured a refreshing bright yellow colour scheme, like the dawning of a new era of High-Performance Computing built on Data Centric Systems and OpenPOWER. In terms of demos, the IBM booth featured a number of live and static demos including:
- OpenPOWER HPC Server and Cirrascale GPU Developer System
- IBM High Performance Services for HPC
- IBM Data Engine for Analytics
- IBM Watson tranSMART Translational Medicine Solution
- Pluto (astrophysics hydrodynamics/magneto-hydrodynamics) running live on Power8 + GPU
- OpenFOAM (CFD)
- High Performance Storage System (HPSS)
The OpenPOWER hardware that was on the show floor attracted a lot of attention. Many people were impressed to see and touch the two Power8 systems which included technology from OpenPOWER members including Mellanox and NVIDIA. You may have read about my interest in Power and ARM based systems in some of my earlier blogs.
Being part of the Marketing team for IBM Software Defined Infrastructure, I could frequently be found at the IBM High Performance Services for HPC demo point. Here we demonstrated our turnkey cloud solution for HPC workloads, built on top of SoftLayer and featuring IBM Platform LSF and Platform Symphony workload management options. The demo leveraged the work done by MINES ParisTech and Transvalor to provide CFD services to French industry. You can read more about how MINES ParisTech and Transvalor leverage IBM High Performance Services for HPC here.
ISC also offered us the opportunity to showcase an interactive conceptual demo of the IBM Platform LSF family of products to passersby. Here users could learn that the Platform LSF family is not simply about workload management; Platform Process Manager and Platform Application Center help to boost user productivity through ease of use and simplification.
So what’s next? Toronto to Austin road trip? Yeah, that doesn’t sound like a bad idea.
See y’all in Texas!
It’s been ages since my last blog. What better way to start off the new year than by looking at the past? In this case, let’s wind the clock all the way back to circa 2001. This was the era of the Intel Pentium 4 processors. However, today we’ll be looking at something far less pedestrian. Based on the Scalable Processor Architecture (SPARC), the NatureTech 777 GenialStation is an UltraSPARC IIe notebook computer. Why do I have an UltraSPARC IIe based notebook computer? Why not? And it’s oh so cool with its lovely blue and gray chassis, as opposed to boring old black.
The NatureTech 777 notebook boasted such specs as:
- SUN UltraSPARC IIe @ 500 MHz w/256 KB L2 cache
- 15.0" TFT SXGA LCD panel
- 256 MB ECC RAM
- 80 GB IDE disk
- CD/DVD combo drive
- 3.5” floppy disk drive
- 5400 mAh / 11.1 V Li-ion Smart Battery Pack (mine is dead)
- Built-in H/W security controller, 4 button input
- A honking noisy fan that always runs at full speed
What can you do with a NatureTech 777 laptop? Well, at this stage of its life, I don’t use it for much apart from tinkering. Back in the day, being able to take SUN Solaris on the road in a portable package was quite impressive and I understand that these systems also went for a premium price at the time.
I was surprised to not find any NatureTech video on YouTube or other such sites. So, I’m pleased to present this beast of a laptop in all its glory booting up Solaris 9 and running Linpack - of course compiled with the requisite SunPro compilers (and SUN math libraries). No speed records broken here of course, and with that fan running constantly in overdrive, I would not expect any thermal issues either :)
I’m lucky enough to have the fancy laptop bag from the manufacturer which proudly proclaims that it’s carrying a SPARC based piece of equipment.
As the SUN sets on this blog, I reminisce about the days of variety in computing - different processors, different operating systems - when RISC was king. Hopefully, we are entering another such era with the rise of ARM, OpenPOWER, MIPS and the others that are out there.
IBM Platform HPC V4.1.1
IBM Platform Cluster Manager V4.1.1
IBM Platform HPC provides the ability to customise the network configuration of compute nodes via Network Profiles. Network Profiles support a custom NIC script for each defined interface.
This provides the ability to configure network bonding and bridging. Here we provide a detailed example on how to configure a network bridge in a cluster managed by IBM Platform HPC.
IBM Platform HPC includes xCAT technology for cluster provisioning. xCAT includes a script
(/install/postscripts/xHRM) which may be used to configure network bridging. This script is leveraged as a custom network script in the example below.
The configuration of the network provision may be viewed in the IBM Platform HPC Web console at:
Resources > Node Provisioning > Networks.
The configuration of network provision may also be viewed using the lsdef CLI:
# lsdef -t network provision
Object name: provision
The Network Profile default_network_profile which includes the network provision may be viewed in the IBM Platform HPC Web console at: Resources > Node Provisioning > Provisioning Templates > Network Profiles.
The Network Profile default_network_profile configuration may also be viewed using the lsdef CLI.
# lsdef -t group __NetworkProfile_default_network_profile
Object name: __NetworkProfile_default_network_profile
Here, we configure a network bridge br0 against eth0 for compute nodes using a new Network Profile.
1. Add a new Network Profile with name default_network_profile_bridge via the IBM Platform HPC Web console. As an Administrator user, browse to Resources > Node Provisioning > Provisioning Templates > Network Profiles and select the button Add.
A total of three devices are required to be added:
- Type: Ethernet; Network: provision
- Type: BMC; Network: provision
- Type: Customized; Network: provision; Configuration Command: xHRM bridgeprereq eth0:br0 (creates network bridge br0 against eth0)
The new Network Profile default_network_profile_bridge is shown below.
2. Now we are ready to provision the nodes using the new Network Profile default_network_profile_bridge. To begin the process of adding nodes, navigate in the IBM Platform HPC Web console to Resources > Devices > Nodes and select the button Add. Within the Add Nodes window, optionally select Node Group compute, and select Specify Properties for the provisioning template. This will allow you to select the newly created network profile default_network_profile_bridge. Here the hardware profile IPMI and stateful provisioning are used.
Nodes are added using Auto discovery by PXE boot. Nodes may also be added using a node information file.
The nodes are powered on, detected by IBM Platform HPC and provisioned. In this example, two nodes compute000, compute001 are detected and subsequently provisioned.
3. Once the nodes have been provisioned and complete their initial boot, they appear in the IBM Platform HPC Web console (Resources > Devices > Nodes) with Status booted and Workload Agent OK.
The network bridge is configured on the nodes as expected. We may see this via the IBM Platform HPC Web console by browsing to Resources > Devices > Nodes and selecting the Summary tab and scrolling to Other Key Properties.
Finally, using the CLI xdsh, we remotely execute ifconfig on node compute001 to check the configuration of interface br0.
# xdsh compute001 ifconfig br0
compute001: br0 Link encap:Ethernet HWaddr 00:1E:67:49:CC:E5
compute001: inet addr:192.0.2.20 Bcast:192.0.2.255 Mask:255.255.255.0
compute001: inet6 addr: fe80::b03b:7cff:fe61:c1d4/64 Scope:Link
compute001: UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
compute001: RX packets:26273 errors:0 dropped:0 overruns:0 frame:0
compute001: TX packets:42490 errors:0 dropped:0 overruns:0 carrier:0
compute001: collisions:0 txqueuelen:0
compute001: RX bytes:11947435 (11.3 MiB) TX bytes:7827365 (7.4 MiB)
The compute nodes have been provisioned with a network bridge br0 configured.
These days it’s not uncommon to hear about CPUs based upon ARM cores. They can be found in mobile phones, embedded systems, laptops and even servers. Indeed, recently there have been a number of major announcements from vendors building processors based on ARM cores. These include the AMD Opteron A1100, NVIDIA Tegra K1 and even the Apple A7, which is used in the iPhone 5s. What these all have in common is that they are 64-bit and based on the ARM v8 ISA. At the same time, the ARM server chip startup Calxeda announced it was shutting down. Surging power requirements, as well as the arrival of 64-bit chips, have led to renewed interest in energy efficient ARM based processors for High-Performance or Technical Computing.
When building out an infrastructure for Technical Computing, a workload manager is typically used to control access to the computing resources. As it turns out, the leading workload manager IBM Platform LSF (from Platform Computing) has supported Linux on ARM for about 10 years. In fact, today there are IBM clients using Platform LSF on Linux ARM-based clusters as part of mobile device design and testing.
The current release of IBM Platform LSF 9.1.2 supports Linux on ARM v7, with upcoming support for ARM v8. Given that Platform LSF provides the ability to build out heterogeneous clusters, creating a compute cluster containing ARM, Power and x86 based nodes is a snap. Jobs may be targeted to a specific processor type, and the optional portal IBM Platform Application Center provides an easy to use, highly configurable, application-centric web based interface for job management.
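Targeting a processor type is done with an LSF resource requirement string at submission time. A minimal sketch, using the LINUX_ARM host type string that lshosts reports for these systems (the application name is a placeholder):

```shell
# Steer a 4-way job onto ARM nodes only in a mixed ARM/Power/x86 cluster.
# "type" is the built-in LSF host type attribute shown by lshosts.
bsub -R "type==LINUX_ARM" -n 4 ./my_arm_app
```

Submitting without the -R string lets the scheduler place the job on any available host type.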
Hello. How do you "doo"?
I’ve recently had the opportunity to test IBM Platform LSF on a two-node, ARM based cluster. The IBM Platform LSF master node was a Udoo Quad system running Debian “Wheezy” ARMv7 EABI hard-float. The second “node” was running Fedora on an ARM v8 simulator. Installation and operation of the software was identical to other platforms. Using the Platform LSF ELIM (External LIM) facility for adding external load indices, I was able to quickly create a script to report the processor temperature on the Udoo Quad system.
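As a sketch of what such an ELIM can look like (a hypothetical reconstruction, not my exact script; the sysfs path is board-specific and varies by kernel):

```shell
#!/bin/sh
# Hypothetical LSF ELIM sketch reporting a "cputemp" external load index.
# Assumption: the SoC exposes its temperature in millidegrees Celsius at
# this sysfs path (adjust for your board/kernel).
THERMAL="${THERMAL:-/sys/class/thermal/thermal_zone0/temp}"

read_cputemp() {
    # Millidegrees -> degrees Celsius; fall back to 0 if the path is absent.
    raw=$(cat "$THERMAL" 2>/dev/null || echo 0)
    echo $((raw / 1000))
}

# An ELIM repeatedly writes "<nindices> <name> <value>" to stdout, which
# LIM reads to publish the external load index. A real ELIM loops with a
# sleep between reports; a single report is shown here.
echo "1 cputemp $(read_cputemp)"
```

The script goes into the LSF server directory and the new index is declared in the cluster configuration; the cputemp column in the lsload output below comes from exactly this mechanism.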
Now, putting Platform LSF through its paces, we see that the type, model and other physical characteristics of the nodes are detected.
$ lshosts -w
HOST_NAME type model cpuf ncpus maxmem maxswp server RESOURCES
udoo LINUX_ARM ARM7l 60.0 4 875M - Yes (mg)
ma1arms4 LINUX_ARM ARM8 60.0 1 1.8G 1.9G Yes ()
Looking at the load information on the system, we see the built-in load indices, in addition to the cputemp metric which I introduced to report the CPU temperature (Celsius). At this point the system is essentially idle.
$ lsload -l
HOST_NAME status r15s r1m r15m ut pg io ls it tmp swp mem cputemp
udoo ok 0.5 0.6 1.5 4% 0.0 311 1 0 1297M 0M 701M 45.0
ma1arms4 busy 3.6 *7.7 6.2 52% 0.0 50 3 0 954M 1.9G 1.6G 0.0
Next, we submit a job for execution to Platform LSF. Rather than the requisite sleep job, we submit something a bit more interesting: the HPC Challenge benchmark (HPCC). Debian Wheezy happens to include a pre-compiled binary built against OpenMPI.
As the Udoo Quad is a 4 core system (as the name implies), hpcc is submitted requesting 4 cores.
$ bsub -n 4 mpiexec -n 4 /usr/bin/hpcc
Job <2> is submitted to default queue <normal>.
With HPCC running, we quickly see the utilization as well as the CPU temperature increase to 60C.
$ lsload -l
HOST_NAME status r15s r1m r15m ut pg io ls it tmp swp mem cputemp
udoo ok 5.1 5.1 2.4 94% 0.0 49 1 0 1376M 0M 497M 60.0
ma1arms4 ok 0.5 1.1 1.2 40% 0.0 50 3 0 954M 1.9G 1.6G 0.0
During the life of the job, resource utilization may be easily viewed using the Platform LSF user commands. This includes details such as the PIDs that comprise the job.
$ bjobs -l
Job <2>, User <debian>, Project <default>, Status <RUN>, Queue <normal>, Comman
d <mpiexec -n 4 /usr/bin/hpcc>, Share group charged </debi
Sun Feb 2 23:49:48: Submitted from host <udoo>, CWD </opt/ibm/lsf/conf>, 4 Pro
Sun Feb 2 23:49:48: Started on 4 Hosts/Processors <udoo> <udoo> <udoo> <udoo>,
Execution Home </home/debian>, Execution CWD </opt/ibm/ls
Sun Feb 2 23:51:05: Resource usage collected.
The CPU time used is 227 seconds.
MEM: 140 Mbytes; SWAP: 455 Mbytes; NTHREAD: 8
PGID: 15678; PIDs: 15678 15679 15681 15682 15683 15684
Here we could speak of GFlops and other such measures of performance, but that was not my objective. The key point is that there is growing interest in non-x86 solutions for Technical Computing. IBM Platform LSF software has supported and continues to support a wide variety of operating systems and processor architectures, from ARM to IBM Power to IBM System z.
As for ARM based development boards such as the Udoo Quad, Parallella Board, etc., they are inexpensive as well as energy efficient. This makes them of interest to HPC scientists looking at possible approaches to energy efficiency for HPC workloads. Let us know your thoughts about the suitability of ARM for HPC workloads.
Whether your HPC center is in Lilliput or Blefuscu, you'll appreciate the importance of a flexible and easy-to-use cluster management solution to empower your populations. Administrators need software that will allow them to easily set up, manage, monitor and maintain their infrastructure and ensure consistency for repeatable performance. With the varied workloads we see in modern HPC centers, ranging from traditional HPC to Big Data and Analytics, organizations may also consider building out heterogeneous environments, where different hardware types are used for different workloads. As the OpenPOWER Foundation grows and stresses the overall importance of workflows across the HPC Data Life Cycle, it's clear that when it comes to solutions for technical computing, it's no longer a one horse race.
IBM Platform Cluster Manager is powerful, easy-to-use infrastructure management for today’s scale out computing needs. The latest release of Platform Cluster Manager V4.2.1 now provides the ability to manage mixed computing environments - so whether you're running Linux on POWER Big-Endian or Little-Endian, the choice is yours. In fact, you can even build out and seamlessly manage a mixed infrastructure taking advantage of the latest IBM POWER8 and x86 systems.
Leveraging xCAT technology, Platform Cluster Manager can manage clusters ranging from 'Lilliputian' in size all the way up to 2,500 nodes. Platform Cluster Manager Advanced Edition supports the automated creation of multiple clusters on a shared infrastructure - allowing you to easily satisfy the business requirements of Lilliputians and Blefuscans alike. For organizations with a single HPC cluster, Platform Cluster Manager Standard Edition provides the ability to quickly provision, run, manage and monitor a technical computing infrastructure with unprecedented ease.
For users taking advantage of IBM POWER8 systems, Platform Cluster Manager can now provision PowerNV nodes as well as PowerKVM hypervisors, providing greater flexibility in infrastructure management and optimization. Further enhancements in this release geared towards administrator productivity include IBM POWER8 energy monitoring, PowerKVM monitoring and enhanced switch monitoring.
So go ahead. With Platform Cluster Manager you can crack your eggs any way you like.
Super Computing 2013 has now come to a close. For those of you who were in Denver, we hope that you had the opportunity to visit the IBM booth. Among the many live demonstrations running at the IBM booth, there was a demo of IBM Platform HPC for System x. You can find out more details about the IBM Platform HPC demo at the IBM Platform Computing Community page.
In addition to the demo running live on IBM NeXtScale, there was also a static IBM NeXtScale system on display for people to touch, and see.
The IBM Platform HPC demo featured IBM NeXtScale and the Weather Research and Forecasting Model (WRF) application.
Even though SC13 has just wrapped up, I'm already looking forward to next year's events.
This week we look at another RISC powered notebook, this time from IBM. Although IBM did produce a line of PowerPC based Thinkpad systems, this blog is focused on a little known system called the IBM Workpad z50. This Microsoft Handheld PC form factor system was launched in March 1999 and ran Windows CE at the time. As we’ll see below, with some ingenuity it is also able to run NetBSD, which makes it a much more interesting proposition (at least for me). Ironically, although this is a High-Performance Computing (HPC) focused blog, the “HPC” in this case stands for “Handheld PC”.
The Workpad z50 has a form factor smaller than a notebook, but has what I consider to be an excellent keyboard, and of course the trademark Thinkpad trackpoint! Looking more closely at the specifications:
- NEC VR4121 MIPS R4100 CPU @ 131 MHz
- 16 MB system RAM (expandable)
- 16 MB system ROM
- 8.4” LCD display, 640x480 (16-bit)
- External monitor connector (SVGA)
What prevents me from taking my pristine Workpad z50 to the local electronics recycling facility is NetBSD. With a little effort it is possible to install recent versions of NetBSD on the Workpad z50 and even have X Windows running. There are a number of sources of information on this topic, including some videos on YouTube.
I won’t run through the install procedure here as that’s been well covered already. Rather, let’s look at the boot-up sequence and of course in keeping with the High-Performance Computing theme, run a simple benchmark. Links to the videos follow below:
Using NetBSD pkgsrc, I have set up NetBSD on an x86 based system and have taken advantage of distcc to cross compile binaries. This helps greatly to get packages compiled quickly for the system.
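A rough sketch of how such a distcc setup can be driven (hypothetical host name and package; the exact make arguments depend on your pkgsrc configuration):

```shell
# Hedged sketch: offloading pkgsrc compiles from the slow MIPS machine to
# a faster helper via distcc. "x86build" is a placeholder host name.
DISTCC_HOSTS="localhost x86build/4"   # helper accepts up to 4 parallel jobs
export DISTCC_HOSTS

# From a pkgsrc package directory one would then build with distcc as the
# compiler front end, for example:
#   cd /usr/pkgsrc/misc/figlet
#   make MAKE_JOBS=4 CC="distcc cc" package-install
echo "distcc will fan out to: $DISTCC_HOSTS"
```

distcc ships preprocessed sources to the helpers and links locally, so the resulting binaries still match the target system.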
Equipped with PCMCIA, I’m able to easily add to the Workpad z50 such capabilities as Ethernet, Wireless networking and even SCSI.
Next steps? I'll be looking to move to the NetBSD 6.x series and compile a more compact kernel (with the drivers I don't require removed). Unlike the system in my previous blog, this one is silent.