03/03/2017 | Written by: Frank van der Wal
Categorized: CIO's Corner
One of the things I always felt proud about at IBM was its High Performance Computing (HPC) capabilities. In the 2000s IBM had a significant presence in that market. The BlueGene supercomputer was famous for its sheer compute power. It was a very well designed system: its processors ran at only 700 MHz, while the rest of the pack ran processors at twice that speed or more — and still couldn't keep up.
Despite the ‘slow’ processor, the BlueGene was on many occasions the fastest piece of hardware on the planet. The engineers understood that a fast processor alone was not enough; getting data in and out of the system was equally important.
I do remember IBM’s RoadRunner system as being the first supercomputer ever to break the 1 PetaFLOPS barrier, back in 2008. (A PetaFLOPS is 10^15 — a thousand trillion — floating point operations per second, which is pretty darn fast.) That system was not a BlueGene, by the way; it was powered by the unprecedented CELL processor accompanied by AMD Opterons. To be more precise, 12,960 of the first and 6,480 of the latter. Apart from being the fastest system, it was also extremely power-efficient.
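To make that unit concrete, here is a quick back-of-the-envelope check. The 1.026 PFLOPS figure is RoadRunner's widely reported Linpack result from the June 2008 TOP500 list; the script just spells out the prefix arithmetic.

```python
# 1 PetaFLOPS = 10^15 floating point operations per second (SI peta- prefix).
PETA = 10**15

# RoadRunner's June 2008 Linpack result, in PFLOPS.
roadrunner_pflops = 1.026

# Total floating point operations per second.
ops_per_second = roadrunner_pflops * PETA
print(f"RoadRunner: {ops_per_second:.3e} FLOP/s")
```

In other words, crossing the "1 PetaFLOPS barrier" meant sustaining more than a thousand trillion floating point operations every second.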
As a techie, I enjoyed that field and was sad to see IBM abandon it — for good reason, I guess. You could argue about the spin-off value of having the fastest computer in the world, but it doesn’t generate much revenue, let alone profit.
Surprised and delighted!
So I was a bit surprised, yet delighted, when I found out that IBM announced a comeback! This time around not with special-purpose systems but with the renowned POWER8 systems. And although the design of the POWER8 chip is extremely well suited for HPC, it is not only the processor that counts. The whole system stack — from processors through memory and storage all the way to workload management — needs to be optimized to get an economic cost/performance ratio.
The reason for IBM to re-enter the HPC market should be clear. A year from now we will be generating 4.3 exabytes (4,300 petabytes) of data daily. We need solid compute power to make any sense of that. An interesting new addition is that we now also talk about HPDA, which stands for High Performance Data Analytics. You might debate the differences between HPC and HPDA; for me, the former covers scientific calculations on, for example, seismic data or healthcare workloads such as protein folding, whereas HPDA comes into the picture as more and more unstructured data is added. HPDA applications could include cyber-security and fraud detection.
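The exabyte figure is easier to grasp after a quick conversion. A short sketch of the prefix arithmetic (using decimal SI units, so 1 EB = 1,000 PB):

```python
# Decimal SI prefixes: exa- is 10^18 bytes, peta- is 10^15 bytes.
EB = 10**18
PB = 10**15

# Daily data volume cited above: 4.3 exabytes.
daily_bytes = 4.3 * EB

# Express the same volume in petabytes.
print(f"{daily_bytes / PB:.0f} PB per day")
```

That works out to 4,300 petabytes every single day — more than four times RoadRunner-era storage scales, generated daily.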
The market opportunity requires a flexible, modern, and speedy architecture. The OpenPOWER Foundation and its many collaborating members (a list that keeps growing) continue to innovate, providing a range of solutions to accelerate performance and reduce cost across the entire iterative workflow: from data to analytics to learning to the best business outcomes.
So this still-proud IBMer is again excited to see what IBM will do in this ever-interesting area!