A Critical Role for Hardware in the Era of Cognitive Business

In the last few months, I’ve witnessed the beginning of a sea change in the way people at the forefront of computer science think about the future of our field. Faculty members at universities are showing a keen interest in cognitive systems. And I’m not talking just about algorithms and software. They want to discuss the processor and system technologies that will support a new generation of applications and a new era of computing.

In my view, this is a key step toward taking cognitive technologies mainstream—a shift I expect to accelerate this year. Academia is a cauldron of experimentation on the leading edge of science and technology. Think about how open source software took hold. Students embraced it and carried it out into the world when they graduated. The same thing will happen now with cognitive technologies.

Computer systems technologies will be critical in this new era. Back when the world shifted from the horse and buggy to the automobile, paved roads were needed to enable people to enjoy the full benefits of the internal combustion engine. So roads were paved and, eventually, highway systems were built. The same will be true in today’s transition from conventional computing to cognitive computing. We need new infrastructures designed for big data and smart machines.

Let’s start with microprocessors. They have driven much of the progress in computing since the 1970s—thanks to the tech industry’s ability to fulfill the promise of Moore’s Law by packing ever more transistors on fingernail-size chips. In recent years, advances in low-power and reduced-instruction-set processors have enabled individual systems and networks of servers to take on increasingly data-heavy tasks.

IBM’s latest Power processors were designed specifically for big data, and they’re well suited for cognitive computing because of their ability to deal with large volumes of unstructured data. They perform 2.5x better than conventional processors when attacking the most demanding computing jobs. That’s primarily because of their superior throughput for transferring data back and forth between memory and the processor, and because of their ability to execute more computing processes concurrently.

Many of the changes that are coming to computing systems are embodied in the next generation of supercomputers for the U.S. Department of Energy. Under the CORAL project, Oak Ridge National Lab’s “Summit” and Lawrence Livermore National Lab’s “Sierra” will run on IBM’s Power processors and are expected to perform five to seven times faster than the top supercomputers in the United States today. The machines are expected to be delivered in 2017.

When the DoE announced these two CORAL computers, its leaders embraced a new approach to designing computers and data centers that IBM engineers had been advocating for several years—data-centric computing. The idea is based on the recognition that in the era of big data and cognitive systems, it’s too costly in time and money to move all the data that has to be processed to central processing units. Instead, with data-centric computing, processing is spread throughout the system’s data and storage hierarchy.
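The contrast at the heart of data-centric computing can be sketched in a few lines of Python. Everything here is illustrative, not an IBM API: the point is simply that shipping a small computation to where each data partition lives moves far less over the network than hauling every record to one central processor.

```python
# Illustrative sketch of the data-centric idea: move the computation
# to the data rather than the data to a central processor.
# All names here are hypothetical, not part of any real API.

# Imagine three storage nodes, each holding a large local partition.
partitions = {
    "node-a": [2, 4, 6],
    "node-b": [1, 3, 5],
    "node-c": [10, 20, 30],
}

def compute_centric(parts):
    """Classic approach: haul every record to one CPU, then process."""
    all_data = [x for part in parts.values() for x in part]  # costly data movement
    return sum(all_data)

def data_centric(parts):
    """Data-centric approach: run the reduction where each partition
    lives, and move only the tiny partial results."""
    partial_sums = [sum(part) for part in parts.values()]  # done "at" each node
    return sum(partial_sums)  # only small partials cross the network

# Both produce the same answer; the difference is how much data moved.
assert compute_centric(partitions) == data_centric(partitions)
```

In the toy case the records are a few integers, so the saving is invisible; with petabyte-scale partitions, moving only the partial results instead of the raw data is the entire point.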

In CORAL, the DoE supported another major IBM initiative, which, I believe, will also emerge as an essential piece of the infrastructure that supports cognitive computing. That’s the Open Power ecosystem. IBM opened up technology surrounding Power, including processor specifications and firmware, to enable other tech companies to design and build servers and components based on a common architecture. For the CORAL machines, we’re collaborating with NVIDIA and Mellanox, two Open Power participants, to incorporate NVIDIA’s accelerators and Mellanox’s speedy data transfer technologies.

Accelerators are emerging as elements of the computing systems of the future. GPUs such as NVIDIA’s have their roots in gaming on PCs, but have grown up to become key server technologies. They’re being used today to support compute-intensive machine learning applications. Another kind of accelerator, Field Programmable Gate Arrays, or FPGAs, can be reprogrammed after they’re installed in systems. That makes them a good match for improving the performance and energy efficiency of a wide range of data-intensive applications.

Back to CORAL again. The DoE computers will also adopt a new approach called software-defined storage. These technologies enable computing systems to use a combination of disk, flash and in-memory storage most efficiently for data-intensive computing tasks and to tap into storage resources at multiple data centers on the fly when more capacity is needed.
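One way to picture software-defined storage is as policy code deciding, per data set, which physical tier to use. The sketch below is purely hypothetical—the tier names and thresholds are invented for illustration—but it captures the idea of software placing hot data in memory, warm data on flash, cold data on disk, and spilling to another data center when local capacity runs out.

```python
# Hypothetical sketch of a software-defined storage policy. The same
# logical data set can land on memory, flash, or disk depending on how
# "hot" it is, and can spill to a remote data center when local
# capacity is exhausted. Tier names and thresholds are invented.

def place(access_per_hour, local_capacity_left_gb, size_gb):
    """Pick a storage tier for one data set from simple, made-up rules."""
    if size_gb > local_capacity_left_gb:
        return "remote-datacenter"  # tap another site's storage on demand
    if access_per_hour > 1000:
        return "in-memory"          # hottest data stays closest to compute
    if access_per_hour > 10:
        return "flash"
    return "disk"                   # cold data on the cheapest local tier

print(place(5000, 100, 1))  # hot and small  -> "in-memory"
print(place(1, 100, 1))     # cold           -> "disk"
print(place(50, 2, 10))     # no local space -> "remote-datacenter"
```

Real software-defined storage layers make these decisions continuously and transparently, re-tiering data as access patterns change rather than at a single placement moment.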

These same technologies, available as Power Systems and Spectrum storage offerings, are driving the next generation of enterprise IT infrastructure. They’re increasing the processing performance of Watson cognitive services by 10x, accelerating genomics applications by 25x, and reducing infrastructure costs for NoSQL stores for unstructured data by 3x.

The current generation of z Systems has also been designed to bring cognitive capabilities to core enterprise data. Circuit designers working on the processor at the heart of the z13 dramatically improved the data-crunching ability of the mainframe, making it an excellent platform for cognitive workloads. Enterprise developers can leverage Apache Spark, Hadoop and cognitive services to integrate insights from new sources of data with insights from core enterprise data, without impacting transaction speeds, moving data around or increasing data security risks.

So you can see how today’s system technologies, which were designed with big data and Internet-scale data centers in mind, are being adapted to deal with the even greater demands of cognitive computing. I expect advances in processors, system design, accelerators and storage to come in waves over the next few years.

In addition, the tech industry will need to invent radically new architectures for cognitive computing. One promising area is processors to support neural networks—which are specifically designed to extract patterns from unstructured data extremely quickly. Another is quantum computing. Scientists at IBM Research, other tech companies, and within academia are producing advances in the field that could result in functioning quantum computers within a decade that process data exponentially faster than today’s machines.
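The kind of pattern extraction neural networks perform can be shown at toy scale: below, a single artificial neuron learns the logical OR pattern from four examples using the classic perceptron learning rule. This is a minimal sketch of the principle only—real cognitive workloads involve networks with millions of such units, which is exactly what specialized neural-network processors are being designed to accelerate.

```python
# A toy neuron learning the logical OR pattern from labeled examples,
# via the classic perceptron learning rule. Minimal sketch of the
# principle behind neural-network pattern extraction, not a real
# cognitive workload.

samples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w = [0.0, 0.0]  # weights, one per input
b = 0.0         # bias
lr = 0.1        # learning rate

def predict(x):
    """Fire (output 1) if the weighted sum of inputs exceeds zero."""
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Perceptron rule: nudge the weights toward each misclassified example.
for _ in range(20):
    for x, target in samples:
        error = target - predict(x)
        w[0] += lr * error * x[0]
        w[1] += lr * error * x[1]
        b += lr * error

# After training, the neuron has extracted the OR pattern from the data.
assert all(predict(x) == target for x, target in samples)
```

Nothing in the code names "OR" explicitly; the rule was recovered purely from examples, which is the essence of what these architectures do at vastly larger scale and speed.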

This is an incredibly exciting time in the computer industry. And for someone like me, who got his start in chips and chip manufacturing and then moved into systems, it’s a great time to be in the hardware end of the business. I can’t wait to see—and play a role in—what happens next.
