February 7, 2019 | Written by: Mukesh Khare
Categorized: AI | AI Hardware
Artificial intelligence has the potential to solve some of science and industry’s most vexing challenges. But for that to happen, it needs a new generation of computer systems. Today, AI’s ever-increasing sophistication is pushing the boundaries of the industry’s existing hardware systems as users find more ways to incorporate data from the edge, the Internet of Things, and other sources. In the continued pursuit of more advanced hardware for the AI era, IBM is working across its Systems, Research, and Watson divisions to take a fresh approach to AI, one that requires significant changes in the fundamentals of systems and computing design.
To help achieve AI’s true potential, IBM, with support from New York State (NYS), SUNY Polytechnic Institute, and the founding partnership members, today announced an ambitious plan to create a global research hub to develop next-generation AI hardware and expand their joint research efforts in nanotechnology. The IBM Research AI Hardware Center will be the nucleus of a new ecosystem of research and commercial partners collaborating with IBM researchers to further accelerate the development of AI-optimized hardware innovations.
Partnerships within an open ecosystem are key to advancing the hardware and software innovations that form the foundation of AI. The new IBM Research AI Hardware Center partnerships announced today will aid in those continuing efforts. Samsung is a strategic IBM partner in both manufacturing and research. Synopsys is the leader in software platforms, emulation and prototyping solutions, and IP for developing the high-performance silicon chips and secure software applications that are driving advancements in AI.
Partnerships with leading semiconductor equipment companies Applied Materials and Tokyo Electron Limited (TEL) are crucial to the successful introduction of disruptive materials and devices to fuel our AI hardware roadmap.
We are advancing plans with our SUNY Polytechnic Institute host in Albany, New York, to provide expanded infrastructure support and academic collaborations, and with neighboring Rensselaer Polytechnic Institute (RPI) Center for Computational Innovations (CCI) for academic collaborations in AI and computation. Working through the Center, IBM and its partners will advance a range of technologies from chip level devices, materials, and architecture, to the software supporting AI workloads.
Today’s systems have achieved improved AI performance by pairing machine-learning workloads with high-bandwidth CPUs and GPUs, specialized AI accelerators, and high-performance networking equipment. To maintain this trajectory, new thinking is needed to accelerate AI performance scaling to match ever-expanding AI workload complexity.
The IBM Research AI Hardware Center will enable IBM and its partner ecosystem to overcome current machine-learning limitations through approaches that include approximate computing through our Digital AI Cores and in-memory computing through our Analog AI Cores. These technologies will help pave a path to 1,000x AI performance efficiency improvement over the next decade, as shown in Figure 1.
Figure 1: IBM Research AI Hardware Center is developing a roadmap for 1,000x improvement in AI compute performance efficiency over the next decade, with a pipeline of Digital AI Cores and Analog AI Cores.
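The approximate-computing idea behind the Digital AI Cores can be illustrated with a minimal sketch of our own (not IBM's implementation): quantizing weights to a few bits before a matrix-vector product trades a small, bounded accuracy loss for far cheaper arithmetic.

```python
import numpy as np

def quantize(x, bits=4):
    """Uniform symmetric quantization of an array to `bits` bits (illustrative)."""
    levels = 2 ** (bits - 1) - 1          # e.g. 7 positive levels for 4 bits
    scale = np.max(np.abs(x)) / levels
    return np.round(x / scale) * scale    # dequantized low-precision values

rng = np.random.default_rng(0)
w = rng.standard_normal((64, 64))         # full-precision weights
x = rng.standard_normal(64)               # input activations

y_full = w @ x                            # reference result
y_approx = quantize(w, bits=4) @ x        # approximate (low-precision) result

rel_err = np.linalg.norm(y_full - y_approx) / np.linalg.norm(y_full)
print(f"relative error at 4 bits: {rel_err:.3f}")
```

The sketch only mimics reduced precision in software; real digital AI cores perform the arithmetic natively at low bit-widths, which is where the energy and area savings come from.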
The Center will host research and development, emulation, prototyping, testing, and simulation activities for new AI cores specially designed for training and deploying advanced AI models, including a test bed in which members can demonstrate Center innovations in real-world applications. Specialized wafer processing for the IBM Research AI Hardware Center will be done in Albany, with some support at IBM’s Thomas J. Watson Research Center in Yorktown Heights, N.Y.
Figure 2: Our analog AI cores are part of an in-memory computing approach that improves performance efficiency by suppressing the so-called von Neumann bottleneck, eliminating data transfer to and from memory. Deep neural networks are mapped to analog cross point arrays, and new non-volatile material characteristics are toggled to store network parameters in the cross points. Learn how this technology works in our live interactive demo.
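As a rough illustration of the in-memory principle described above (a toy NumPy sketch, not IBM's hardware): if a layer's weights are stored as conductances G at the cross points, applying input voltages V to the rows yields column currents I = GᵀV by Ohm's and Kirchhoff's laws, so the matrix-vector product happens where the data lives, with no weight movement.

```python
import numpy as np

rng = np.random.default_rng(1)
G = rng.uniform(0.0, 1.0, size=(4, 3))   # conductances at the cross points
V = np.array([0.2, 0.5, 0.1, 0.9])       # input voltages applied to the rows

# Each column current sums V_i * G_ij over the rows (Kirchhoff's current law);
# a small noise term mimics analog device non-ideality.
noise = rng.normal(0.0, 0.01, size=3)
I = G.T @ V + noise                       # read-out currents = approximate matvec

print("column currents:", I)
```

Because the result is read out as analog currents, it is inherently approximate, which is why such cores target the error-tolerant multiply-accumulate operations that dominate deep-learning workloads.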
Design, development, and optimization of next-generation AI processors require a cross-disciplinary approach that leverages the unique strengths of different organizations. The IBM Research AI Hardware Center will expand the existing IBM and NYS network of semiconductor companies by hosting a collaboration hub of industry partners that includes fabless companies, semiconductor manufacturers, AI practitioners, and consumers. Partner organizations throughout the state will work with IBM to evolve AI from its current ability to perform specific, narrowly defined tasks to new capabilities that can solve a broader array of complex problems.
Figure 3: A chip comprising several Analog AI devices used for in-memory computing.
A key area of research and development will be systems that meet the demands of deep learning inference and training processes. Such systems offer significant accuracy improvements over more general machine learning for unstructured data. Those intense processing demands will grow exponentially as algorithms become more complex in order to deliver AI systems with increased cognitive abilities. Other research efforts will focus on creating a multi-year roadmap for the development and delivery of specialized accelerator cores and chip architectures that can further improve AI performance.
Hardware, which has played a foundational role in narrow AI’s maturation, will now expand and grow as we drive the next set of innovations. IBM, working with NYS and a broader ecosystem of partners, will pave the way for the next generation of artificial intelligence systems that will improve business and the lives of people all over the world.
Editor’s note: This post was updated in March 2020 to reflect current IBM Research AI Hardware Center partnerships.