AI Hardware

Steering Material Scientists to Better Memory Devices


Ideally, next-generation AI technologies should understand all our requests and commands, extracting them from a huge background of irrelevant information, in order to rapidly provide relevant answers and solutions to our everyday needs. Making these “smart” AI technologies pervasive—in our smartphones, our homes, and our cars—will require energy-efficient AI hardware, which we at IBM Research plan to build around novel and highly capable analog memory devices.

In a recent paper published in the Journal of Applied Physics, our IBM Research AI team established a detailed set of guidelines that emerging nanoscale analog memory devices will need to satisfy to enable such energy-efficient AI hardware accelerators.

We had previously shown, in a Nature paper published in June 2018, that training a neural network using highly parallel computation within dense arrays of memory devices such as phase-change memory is faster and consumes less power than using a graphics processing unit (GPU).
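To make the idea concrete, here is a minimal sketch (not IBM's actual implementation) of why a crossbar array of analog memory devices can compute so efficiently: each weight is stored as a device conductance, input activations are applied as voltages, and Ohm's law plus Kirchhoff's current law deliver the full matrix-vector product in a single analog step. In simulation, that step reduces to an ordinary matrix product; the array shapes and values below are illustrative.

```python
# Minimal sketch: how a crossbar array of analog memory devices performs a
# matrix-vector multiply in one step. Each weight is stored as a conductance
# G[i, j]; applying voltages V[j] to the columns makes each device pass a
# current G * V (Ohm's law), and the currents summed along each row
# (Kirchhoff's current law) form the output vector.
import numpy as np

rng = np.random.default_rng(0)

n_inputs, n_outputs = 4, 3
G = rng.uniform(0.0, 1.0, size=(n_outputs, n_inputs))  # device conductances (weights)
V = rng.uniform(-1.0, 1.0, size=n_inputs)              # input voltages (activations)

# The analog array produces all row currents at once; in simulation this is
# simply a matrix-vector product.
I = G @ V
print("Output currents (one per row):", I)
```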

Graphical representation of a crossbar array, where different memory devices serve in different roles

The advantage of our approach comes from implementing each neural network weight with multiple devices, each serving in a different role. Some devices are mainly tasked with memorizing long-term information. Other devices are updated very rapidly, changing as training images (such as pictures of trees, cats, ships, etc.) are shown, and then occasionally transferring their learning to the long-term information devices. Although we introduced this concept in our Nature paper using existing devices (phase-change memory and conventional capacitors), we felt there should be an opportunity for new memory devices to perform even better, if we could just identify the requirements for these devices.
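The division of labor can be illustrated with a small, hypothetical sketch: each weight is the sum of a long-term conductance scaled by a significance factor and a fast-update contribution. Training updates land on the fast device, and its accumulated value is occasionally transferred into the long-term device. The variable names and the value of the significance factor below are assumptions for illustration, not the paper's exact parameters.

```python
# Hypothetical sketch of a weight built from devices of varying significance:
# a long-term device (scaled by a factor F) plus a fast-update device.
import numpy as np

F = 10.0        # significance of the long-term device (assumed value)
G_long = 0.0    # long-term conductance contribution
g_fast = 0.0    # fast, frequently updated contribution

def weight():
    return F * G_long + g_fast

def apply_update(delta):
    """Rapid training updates land on the fast device only."""
    global g_fast
    g_fast += delta

def transfer():
    """Occasionally move accumulated learning into the long-term device."""
    global G_long, g_fast
    G_long += g_fast / F   # fold the fast device's value into the long-term one
    g_fast = 0.0

# Example: many small updates, then a transfer; the effective weight is preserved.
for _ in range(100):
    apply_update(0.01)
w_before = weight()
transfer()
assert abs(weight() - w_before) < 1e-9
```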

In our follow-up paper, just published in the Journal of Applied Physics, we quantified the device properties that these “long-term information” and “fast-update” devices need to exhibit. Because our scheme divides tasks across the two categories of devices, the requirements on each device are much less stringent, and thus much more achievable, than before. Our work provides a clear path for material scientists to develop novel devices for energy-efficient AI hardware accelerators based on analog memory.

The team (L-R): Sidney Tsai, Geoffrey Burr, Bob Shelby, Pritish Narayanan, Stefano Ambrogio


Perspective on training fully connected networks with resistive memories: Device requirements for multiple conductances of varying significance. Giorgio Cristiano, Massimo Giordano, Stefano Ambrogio, Louis P. Romero, Christina Cheng, Pritish Narayanan, Hsinyu Tsai, Robert M. Shelby, and Geoffrey W. Burr. Journal of Applied Physics 124, 151901 (2018). doi:10.1063/1.5042462

Research Staff Member, IBM Research

More AI Hardware stories

IBM Launches Research Collaboration Center to Drive Next-Generation AI Hardware

The IBM Research AI Hardware Center is a global research hub created to develop next-generation AI hardware and help achieve AI's true potential.


8-Bit Precision for Training Deep Learning Systems

IBM scientists show, for the first time, successful training of deep neural networks using 8-bit floating point numbers while fully maintaining accuracy.


Dual 8-Bit Breakthroughs Bring AI to the Edge

IBM researchers showcase new 8-bit breakthroughs in hardware that will take AI further than it’s been before: right to the edge.
