
Machine Learning for Analog Accelerators

Deep learning engines can derive knowledge from vast stores of data, but they have an almost unquenchable appetite for compute power that limits their applications. Accelerator chips with lower power requirements and faster speeds are already in development. One type, the analog accelerator, reduces the movement of data during computation to deliver dramatic gains in computational efficiency. Our IBM Research group recently reported in Nature Communications [1] a machine learning technique for evaluating key parameters of the elements used to fabricate these chips. This information will facilitate material characterization and, in turn, the development of accelerators for analog computation, unleashing the full power of deep learning.

Analog computation achieves high energy efficiency by using new materials with unique properties, such as non-volatile memory (NVM) materials. Tying compute power to the physical properties of the components is part of “The Physics of AI” required to drive innovation in computing for AI. The NVM materials act as synapses in cross-point arrays, where they store the neural network weights. Today’s NVM materials are mostly being explored for storage-class memory and embedded memory applications, and they possess neither the switching symmetry nor the analog-like multiplicity of states required to fulfill the potential of the Resistive Processing Unit (RPU) approach [2] for AI analog accelerators. RPUs, first proposed by IBM Research scientists, can potentially accelerate deep neural network computations by orders of magnitude while using much less power. RPU devices store and update weight values locally, minimizing data movement during computation and fully exploiting the locality and parallelism of the algorithms.
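To make the data-locality point concrete, here is a minimal sketch (in Python, with array sizes and conductance values that are purely illustrative) of how a cross-point array performs a vector-matrix multiply in place: the weights live as device conductances, and applying input voltages to the rows yields the output currents on the columns in a single analog step.

```python
# Minimal sketch of analog matrix-vector multiplication in a cross-point
# array. Weights are stored as device conductances G (in siemens); applying
# input voltages v to the rows produces, by Ohm's law and Kirchhoff's current
# law, column currents i = G^T v. The multiply-accumulate happens in place,
# with no weight movement. All sizes and values here are illustrative.
import numpy as np

rng = np.random.default_rng(0)

n_rows, n_cols = 4, 3                          # rows = inputs, cols = outputs
G = rng.uniform(1e-6, 1e-4, (n_rows, n_cols))  # conductances encoding weights
v = rng.uniform(-0.2, 0.2, n_rows)             # input voltages on the rows

# Each column wire sums the currents of its devices in a single step:
i_cols = G.T @ v                               # output currents, one per column

print(i_cols)
```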

In our Nature Communications paper [1], we applied a machine learning algorithm called Gaussian process regression (GPR) to extract key device parameters of NVM elements for analog computation. Specifically, we succeeded in precisely separating signal from noise in analog memory elements with non-linear conductance changes, for both HfO2-based resistive random access memory (ReRAM) and GeSbTe-based phase change memory (PCM). Unlike conventional memory applications, we exploited continuous changes in the switching medium (e.g., the filament configuration for ReRAM, the volume of the crystalline region for PCM) to achieve incremental conductance changes, one of the key requirements for AI analog accelerators. This incremental switching was controlled by electric pulses on a nanosecond time scale, as illustrated in Figure 1.

Figure 1: NVM materials for storing neural network weights in cross-point arrays. Multiple states (the embodiment of neural network weights) can be stored in the NVM-based cross-point by exploiting (a) controlled changes of the filament configuration in ReRAM devices and (b) incremental crystallization of PCM materials in response to electric pulses. The symmetry and variability of these phenomena directly impact the accuracy and stability of the stored weights.
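The kind of pulse-driven trace sketched in Figure 1 can be mimicked with a simple phenomenological model. The sketch below uses a soft-bounds update rule (the step size shrinks as the device approaches its conductance limits) with per-pulse randomness; the rule and every number in it are illustrative assumptions of ours, not the device physics from the paper.

```python
# Hypothetical sketch of incremental, non-linear conductance switching.
# Soft-bounds update: the step saturates near g_max (potentiation) or g_min
# (depression), plus pulse-to-pulse randomness. Illustrative stand-in for
# real ReRAM/PCM dynamics, not the paper's device model.
import numpy as np

rng = np.random.default_rng(1)

g_min, g_max = 1e-6, 1e-4   # conductance bounds (S), illustrative
dw = 0.02                   # nominal fractional step per pulse
sigma = 0.3                 # per-pulse randomness relative to the step

def apply_pulses(g, n_pulses, direction):
    """Apply n_pulses potentiation (+1) or depression (-1) pulses."""
    trace = [g]
    for _ in range(n_pulses):
        # Soft bounds: less headroom means a smaller update.
        headroom = (g_max - g) if direction > 0 else (g - g_min)
        step = direction * dw * headroom * (1 + sigma * rng.standard_normal())
        g = float(np.clip(g + step, g_min, g_max))
        trace.append(g)
    return np.array(trace)

g0 = 2e-5
up = apply_pulses(g0, 100, +1)         # potentiation: conductance rises
down = apply_pulses(up[-1], 100, -1)   # depression: conductance falls
```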

When incremental weight updates are performed on analog NVM devices, the magnitude of each conductance change approaches the level of the material's inherent randomness, so the signal is hidden by the intrinsic noise of the material. We used the GPR-based methodology to separate the noise from the signal, which will significantly accelerate materials characterization and development. Notably, this is accomplished without prior knowledge of the device physics, so the method can be applied to a wide variety of materials systems.
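As a concrete illustration of the idea (not the paper's exact implementation), the following sketch uses scikit-learn's Gaussian process regressor with a smooth RBF kernel plus a WhiteKernel on a synthetic noisy conductance trace: the posterior mean recovers the underlying switching signal, and the fitted WhiteKernel level estimates the device noise.

```python
# GPR-based signal/noise separation on a synthetic conductance trace.
# The RBF component models the smooth switching signal; the WhiteKernel
# component absorbs (and thereby estimates) the per-pulse noise.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(2)

# Synthetic potentiation trace: saturating conductance rise plus noise.
pulses = np.arange(200.0)[:, None]                     # pulse number (feature)
signal = 1.0 - np.exp(-pulses.ravel() / 60.0)          # smooth underlying change
g = signal + 0.05 * rng.standard_normal(signal.size)   # measured, noisy trace

kernel = RBF(length_scale=30.0) + WhiteKernel(noise_level=1e-2)
gpr = GaussianProcessRegressor(kernel=kernel)
gpr.fit(pulses, g)

g_denoised, g_std = gpr.predict(pulses, return_std=True)  # extracted signal
noise_level = gpr.kernel_.k2.noise_level                  # extracted noise power

print(f"estimated noise std: {np.sqrt(noise_level):.3f} (injected: 0.05)")
```

No device-physics model enters anywhere in this fit, which is what makes the approach portable across materials systems.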

For a relatively mature technology, our methodology helps find the optimum input signals that give the best trade-off between switching symmetry and noise for the entire system. In an early exploratory phase, it enables the extraction of switching symmetry and noise from individual devices, expediting the search for suitable materials. Before this method, extracting noise required fabricating many devices with tight device-to-device variability, which is difficult to attain at the early stage when novel material options still need to be screened.
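Once the denoised traces are in hand, switching symmetry can be quantified. The sketch below shows one possible metric, comparing the average potentiation step to the average depression step; this definition and the example traces are hypothetical illustrations, not the figure of merit used in the paper.

```python
# Hypothetical symmetry metric computed from GPR-denoised traces.
import numpy as np

def symmetry_metric(g_up, g_down):
    """Return mean up/down step sizes and an asymmetry score in [0, 1].

    g_up:   denoised conductance trace under potentiation pulses
    g_down: denoised conductance trace under depression pulses
    A score near 0 means up and down updates are well matched (symmetric).
    """
    step_up = np.mean(np.diff(g_up))        # average increase per pulse
    step_down = -np.mean(np.diff(g_down))   # average decrease per pulse
    asym = abs(step_up - step_down) / (step_up + step_down)
    return step_up, step_down, asym

# Illustrative traces: potentiation saturates faster than depression.
n = np.arange(100)
g_up = 1.0 - np.exp(-n / 30.0)
g_down = g_up[-1] * np.exp(-n / 50.0)

print(symmetry_metric(g_up, g_down))
```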

Hence, this work provides a platform for characterizing novel NVM materials, exploiting “The Physics of AI” to drive AI hardware innovation.

[1] N. Gong et al., Signal and noise extraction from analog memory elements for neuromorphic computing, Nat. Commun. 9, 2102 (2018).

[2] T. Gokmen and Y. Vlasov, Acceleration of deep neural network training with resistive cross-point devices: design considerations, Front. Neurosci. 10, 333 (2016).

Takashi Ando, Principal Research Staff Member, IBM Research

Vijay Narayanan, IBM Fellow and Manager, IBM Research