IBM scientists show, for the first time, successful training of deep neural networks using 8-bit floating-point numbers while fully maintaining accuracy.
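The core idea behind low-precision training is rounding tensor values onto a coarse floating-point grid. The sketch below is purely illustrative and assumes a hypothetical 8-bit format with 1 sign, 5 exponent, and 2 mantissa bits and round-to-nearest; it is not the exact scheme from the IBM work.

```python
import numpy as np

def quantize_fp8(x, exp_bits=5, man_bits=2):
    """Round values to an assumed 8-bit float grid (1 sign, 5 exponent, 2 mantissa bits)."""
    x = np.asarray(x, dtype=np.float64)
    bias = 2 ** (exp_bits - 1) - 1           # exponent bias (15 for 5 exponent bits)
    max_exp = (2 ** exp_bits - 2) - bias     # largest normal exponent
    min_exp = 1 - bias                       # smallest normal exponent
    sign = np.sign(x)
    mag = np.abs(x)
    # binade of each value, clamped to the representable exponent range
    e = np.floor(np.log2(np.where(mag > 0, mag, 1.0)))
    e = np.clip(e, min_exp, max_exp)
    scale = 2.0 ** (e - man_bits)            # spacing of the mantissa grid in that binade
    q = np.round(mag / scale) * scale        # round to the nearest representable magnitude
    # clamp overflow to the largest finite value of the format
    max_val = (2.0 - 2.0 ** (-man_bits)) * 2.0 ** max_exp
    return sign * np.minimum(q, max_val)
```

With only 2 mantissa bits the grid is very coarse (e.g. 1.3 rounds to 1.25), which is why such training schemes pair quantization with techniques like chunk-based accumulation and stochastic rounding to keep accuracy.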
IBM researchers propose guidelines for novel analog memory devices to enable fast, energy-efficient and accurate AI hardware accelerators.
Scientists publish a new approach to phase change memory using only a single chemical element—antimony—in Nature Materials.
A capacitor-based cross-point array for analog neural networks offers potential orders-of-magnitude improvements in deep learning computation.
IBM scientists develop an artificial synaptic architecture, a significant step toward large-scale, energy-efficient neuromorphic computing technology.
IBM scientists present a programmable AI core for multi-domain, on-chip deep learning training and inference at the 2018 VLSI Circuits Symposium.
Analog memory enables accurate, faster, lower-power neural network training: a major step on the path to hardware accelerators for the next AI breakthroughs.
In a paper appearing today in the peer-reviewed journal Nature Electronics, IBM scientists introduce a novel hybrid concept called mixed-precision in-memory computing, which combines a von Neumann machine with a computational memory unit…
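The hybrid concept pairs an imprecise but efficient compute unit with precise digital correction. A minimal sketch of that pattern is iterative refinement for solving a linear system: a noisy inner solve stands in for the analog computational-memory unit (the noise model here is an assumption for illustration), while residuals are computed in full precision on the von Neumann side.

```python
import numpy as np

def noisy_solve(A, b, rng, noise=0.05):
    """Stand-in for the imprecise analog solve: an exact solution
    perturbed by multiplicative noise (illustrative noise model)."""
    x = np.linalg.solve(A, b)
    return x * (1.0 + noise * rng.standard_normal(x.shape))

def mixed_precision_solve(A, b, tol=1e-10, max_iter=100, seed=0):
    """Iterative refinement: inexact inner solves, full-precision residuals."""
    rng = np.random.default_rng(seed)
    x = np.zeros_like(b)
    for _ in range(max_iter):
        r = b - A @ x                       # residual in full precision
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        x = x + noisy_solve(A, r, rng)      # inexact low-precision correction
    return x
```

Each iteration shrinks the error by roughly the noise level, so even a 5% inaccurate inner solver converges to full precision in a handful of steps; this is the design logic that lets imprecise memory devices participate in exact computations.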