At the 2019 VLSI Symposia, IBM researchers will present three papers offering novel solutions for AI computing based on analog devices.
IBM researchers introduce accumulation bit-width scaling, addressing a critical need in ultra-low-precision hardware for training deep neural networks.
The IBM Research AI Hardware Center is a global research hub developing next-generation AI hardware to help achieve AI's true potential.
IBM scientists show, for the first time, successful training of deep neural networks using 8-bit floating point numbers while fully maintaining accuracy.
IBM researchers propose guidelines for novel analog memory devices to enable fast, energy-efficient and accurate AI hardware accelerators.
In Nature Materials, scientists publish a new approach to phase-change memory that uses only a single chemical element, antimony.
A capacitor-based cross-point array for analog neural networks offers potential orders-of-magnitude improvements in deep learning computations.
IBM scientists developed an artificial synaptic architecture, a significant step toward large-scale, energy-efficient neuromorphic computing technology.
IBM scientists developed a digital accelerator core for AI hardware that uses approximate computing to improve compute efficiency.
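Several of these items center on reduced-precision arithmetic. As a rough illustration only, and not IBM's actual method, the sketch below rounds a value to a hypothetical 8-bit floating-point format with a 5-bit exponent and 2-bit mantissa (one published FP8 layout); the function name, round-to-nearest behavior, and the omission of subnormals are all simplifying assumptions.

```python
import math

def quantize_fp8(x, exp_bits=5, man_bits=2):
    """Round x to the nearest value representable in a simple
    sign/exponent/mantissa format (no subnormals; illustration only)."""
    if x == 0.0:
        return 0.0
    sign = -1.0 if x < 0 else 1.0
    # Exponent of the nearest power of two at or below |x|.
    e = math.floor(math.log2(abs(x)))
    bias = 2 ** (exp_bits - 1) - 1
    e = max(min(e, bias), -bias + 1)   # clamp exponent to the format's range
    # Spacing between adjacent representable values at this exponent.
    scale = 2.0 ** (e - man_bits)
    return sign * round(abs(x) / scale) * scale

# For example, 0.1 is not representable and rounds to 0.09375 (1.10 x 2^-4).
```

Training at such precision is lossy per operation, which is why the accumulation bit-width and scaling techniques mentioned above matter for preserving end-to-end accuracy.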