IBM researchers introduce accumulation bit-width scaling, addressing a critical need in ultra-low-precision hardware for training deep neural networks.
IBM Research shares new results at SysML that push the envelope for deep learning inference, enabling high accuracy down to 2-bit precision.
The IBM Research AI Hardware Center is a global research hub developing next-generation AI hardware to help achieve AI's true potential.
IBM scientists show, for the first time, successful training of deep neural networks using 8-bit floating point numbers while fully maintaining accuracy.
IBM researchers propose guidelines for novel analog memory devices to enable fast, energy-efficient and accurate AI hardware accelerators.
A capacitor-based cross-point array for analog neural networks offers potential orders of magnitude improvements in deep learning computations.
IBM scientists develop a digital accelerator core for AI hardware that uses approximate computing to improve compute efficiency.
I recently participated in a panel at Applied Materials’ 2017 Analyst Day to talk about artificial intelligence (AI). Yes, a materials company asked me, an executive overseeing semiconductor research, to join other technologists to give our view of AI – demonstrating how interest in AI has permeated all aspects of the IT industry! To lead […]