IBM researchers introduce accumulation bit-width scaling, addressing a critical need in ultra-low-precision hardware for training deep neural networks.
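The question behind that work is how wide a partial-sum accumulator must be before "swamping" (small addends rounded away against a large running sum) hurts training. As a rough illustration of the effect only, not the paper's actual analysis, the hypothetical sketch below accumulates a dot product in a simulated reduced-mantissa register and shows the error growing as the accumulator narrows:

```python
import numpy as np

def quantize_fp(x, mantissa_bits):
    """Round x to a float with the given number of mantissa bits
    (a crude simulation of a reduced-precision accumulator)."""
    if x == 0.0:
        return 0.0
    exp = np.floor(np.log2(abs(x)))
    scale = 2.0 ** (exp - mantissa_bits)
    return np.round(x / scale) * scale

def accumulate(products, mantissa_bits):
    """Sum a dot product in a narrow register, rounding the
    partial sum after every addition (where swamping occurs)."""
    acc = 0.0
    for p in products:
        acc = quantize_fp(acc + p, mantissa_bits)
    return acc

rng = np.random.default_rng(0)
a, b = rng.standard_normal(4096), rng.standard_normal(4096)
products = a * b
exact = products.sum()
for m in (23, 10, 5):  # fp32-like, fp16-like, and very narrow accumulators
    approx = accumulate(products, m)
    print(f"mantissa bits {m:2d}: rel. error {abs(approx - exact) / abs(exact):.2e}")
```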
IBM Research shares new results at SysML that push the envelope for deep learning inference, enabling high accuracy down to 2-bit precision.
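At 2 bits, accuracy hinges on choosing the clipping range of weights and activations carefully; IBM's results rely on learned clipping ranges (e.g., PACT). As a generic sketch only, with `clip` as a hand-picked stand-in for a learned range, a symmetric uniform 2-bit quantizer can be written as:

```python
import numpy as np

def quantize_uniform(x, num_bits=2, clip=1.0):
    """Snap x onto 2**num_bits evenly spaced levels in [-clip, clip].
    'clip' stands in for a learned clipping range (e.g., PACT's alpha)."""
    n = 2 ** num_bits - 1                 # number of steps (3 for 2 bits)
    x = np.clip(x, -clip, clip)
    x01 = (x + clip) / (2 * clip)         # map to [0, 1]
    q01 = np.round(x01 * n) / n           # round onto the grid
    return q01 * (2 * clip) - clip        # map back to [-clip, clip]

w = np.array([-0.9, -0.2, 0.05, 0.4, 1.3])
print(quantize_uniform(w))               # -> [-1. -0.333  0.333  0.333  1.]
```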
The IBM Research AI Hardware Center is a global research hub developing next-generation AI hardware to help achieve AI's true potential.
IBM scientists show, for the first time, successful training of deep neural networks using 8-bit floating point numbers while fully maintaining accuracy.
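The format reported in that work is an 8-bit float with 1 sign bit, 5 exponent bits, and 2 mantissa bits; the full training recipe also relies on chunk-based accumulation and stochastic rounding, which are omitted here. As a simulation-only sketch of the number format itself, the helper below rounds values to a 1-5-2 layout:

```python
import numpy as np

def to_fp8(x, exp_bits=5, man_bits=2):
    """Round values to an 8-bit float (1 sign, 5 exponent, 2 mantissa bits).
    Pure float64 simulation of the rounding; not a bit-exact codec."""
    bias = 2 ** (exp_bits - 1) - 1              # 15 for 5 exponent bits
    sign = np.sign(x)
    mag = np.abs(x)
    # Exponent of each value, clamped to the normal range [-14, 15];
    # the tiny floor avoids log2(0).
    exp = np.clip(np.floor(np.log2(np.maximum(mag, 1e-45))),
                  -bias + 1, bias)
    scale = 2.0 ** (exp - man_bits)
    q = np.round(mag / scale) * scale           # keep man_bits fraction bits
    max_val = (2 - 2.0 ** -man_bits) * 2.0 ** bias  # largest finite value
    return sign * np.minimum(q, max_val)        # saturate on overflow

x = np.array([0.0, 0.1, -0.7, 3.14159, 1e6])
print(to_fp8(x))
```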
IBM researchers propose guidelines for novel analog memory devices to enable fast, energy-efficient and accurate AI hardware accelerators.
Scientists publish a new approach to phase change memory using only a single chemical element—antimony—in Nature Materials.
A capacitor-based cross-point array for analog neural networks offers the potential for orders-of-magnitude improvements in deep learning computation.
IBM scientists developed an artificial synaptic architecture, a significant step towards large-scale, energy-efficient neuromorphic computing technology.
IBM scientists developed a digital accelerator core for AI hardware that uses approximate computing to improve compute efficiency.
Introducing the world’s smartest, most powerful supercomputer: In 2014, the US Department of Energy (DoE) kicked off a multi-year collaboration between Oak Ridge National Laboratory (ORNL), Argonne National Laboratory (ANL), and Lawrence Livermore National Laboratory (LLNL) called CORAL, the next major phase in the DoE’s scientific computing roadmap and its path to exascale computing. They selected […]