The IBM Research AI Hardware Center is a global research hub developing next-generation AI hardware to help achieve AI's true potential.
IBM scientists show, for the first time, successful training of deep neural networks using 8-bit floating point numbers while fully maintaining accuracy.
IBM researchers propose guidelines for novel analog memory devices to enable fast, energy-efficient and accurate AI hardware accelerators.
Scientists publish a new approach to phase change memory using only a single chemical element—antimony—in Nature Materials.
A capacitor-based cross-point array for analog neural networks offers potential orders-of-magnitude improvements in deep learning computations.
IBM scientists developed an artificial synaptic architecture, a significant step toward large-scale, energy-efficient neuromorphic computing technology.
IBM scientists developed a digital accelerator core for AI hardware that uses approximate computing to improve compute efficiency.
Introducing the world’s smartest, most powerful supercomputer
In 2014, the US Department of Energy (DoE) kicked off CORAL, a multi-year collaboration between Oak Ridge National Laboratory (ORNL), Argonne National Laboratory (ANL), and Lawrence Livermore National Laboratory (LLNL) — the next major phase in the DoE’s scientific computing roadmap and path to exascale computing. They selected […]
A machine learning technique for evaluating materials used to make analog accelerators, whose lower power consumption and faster speed can drive deep learning.
IBM Scientists Demonstrate Mixed-Precision In-Memory Computing for the First Time; Hybrid Design for AI Hardware
Today, we are entering the era of cognitive computing, which holds great promise for deriving intelligence and knowledge from huge volumes of data. One of the biggest challenges in using these huge volumes of data is the fundamental design of today’s computers, which are based on the von Neumann architecture, requiring data to be shuttled […]