AI Hardware

Ultra-Low-Precision Training of Deep Neural Networks

IBM researchers introduce accumulation bit-width scaling, addressing a critical need in ultra-low-precision hardware for training deep neural networks.
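
For context, the sketch below illustrates the problem this work analyzes, not its method: when a running sum is kept in a format that is too narrow, small per-product contributions are eventually swallowed by rounding ("swamping"), which is why accumulation bit-width has to scale with the size of the reduction.

```python
import numpy as np

# Illustration of why accumulation bit-width matters (not the paper's
# method): once a narrow running sum grows large, addends smaller than
# half its rounding step stop contributing at all ("swamping").
rng = np.random.default_rng(0)
n = 100_000
a = rng.uniform(0.009, 0.011, n).astype(np.float32)
b = rng.uniform(0.009, 0.011, n).astype(np.float32)
products = a * b                       # each product is ~1e-4

wide = np.float64(0.0)                 # wide accumulator (reference)
narrow = np.float16(0.0)               # narrow accumulator
for p in products:
    wide += np.float64(p)
    narrow = np.float16(narrow + np.float16(p))

print(f"float64 accumulation: {wide:.4f}")            # ~10.0
print(f"float16 accumulation: {float(narrow):.4f}")   # stalls around 0.25
```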

Highly Accurate Deep Learning Inference with 2-bit Precision

IBM Research shares new results at SysML that push the envelope for deep learning inference, enabling high accuracy down to 2-bit precision.
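
To make "2-bit precision" concrete: two bits give just four representable levels per value. Below is a minimal sketch of a symmetric 4-level uniform quantizer; the published approach is more sophisticated (for example, it tunes its clipping ranges during training), and alpha here is an assumed fixed parameter, not IBM's scheme.

```python
import numpy as np

def quantize_2bit(x, alpha):
    """Symmetric 4-level (2-bit) uniform quantizer: {-a, -a/3, +a/3, +a}.

    Illustrative only; 'alpha' is an assumed fixed clipping value.
    """
    x = np.clip(x, -alpha, alpha)
    code = np.round((x / alpha + 1) * 1.5)   # integer code in {0, 1, 2, 3}
    return (code / 1.5 - 1) * alpha

w = np.array([-0.9, -0.2, 0.05, 0.4, 1.3])
print(quantize_2bit(w, alpha=1.0))           # levels in {-1, -1/3, +1/3, +1}
```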

IBM Launches Research Collaboration Center to Drive Next-Generation AI Hardware

The IBM Research AI Hardware Center is a global research hub dedicated to developing next-generation AI hardware and helping achieve AI's true potential.

8-Bit Precision for Training Deep Learning Systems

IBM scientists show, for the first time, successful training of deep neural networks using 8-bit floating point numbers while fully maintaining accuracy.
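
The paper's FP8 format is reported as 1 sign bit, 5 exponent bits and 2 mantissa bits, combined with techniques such as chunk-based accumulation and stochastic rounding. As a rough sketch, here is simplified nearest-rounding into such a format (subnormals flushed to zero, the other techniques omitted):

```python
import numpy as np

def round_to_fp8(x, exp_bits=5, man_bits=2):
    """Round float32 values to a simulated (1, 5, 2) FP8 format.

    Simplified sketch: round-to-nearest, subnormals flushed to zero.
    The published recipe also relies on chunk-based accumulation and
    stochastic rounding, which are omitted here.
    """
    x = np.asarray(x, dtype=np.float32)
    sign = np.sign(x)
    mag = np.abs(x)

    bias = 2 ** (exp_bits - 1) - 1               # 15 for 5 exponent bits
    max_val = (2 - 2.0 ** -man_bits) * 2.0 ** bias
    min_normal = 2.0 ** (1 - bias)

    # Quantize the mantissa within each value's power-of-two binade.
    e = np.floor(np.log2(np.maximum(mag, min_normal)))
    step = 2.0 ** (e - man_bits)
    q = np.round(mag / step) * step

    q = np.where(mag < min_normal, 0.0, q)       # flush tiny values
    q = np.minimum(q, max_val)                   # saturate at the max
    return sign * q

print(round_to_fp8([0.1, -1.37, 3.0e4, 1e-6]))   # e.g. 0.1 -> 0.09375, 1e-6 -> 0.0
```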

Dual 8-Bit Breakthroughs Bring AI to the Edge

IBM researchers showcase new 8-bit breakthroughs in hardware that will take AI further than it’s been before: right to the edge.

Steering Material Scientists to Better Memory Devices

IBM researchers propose guidelines for novel analog memory devices to enable fast, energy-efficient and accurate AI hardware accelerators.

Keep it Simple: Towards Single-Elemental Phase Change Memory

Scientists publish in Nature Materials a new approach to phase change memory that uses only a single chemical element: antimony.

Capacitor-Based Architecture for AI Hardware Accelerators

A capacitor-based cross-point array for analog neural networks could improve deep learning computations by orders of magnitude.
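
The general idea behind any analog cross-point array: weights live at the grid intersections as analog quantities (here, capacitor charge), inputs are applied as voltages, and an entire matrix-vector product is read out in one parallel step. A hypothetical toy model, with an assumed 5% programming error standing in for real device nonidealities:

```python
import numpy as np

# Illustrative only: a cross-point array computes y = G @ v in one
# step, with weights stored as device conductances (or here, charges
# on capacitors) and inputs applied as voltages; column currents sum
# per Kirchhoff's law. The 5% multiplicative noise is an assumption,
# and real arrays have far richer nonidealities.
rng = np.random.default_rng(0)
W = rng.normal(0, 0.5, (4, 8))               # target weights
x = rng.normal(0, 1.0, 8)                    # input activations (voltages)

G = W * (1 + rng.normal(0, 0.05, W.shape))   # imperfectly programmed devices
y_analog = G @ x                             # one "read" = full matvec

print("ideal :", W @ x)
print("analog:", y_analog)
```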

Novel Synaptic Architecture for Brain-Inspired Computing

IBM scientists developed an artificial synaptic architecture, a significant step towards large-scale and energy-efficient neuromorphic computing technology.

Unlocking the Promise of Approximate Computing for On-Chip AI Acceleration

IBM scientists developed a digital accelerator core for AI hardware that uses approximate computing to improve compute efficiency.
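
The post doesn't detail the core's design, but a typical approximate-computing trade swaps numeric precision for efficiency. A generic, hypothetical illustration using an int8 dot product with per-tensor scales (none of these choices are from the article):

```python
import numpy as np

# Generic reduced-precision illustration (not IBM's core design):
# quantize operands to int8 with one shared scale each, multiply-
# accumulate in int32, then rescale. Accuracy drops slightly while
# the arithmetic becomes far cheaper in hardware.
rng = np.random.default_rng(0)
x = rng.normal(0, 1, 256).astype(np.float32)
w = rng.normal(0, 1, 256).astype(np.float32)

sx = np.max(np.abs(x)) / 127.0               # per-tensor scale (assumed)
sw = np.max(np.abs(w)) / 127.0
qx = np.round(x / sx).astype(np.int8)
qw = np.round(w / sw).astype(np.int8)

approx = np.dot(qx.astype(np.int32), qw.astype(np.int32)) * sx * sw
print(f"float32 dot: {np.dot(x, w):.3f}")
print(f"int8 dot   : {approx:.3f}")
```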

Machine Learning for Analog Accelerators

A machine learning technique for evaluating materials used to make analog accelerators, whose lower power consumption and faster speed can help drive deep learning forward.

We’ve Reached the Summit

Introducing the world’s smartest, most powerful supercomputer

In 2014, the US Department of Energy (DoE) kicked off a multi-year collaboration between Oak Ridge National Laboratory (ORNL), Argonne National Laboratory (ANL) and Lawrence Livermore National Laboratory (LLNL) called CORAL, the next major phase in the DoE’s scientific computing roadmap and path to exascale computing. They selected […]
