AI Hardware

IBM Launches Research Collaboration Center to Drive Next-Generation AI Hardware

The IBM Research AI Hardware Center is a global research hub dedicated to developing next-generation AI hardware and helping achieve AI's true potential.


8-Bit Precision for Training Deep Learning Systems

IBM scientists show, for the first time, successful training of deep neural networks using 8-bit floating point numbers while fully maintaining accuracy.

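The entry announces the result; for readers wondering what "training with 8-bit floating point" means mechanically, here is a minimal sketch that emulates a hypothetical 1-5-2 (sign/exponent/mantissa) 8-bit format by rounding full-precision values, the usual way such formats are simulated in software. The format split, the function name, and the choice to keep accumulation at higher precision are illustrative assumptions, not IBM's exact recipe.

```python
import numpy as np

def quantize_fp8(x, exp_bits=5, man_bits=2):
    """Round values to a simulated 8-bit float with 1 sign bit,
    `exp_bits` exponent bits, and `man_bits` mantissa bits.
    Illustrative emulation only; subnormals are ignored for brevity."""
    x = np.asarray(x, dtype=np.float64)
    bias = 2 ** (exp_bits - 1) - 1
    max_val = (2 - 2.0 ** -man_bits) * 2.0 ** bias   # largest finite value
    mag = np.abs(x)
    safe = np.where(mag > 0, mag, 1.0)               # avoid log2(0)
    exp = np.clip(np.floor(np.log2(safe)), 1 - bias, bias)
    step = 2.0 ** (exp - man_bits)                   # value spacing at this exponent
    q = np.sign(x) * np.minimum(np.round(mag / step) * step, max_val)
    return np.where(mag == 0, 0.0, q)

# Toy use: weights and activations live in the 8-bit format, while the
# dot-product accumulation below stays at higher precision.
rng = np.random.default_rng(0)
w = quantize_fp8(rng.normal(size=(4, 8)))
x = quantize_fp8(rng.normal(size=8))
print(w @ x)
```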

Dual 8-Bit Breakthroughs Bring AI to the Edge

IBM researchers showcase new 8-bit breakthroughs in hardware that will take AI further than it’s been before: right to the edge.

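Bringing 8-bit AI to the edge generally means representing tensors as small integers plus a scale factor. As a generic illustration only, here is a textbook affine-quantization scheme; it is not the specific analog or digital technique the article showcases.

```python
import numpy as np

def quantize_int8(x):
    """Affine-quantize a float array to int8 plus (scale, zero_point).
    Textbook scheme for illustration, not the article's method."""
    lo, hi = float(x.min()), float(x.max())
    scale = (hi - lo) / 255.0 or 1.0                 # guard constant input
    zero_point = int(round(-128 - lo / scale))       # maps lo -> -128
    q = np.clip(np.round(x / scale) + zero_point, -128, 127).astype(np.int8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    return (q.astype(np.float32) - zero_point) * scale

x = np.random.default_rng(1).normal(size=1000).astype(np.float32)
q, s, z = quantize_int8(x)
err = np.abs(dequantize(q, s, z) - x).max()
print(f"max round-trip error: {err:.4f}")            # small vs. x's range
```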

Steering Material Scientists to Better Memory Devices

IBM researchers propose guidelines for novel analog memory devices to enable fast, energy-efficient and accurate AI hardware accelerators.


Keep it Simple: Towards Single-Elemental Phase Change Memory

Scientists publish a new approach to phase change memory using only a single chemical element—antimony—in Nature Materials.


Capacitor-Based Architecture for AI Hardware Accelerators

A capacitor-based cross-point array for analog neural networks could offer orders-of-magnitude improvements in deep learning computations.

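The draw of a cross-point array is that it computes a whole matrix-vector product in one analog step: stored charges set each cell's effective weight, input voltages drive the lines, and the currents summing on each output line are the dot products. A minimal numerical sketch of that idea follows, with made-up sizes and noise levels.

```python
import numpy as np

rng = np.random.default_rng(2)

G = rng.uniform(0.0, 1.0, size=(4, 8))   # weights stored as cell charges
v = rng.uniform(-1.0, 1.0, size=8)       # input voltages driving the array

# Currents summing on each output line realize i = G @ v in a single
# analog step; real devices add noise, modeled here as a perturbation.
i_ideal = G @ v
i_analog = (G + rng.normal(0.0, 0.01, G.shape)) @ v

print("ideal: ", np.round(i_ideal, 3))
print("analog:", np.round(i_analog, 3))
```

In a digital processor the same product costs one multiply-accumulate per weight; the analog array performs them all at once, which is where the projected efficiency gains come from.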

Novel Synaptic Architecture for Brain Inspired Computing

IBM scientists have developed an artificial synaptic architecture, a significant step towards large-scale and energy-efficient neuromorphic computing technology.

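One recurring idea in synaptic hardware is to build each synapse from several imperfect memory devices so that their flaws average out, programming only one device per update. The toy sketch below illustrates that pattern; the device count, noise model, and round-robin selection policy are assumptions for illustration, not the architecture in the article.

```python
import numpy as np

rng = np.random.default_rng(3)

class MultiDeviceSynapse:
    """Toy synapse whose effective weight is the sum of several noisy
    device conductances; each update programs one device in turn."""

    def __init__(self, n_devices=4):
        self.g = np.zeros(n_devices)   # per-device conductances
        self.turn = 0                  # round-robin device selector

    @property
    def weight(self):
        return self.g.sum()

    def update(self, delta):
        # Program a single device with a noisy, granular step, as real
        # analog memory would; the other devices are left untouched.
        step = delta + rng.normal(0.0, 0.05 * abs(delta))
        self.g[self.turn] = np.clip(self.g[self.turn] + step, 0.0, 1.0)
        self.turn = (self.turn + 1) % len(self.g)

syn = MultiDeviceSynapse()
for _ in range(8):
    syn.update(0.1)                    # eight potentiation pulses
print("effective weight:", round(syn.weight, 3))
```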

Unlocking the Promise of Approximate Computing for On-Chip AI Acceleration

IBM scientists developed a digital accelerator core for AI hardware that uses approximate computing to improve compute efficiency.

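Approximate computing trades a little numerical accuracy for large gains in efficiency, and error-tolerant workloads like deep learning absorb that trade gracefully. A quick sketch of the accuracy side of the bargain, with float16 standing in for a reduced-precision datapath (a generic illustration, not the article's accelerator core):

```python
import numpy as np

rng = np.random.default_rng(4)
a = rng.normal(size=10_000)
b = rng.normal(size=10_000)

exact = np.dot(a, b)                       # float64 reference

# Round the operands to float16, a stand-in for a reduced-precision
# datapath, then measure how much accuracy the approximation costs.
a16 = a.astype(np.float16).astype(np.float64)
b16 = b.astype(np.float16).astype(np.float64)
approx = np.dot(a16, b16)

print(f"relative error: {abs(approx - exact) / abs(exact):.2e}")
```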

Machine Learning for Analog Accelerators

A machine learning technique for evaluating the materials used to make analog accelerators, whose lower power consumption and higher speed can drive deep learning.


We’ve Reached the Summit

Introducing the world's smartest, most powerful supercomputer. In 2014, the US Department of Energy (DoE) kicked off a multi-year collaboration between Oak Ridge National Laboratory (ORNL), Argonne National Laboratory (ANL), and Lawrence Livermore National Laboratory (LLNL) called CORAL, the next major phase in the DoE's scientific computing roadmap and path to exascale computing. They selected […]


The Future of AI Needs Better Compute: Hardware Accelerators Based on Analog Memory Devices

Why the future of AI needs better compute, and how hardware accelerators based on analog memory devices could provide it.


IBM Scientists Demonstrate Mixed-Precision In-Memory Computing for the First Time; Hybrid Design for AI Hardware

Today, we are entering the era of cognitive computing, which holds great promise for deriving intelligence and knowledge from huge volumes of data. One of the biggest challenges in using these huge volumes of data lies in the fundamental design of today's computers, which are based on the von Neumann architecture and require data to be shuttled […]

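The mixed-precision idea is to let an imprecise but efficient compute unit (analog in-memory hardware in the article) do the heavy lifting, while a conventional digital unit periodically computes exact residuals and corrects the answer. The sketch below mimics that division of labor on a linear system Ax = b, with injected noise standing in for analog imprecision; the problem size, noise level, and inner Jacobi solver are illustrative assumptions, not the article's phase-change-memory system.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 8
A = 4.0 * np.eye(n) + rng.normal(0.0, 0.1, (n, n))   # well-conditioned system
b = rng.normal(size=n)

def noisy_matvec(M, v):
    """Stand-in for an imprecise in-memory matrix-vector product."""
    return (M + rng.normal(0.0, 0.02, M.shape)) @ v

def approx_solve(A, r, sweeps=5):
    """Low-precision inner solver: Jacobi sweeps built on noisy matvecs."""
    z = np.zeros_like(r)
    d = np.diag(A)
    for _ in range(sweeps):
        z = z + (r - noisy_matvec(A, z)) / d
    return z

# High-precision outer loop: exact residuals steer the noisy inner solver.
x = np.zeros(n)
for _ in range(20):
    r = b - A @ x                  # exact residual in the digital unit
    x = x + approx_solve(A, r)

print("final residual norm:", np.linalg.norm(b - A @ x))
```

The outer loop restores full accuracy even though every inner multiplication is noisy, which is the essence of the hybrid design.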