At NeurIPS 2019, IBM Research continues to advance its 8-bit training platform to improve performance and maintain accuracy for the most challenging emerging deep learning models.
Researchers from the IBM AI Hardware Center will showcase at IEDM and NeurIPS new analog devices, algorithmic and architectural solutions, a novel model training technique, and a full custom design.
After uncovering a new Nasca Line formation with IBM Watson Machine Learning Accelerator on IBM Power Systems, Yamagata University will deploy IBM PAIRS in hopes of making further AI-driven discoveries.
The fourth-quarter issue of the IBM Journal of Research & Development is dedicated to the exploration and deployment of hardware for AI systems. It contains 10 contributions from leading authorities in the field, summarizing the latest state of the art and sharing new research results.
At the 2019 VLSI Symposia, IBM researchers will present three papers that offer novel solutions for AI computing based on analog devices.
IBM researchers introduce accumulation bit-width scaling, addressing a critical need in ultra-low-precision hardware for training deep neural networks.
IBM Research shares new results at SysML that push the envelope for deep learning inference, enabling high accuracy down to 2-bit precision.
The IBM Research AI Hardware Center is a global research hub developing next-generation AI hardware to help achieve AI's true potential.
IBM scientists show, for the first time, successful training of deep neural networks using 8-bit floating point numbers while fully maintaining accuracy.
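To make the idea behind 8-bit floating-point training concrete, here is a minimal sketch of simulated FP8 quantization: each value's mantissa is rounded to a few bits and its exponent range is clamped, mimicking a narrow sign/exponent/mantissa format in ordinary floating point. The format parameters and helper name are illustrative assumptions, not IBM's actual implementation.

```python
import numpy as np

def quantize_fp8(x, exp_bits=5, man_bits=2):
    """Simulate a low-precision float format (sign/exponent/mantissa).

    Rounds each value's mantissa to `man_bits` bits and clamps the
    exponent range implied by `exp_bits`. Illustrative sketch only;
    the real hardware format and rounding details may differ.
    """
    x = np.asarray(x, dtype=np.float64)
    sign = np.sign(x)
    mag = np.abs(x)
    out = np.zeros_like(mag)
    nz = mag > 0
    e = np.floor(np.log2(mag[nz]))            # unbiased exponent of each value
    e_min = -(2 ** (exp_bits - 1)) + 2        # assumed exponent range
    e_max = 2 ** (exp_bits - 1) - 1
    e = np.clip(e, e_min, e_max)
    step = 2.0 ** (e - man_bits)              # quantization step at this exponent
    out[nz] = np.round(mag[nz] / step) * step
    return sign * out
```

With 2 mantissa bits, values between 1 and 2 snap to a grid of spacing 0.25, which is why careful techniques (chunk-based accumulation, stochastic rounding) are needed to keep training accuracy at such low precision.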
IBM researchers propose guidelines for novel analog memory devices to enable fast, energy-efficient and accurate AI hardware accelerators.