Naigang Wang

Ultra-Low-Precision Training of Deep Neural Networks

IBM researchers introduce accumulation bit-width scaling, an analytical method for predicting the minimum accumulator precision needed when training deep neural networks on ultra-low-precision hardware.
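The question behind that work is how many bits a partial-sum accumulator needs before long dot products start "swamping" small terms. As a rough illustration (not the paper's analytical method), the Python sketch below rounds every partial sum to a narrow mantissa and estimates the variance retention ratio (VRR), the paper's measure of how much of the exact sum's variance survives reduced-precision accumulation. The helper names, the 5-bit mantissa setting, and the Monte Carlo setup are all assumptions of this sketch.

```python
import numpy as np

# Hypothetical helpers (not from the paper): model a floating-point
# accumulator whose significand is limited to `m_bits` explicit mantissa bits.
def round_to_mantissa(x, m_bits):
    """Round x to the nearest float with m_bits explicit mantissa bits."""
    mant, exp = np.frexp(x)              # x = mant * 2**exp, mant in [0.5, 1)
    scale = 2.0 ** (m_bits + 1)          # +1 for the implicit leading bit
    return np.ldexp(np.round(mant * scale) / scale, exp)

def low_precision_dot(a, b, m_bits):
    """Dot product with every partial sum rounded to the narrow accumulator."""
    acc = 0.0
    for ai, bi in zip(a, b):
        acc = round_to_mantissa(acc + ai * bi, m_bits)
    return acc

# Monte Carlo estimate of the VRR: variance of the reduced-precision result
# relative to an exact accumulation. At a fixed 5-bit mantissa, longer
# accumulations retain less variance because small terms get swamped.
rng = np.random.default_rng(0)
for n in (100, 1_000, 10_000):
    exact, lowp = [], []
    for _ in range(100):
        a = rng.standard_normal(n)
        b = rng.standard_normal(n)
        exact.append(a @ b)
        lowp.append(low_precision_dot(a, b, m_bits=5))
    print(f"n={n:6d}  VRR ~ {np.var(lowp) / np.var(exact):.3f}")
```

At a fixed accumulator width, the retained variance falls as the accumulation length grows, which is the effect the paper's bit-width scaling rule is designed to prevent.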

8-Bit Precision for Training Deep Learning Systems

IBM scientists show, for the first time, successful training of deep neural networks using 8-bit floating-point numbers while fully maintaining model accuracy.
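For a concrete sense of what an 8-bit floating-point number is, the sketch below projects values onto a 1-5-2 grid (1 sign, 5 exponent, 2 mantissa bits), the FP8 format reported for this work. The function is illustrative only: it assumes an exponent bias of 15, rounds to nearest, and flushes subnormals to zero, whereas the full training recipe also depends on 16-bit chunk-based accumulation and stochastic rounding, which are omitted here.

```python
import numpy as np

def quantize_fp8_152(x):
    """Project values onto a 1-5-2 FP8 grid: 1 sign bit, 5 exponent bits,
    2 mantissa bits. Simplified sketch: round-to-nearest, flush subnormals,
    saturate at the largest representable magnitude."""
    x = np.asarray(x, dtype=np.float64)
    sign = np.sign(x)
    mag = np.abs(x)
    mant, exp = np.frexp(mag)            # mag = mant * 2**exp, mant in [0.5, 1)
    mant = np.round(mant * 8.0) / 8.0    # keep 2 explicit mantissa bits (+ implicit bit)
    q = np.ldexp(mant, exp)
    q = np.where(mag < 2.0 ** -14, 0.0, q)   # assumed bias of 15: flush tiny values
    q = np.minimum(q, 1.75 * 2.0 ** 15)      # saturate at the assumed max normal
    return sign * q

# Example: weights snap onto the coarse FP8 grid.
w = np.random.default_rng(1).standard_normal(5)
print(w)
print(quantize_fp8_152(w))
```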
