
Deep learning inference possible in embedded systems thanks to TrueNorth

Scientists at IBM Research – Almaden have demonstrated that the TrueNorth brain-inspired computer chip, with its 1 million neurons and 256 million synapses, can efficiently implement inference with deep networks that approach state-of-the-art classification accuracy on several vision and speech datasets. This opens up the possibility of embedding intelligence across the entire computing stack, from the Internet of Things to smartphones, robotics, cars, cloud computing, and even supercomputing.

TrueNorth data set samples.

The novel architecture of the TrueNorth processor can classify image data at 1,200 to 2,600 frames per second while using a mere 25 to 275 mW, an efficiency of more than 6,000 frames per second per Watt. Like the kung fu master in the movies who simultaneously fends off assaults from many opponents, the processor can detect patterns in real time from 50 to 100 cameras at once – each with 32×32 color pixels and streaming at the standard TV rate of 24 frames per second – while running on a smartphone battery for days without recharging.
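To put those figures in perspective, here is a back-of-the-envelope check of the arithmetic, using only the numbers quoted above (an illustration, not a benchmark of the chip):

```python
# Back-of-the-envelope check of the throughput and efficiency figures quoted above.
fps_low, fps_high = 1200, 2600        # reported classification throughput (frames/s)
watts_low, watts_high = 0.025, 0.275  # reported power draw (25 mW to 275 mW)

# Even at the high end of the power range, 2,600 fps / 0.275 W is roughly 9,450 fps
# per Watt, comfortably above the 6,000 fps-per-Watt figure cited in the text.
print(fps_high / watts_high)

# Fifty to one hundred cameras streaming at 24 fps each produce 1,200 to 2,400
# frames per second in aggregate -- within the 1,200-2,600 fps the chip can classify.
print(50 * 24, 100 * 24)
```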

The breakthrough was published this week in the peer-reviewed Proceedings of the National Academy of Sciences (PNAS). The essence of the innovation is a new algorithm for training deep networks to run efficiently on a neuromorphic architecture such as TrueNorth by using 1-bit neural spikes, low-precision synapses, and constrained block-wise connectivity – a task previously thought to be difficult, if not impossible. The resulting neuromorphic deep networks can be specified and trained with the same ease of use as contemporary deep learning systems such as MatConvNet, removing the need for data scientists to learn the intricacies of TrueNorth’s architecture.
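For readers curious about what training under such constraints looks like in practice, the sketch below shows the general principle of keeping full-precision "shadow" weights for gradient updates while the forward pass sees 1-bit activations and weights rounded to a few discrete levels. It is only an illustration of that idea in PyTorch; the layer sizes, quantizers, and loss here are made up and do not reproduce the algorithm published in PNAS.

```python
# Minimal sketch of training with low-precision constraints (not IBM's published method):
# full-precision "shadow" weights receive the gradient updates, while the forward pass
# sees binarized activations ("spikes") and weights rounded to a few discrete levels.
import torch
import torch.nn as nn


class BinarizeSTE(torch.autograd.Function):
    """Map activations to 1-bit spike values; pass gradients straight through."""

    @staticmethod
    def forward(ctx, x):
        return (x > 0).float()  # 1 if the neuron "spikes", 0 otherwise

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output  # straight-through estimator


class LowPrecisionLinear(nn.Module):
    """Linear layer whose weights are rounded to {-1, 0, +1} at forward time."""

    def __init__(self, in_features, out_features):
        super().__init__()
        self.weight = nn.Parameter(0.1 * torch.randn(out_features, in_features))

    def forward(self, x):
        # Quantize the shadow weights for the forward pass, but let the gradient
        # update the full-precision copy (the .detach() trick keeps it differentiable).
        w_q = self.weight + (torch.round(self.weight.clamp(-1, 1)) - self.weight).detach()
        return x @ w_q.t()


# Toy forward/backward pass: 32x32 color images flattened into spike vectors.
layer = LowPrecisionLinear(32 * 32 * 3, 10)
images = torch.randn(8, 32 * 32 * 3)
spikes = BinarizeSTE.apply(images)     # 1-bit activations
logits = layer(spikes)
loss = logits.pow(2).mean()            # placeholder loss, just to show gradients flow
loss.backward()                        # updates land on the full-precision weights
```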

“The goal of brain-inspired computing is to deliver a scalable neural network substrate while approaching fundamental limits of time, space, and energy,” said IBM Fellow Dharmendra Modha, chief scientist, Brain-inspired Computing, IBM Research.

“The new milestone provides a palpable proof-of-concept that the efficiency of brain-inspired computing can be merged with the effectiveness of deep learning, paving the path towards a new generation of cognitive computing spanning mobile, cloud, and supercomputers.”

IBM Fellow Dharmendra Modha with the NS16e

In March of this year, IBM demonstrated and delivered a scale-up neuromorphic system – the NS16e, a 16-chip array of TrueNorth processors – to the Department of Energy’s Lawrence Livermore National Laboratory (LLNL); this configuration is designed for efficiently running large-scale networks that do not fit on a single TrueNorth chip. The NS16e interconnects TrueNorth chips via a built-in chip-to-chip message-passing interface that requires no additional circuitry or firmware, significantly reducing the latency and energy of inter-chip communication.
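A rough way to see why a multi-chip array matters is to count cores: a TrueNorth core connects 256 axons to 256 neurons, and 4,096 cores per chip give the 1 million neurons and 256 million synapses mentioned above, so a layer’s weight matrix has to be tiled into 256×256 blocks, and once the block count exceeds one chip’s cores the network spills onto additional chips. The snippet below is only a counting sketch under those assumptions; the real mapping of networks onto TrueNorth involves many more constraints.

```python
# Counting sketch only: estimate how many cores (and chips) a single fully connected
# layer would occupy if its weight matrix were tiled into 256x256 core-sized blocks.
import math

CORE_SIZE = 256        # axons in, neurons out, per core
CORES_PER_CHIP = 4096  # cores on one TrueNorth chip (4096 x 256 = ~1M neurons)

def cores_and_chips(inputs, outputs):
    blocks = math.ceil(inputs / CORE_SIZE) * math.ceil(outputs / CORE_SIZE)
    return blocks, math.ceil(blocks / CORES_PER_CHIP)

# A hypothetical 16,384 x 16,384 layer tiles into exactly 4,096 blocks -- one full chip;
# anything larger spreads across several chips, as on the 16-chip NS16e.
print(cores_and_chips(16384, 16384))   # -> (4096, 1)
print(cores_and_chips(32768, 32768))   # -> (16384, 4)
```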

LLNL is investigating classification and recognition uses of the system for embedded, low-power applications and for high-speed supercomputing applications. Specifically, LLNL is investigating how to use the system to detect cars in overhead imagery with context, clutter, and occlusion; to detect defects in additive manufacturing; and to supervise complex dynamical simulations, like physics problems in industrial design, to avoid failures.

Today, the TrueNorth development ecosystem includes not only the TrueNorth brain-inspired processor, the novel algorithm for training deep networks, and the scale-up NS16e system, but also a simulator, a programming language, an integrated programming environment, a library of algorithms and applications, firmware, a teaching curriculum, single-chip boards, and scaled-out systems. The resulting ecosystem is in use by more than 130 users at more than 40 universities, government agencies, and national labs on five continents.

The ecosystem was originally developed under the auspices of the Defense Advanced Research Projects Agency’s (DARPA) Systems of Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE) program in collaboration with Cornell University. IBM has been holding a series of training events (at Telluride 2015, at IBM Almaden in August 2015, at Telluride 2016, and at IBM Research – Almaden in May 2016) to make the technology widely available. For more information and regular updates, follow Modha’s blog.

Comments

  1. Claudio Tsuchida says:

    This is most impressive. I had never imagined that processor technology had reached this level of advancement.

Caroline Vespi

Media Relations Lead, IBM Research – Almaden