Nandakumar is developing systems that learn to perform intelligent tasks, from recognizing words and images to executing higher cognitive functions such as speech recognition and language translation.
S. R. Nandakumar, a graduate student in electrical engineering, has won a coveted IBM Ph.D. fellowship to support his work on computer systems that mimic the architecture of the human brain. He is currently interning at IBM’s Zurich Lab and we had the chance to ask him a few questions.
Q. Last August IBM scientists published a cover paper in Nature Nanotechnology on building artificial neurons and synapses. What was your impression of the paper?
S.R. Nandakumar (SRN): When we talk about neuromorphic engineering, we often focus on building plastic, scalable synaptic devices. The neurons are by default assumed to be built using standard digital or analog CMOS circuits. However, this multi-transistor design could limit the size of the neural networks that can be built on a chip. Therefore, creating neuron behavior from single devices is a major step toward powerful, compact, and energy-efficient neuro-processing engines.
Q. We often talk about imitating the brain. How close do you think we can get to the biology? And can the biology eventually be surpassed?
SRN: The challenge of deciphering the architectural features of the human brain is not trivial. We have a limited knowledge of its operation. By mimicking what is known, we can hope to close some of the missing links. The brain has superior information encoding, architecture, and processing capabilities. The chemical reactions in biological cells are much more involved than in our devices. By developing single electronic devices capable of mimicking key functions of neurons and synapses, we are getting closer to building large neural circuits of similar complexity.
Now, to achieve the processing capabilities of the brain, an exact imitation might not be necessary. The core principles of brain computing can be realized using different data representation formats. However, the brain's way seems the most energy efficient, and nano-scale device based implementations are attempting to close this gap. Nevertheless, I believe engineered systems could eventually exceed the brain's computational capabilities. For example, our brain is bandwidth limited in terms of the range of sensory inputs it can process; hardware need not share such limits.
Q. What applications do you envision for this technology?
SRN: Neuromorphic engineering is, in general, trying to realize the computational power and efficiency of the brain in hardware. With the help of nano-scale devices and technology, we might be able to implement the parallel computing paradigms of the human brain, replacing the need for a central server to process data. It could revolutionize the way humans interact with machines. The digital assistants in our computing devices could become more intelligent and personalized while reducing privacy concerns. Neuro-computing chips implanted in or attached to humans could make our interactions with machines more seamless. Mobile devices could become our doctors and tutors. Intelligent sensors, as part of the IoT, could monitor our environment for us, helping with efficient utilization of resources and routine data processing tasks. Robots will become omnipresent.
Q. You were an intern at the IBM Zurich Lab. Can you talk about your experience?
SRN: The internship at IBM Zurich was challenging and motivating. I worked on implementing learning algorithms on a phase change memory (PCM) array platform. It was valuable exposure to the challenges that arise when algorithms meet real physical hardware. At IBM, I was given the freedom to work independently and the opportunity to discuss ideas with experts. I had a great, supportive mentor, Abu Sebastian, who helped develop ideas into practical applications. The environment is encouraging and gives you lots of ideas for future research.
Q. Can you provide a brief teaser about what you plan to present at the 2017 Device Research Conference?
SRN: We developed a compact model for PCM pulse programming behavior, which captures its stochastic conductance evolution. We then used it to analyze programming strategies for supervised learning. The results could have an impact on the realization of spike-based computing hardware.
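To give a flavor of what such a compact model captures, here is a toy sketch of stochastic, state-dependent conductance updates in a PCM array under repeated SET pulses. All parameter values, function names, and the specific saturating form of the update are illustrative assumptions for this sketch, not the model presented at the conference:

```python
import numpy as np

rng = np.random.default_rng(0)

def apply_set_pulse(g, g_max=25.0, mu0=1.0, sigma=0.3):
    """Apply one SET pulse to an array of PCM conductances (in uS).

    The mean conductance increment shrinks as each device approaches
    g_max (a simple saturating nonlinearity), and every device adds
    independent Gaussian variability on top of that mean. Both the
    functional form and the numbers are hypothetical placeholders.
    """
    mean_dg = mu0 * (1.0 - g / g_max)            # state-dependent mean update
    dg = mean_dg + sigma * rng.standard_normal(g.shape)  # per-device noise
    return np.clip(g + dg, 0.0, g_max)           # conductance stays physical

# Simulate 20 programming pulses on 1000 devices and track the mean.
g = np.zeros(1000)
trajectory = []
for _ in range(20):
    g = apply_set_pulse(g)
    trajectory.append(g.mean())
```

A model of this kind lets one simulate, in software, how a training algorithm would behave on imperfect analog hardware: the device-to-device spread and the saturation of the update are exactly the non-idealities a programming strategy for supervised learning has to cope with.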