IBM Research-Tokyo
Mimicking Neurons With Math
September 16, 2015 | Written by: IBM Research Editorial Staff
Categorized: IBM Research-Tokyo
Dr. Takayuki Osogami
Artificial neural networks have long been studied with the hope of achieving a machine with the human capability to learn. Today’s attempts at artificial neural networks are built upon Hebb’s rule, which Dr. Donald Hebb proposed in 1949 to describe how neurons adjust the strength of their connections. Since Hebb, other “rules” of neural learning have been introduced to refine Hebb’s rule, such as spike-timing dependent plasticity (STDP). All of this helps us understand our brains, but it makes developing artificial neurons even more challenging.
A biological neural network is too complex to map exactly onto an artificial neural network. But IBM mathematician Takayuki Osogami and his team at IBM Research-Tokyo might have figured it out by developing artificial neurons that mathematically mimic STDP to learn words, images, and even music. Takayuki’s mathematical neurons form a new artificial neural network, a dynamic Boltzmann machine (DyBM), that can learn information from multiple contexts through training.
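To give a flavor of the idea, here is a minimal illustrative sketch of an STDP-like update, where each synapse keeps a decaying “eligibility trace” of recent presynaptic spikes, so a connection strengthens when its presynaptic neuron tends to fire shortly before the postsynaptic one. This is not the DyBM’s actual learning rule from the paper; the decay and learning-rate values are assumed for illustration.

```python
import numpy as np

n = 7          # number of neurons
decay = 0.8    # trace decay per time step (assumed value)
lr = 0.05      # learning rate (assumed value)

weights = np.zeros((n, n))   # weights[i, j]: synapse from neuron i to neuron j
trace = np.zeros(n)          # eligibility trace per presynaptic neuron

def step(spikes):
    """Apply one STDP-like update for a binary spike vector."""
    global trace
    # Potentiate synapses into firing neurons in proportion to the
    # presynaptic trace (recent firing), with a small depression offset.
    weights[:, spikes == 1] += lr * (trace[:, None] - 0.1)
    # Update traces after learning: decay, then add the new spikes.
    trace = decay * trace + spikes

# Feed a repeating pattern: neuron 0 fires, then neuron 1, then silence.
a = np.array([1, 0, 0, 0, 0, 0, 0])
b = np.array([0, 1, 0, 0, 0, 0, 0])
z = np.zeros(n)
for _ in range(200):
    step(a); step(b); step(z)

# Because neuron 0 reliably fires just before neuron 1, the synapse
# 0 -> 1 ends up stronger than the reverse synapse 1 -> 0.
print(weights[0, 1] > weights[1, 0])
```

The asymmetry between the two directions of the same neuron pair is the signature of spike-timing dependence: order of firing, not just co-activity, shapes the weights.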
The team taught seven artificial neurons the word “SCIENCE” (one artificial neuron per bit) in the form of a bitmap image. In that image, the “1s” equate to the lines making up the letters, while the “0s” translate to the white space around them.
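The per-bit encoding described above can be sketched as follows: each letter is a small bitmap, and the seven neurons each take one row (bit) of the current column, so a word becomes a sequence of 7-bit vectors presented over time. The 5×7 glyph below is made up for illustration and is not the paper’s actual bitmap; only “S” is defined here.

```python
# Hypothetical 5x7 glyph for illustration; the real bitmaps differ.
GLYPHS = {
    "S": ["01111",
          "10000",
          "10000",
          "01110",
          "00001",
          "00001",
          "11110"],
    # ... remaining letters omitted for brevity
}

def word_to_columns(word):
    """Flatten a word's glyphs into a sequence of 7-bit column vectors."""
    columns = []
    for letter in word:
        rows = GLYPHS[letter]
        for c in range(len(rows[0])):
            # One bit per neuron: neuron r reads row r of the bitmap.
            columns.append([int(rows[r][c]) for r in range(7)])
        columns.append([0] * 7)   # blank column separating letters
    return columns

seq = word_to_columns("S")
print(len(seq))   # 5 glyph columns plus 1 blank column
```

Each 7-bit column is one time step of input, so the seven neurons jointly “read” the word left to right.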
Figure 1: The DyBM successfully learns two target sequences and retrieves a particular sequence when an associated cue is presented. (Credit: Scientific Reports)
More neurons. More memories.
Figure 2: The DyBM learned the target sequence of human evolution. (Credit: Scientific Reports)
Images and text are one thing. But neurons encompass all senses. So, Takayuki’s team put 12 of their artificial neurons to work learning music. Using a simplified version of the German folk song Ich bin ein Musikante, each neuron was assigned to one of the 12 notes (Fa, So, Ra, Ti, Do, Re, Mi, Fa, So, Ra, Ti, Do). After 900,000 training sessions, the neurons learned the sequential patterns of tones to the point of being able to generate a simplified version of the song.
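The per-note assignment can be sketched in the same way as the bitmap: with twelve neurons, one per note, a melody becomes a sequence of one-hot binary vectors learned step by step. The note names follow the solfège listed above; since that list spans two octaves, the primes distinguishing the repeated names are an assumption, and the short melody here is made up for illustration.

```python
# Twelve neurons, one per note; primes mark the assumed upper octave.
NOTES = ["Fa", "So", "Ra", "Ti", "Do", "Re", "Mi",
         "Fa'", "So'", "Ra'", "Ti'", "Do'"]
INDEX = {note: i for i, note in enumerate(NOTES)}

def melody_to_vectors(melody):
    """Encode a note sequence as one-hot vectors over the 12 neurons."""
    vectors = []
    for note in melody:
        v = [0] * len(NOTES)
        v[INDEX[note]] = 1   # exactly one neuron fires per time step
        vectors.append(v)
    return vectors

vecs = melody_to_vectors(["Do", "Re", "Mi"])
print(vecs[0])
```

Training on such a sequence lets the network learn which note tends to follow which, so that after training it can regenerate the melody one time step at a time.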
Figure 3: The DyBM learned the target music. (Credit: Scientific Reports)
The scientific paper “Seven neurons memorizing sequences of alphabetical images via spike-timing dependent plasticity” by Takayuki Osogami and Makoto Otsuka appears in Scientific Reports of the Nature Publishing Group on September 16, 2015, DOI: 10.1038/SREP14149.