Since our bio-inspired machine learning technology, the Dynamic Boltzmann Machine (DyBM), debuted in the fall of 2015, we have received many comments on the music demo and the human-evolution image we used to show how an artificial neural network learns different topics in different formats. Many developers expressed interest in using the code to let DyBM learn other music or animations. This request for more "openness" made us wonder whether we should dramatically change DyBM's bio-oriented design.
DyBM learns patterns the way neurons do: at each moment of a song or an image sequence, it adjusts its internal parameters. The more data DyBM is fed, the better it masters what it is trying to understand. For example, one of our first music experiments two years ago had its neurons learn a simplified version of the German folk song "Ich bin ein Musikante," which is short and simple, with little variation in its chords.
In the early image example, DyBM successfully learned how to correctly order the pieces of an image of human evolution, from apes to Homo sapiens. It took just 19 seconds for the system’s 20 neurons to learn the correct sequence of the image, as mapped out below:
To take neuron learning to a higher level, we added elements such as hidden neurons to DyBM. Hidden neurons sit in a layer between the input and output layers: the neurons in the input layer receive patterns from the environment, the neurons in the output layer emit patterns such as predictions, and the hidden neurons help learn the complex relations between input and output. More specifically, they generate features from the input in a way that helps better predict the future. With these additions, DyBM can now better learn the rules for generating complex, longer songs that use a variety of chords and rhythms, like classical music played by an orchestra.
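To illustrate the idea in the simplest possible terms (this is a toy sketch of a hidden layer between input and output, not the DyBM implementation itself; all names and dimensions here are our own assumptions), a small network can be trained online to predict the next step of a sequence from the current one, with the hidden units building features of the current pattern:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: input layer -> hidden layer -> output layer.
# The hidden units turn the current input pattern into features
# that help predict the next observation in the sequence.
n_in, n_hidden = 3, 8
W1 = rng.normal(scale=0.5, size=(n_hidden, n_in))  # input -> hidden
W2 = rng.normal(scale=0.5, size=(n_in, n_hidden))  # hidden -> output

def predict(x):
    h = np.tanh(W1 @ x)   # hidden features of the current pattern
    return W2 @ h, h      # prediction of the next pattern

# Train online on a toy periodic sequence: at each step, predict the
# next observation, then adjust the weights from the prediction error.
seq = [np.sin(0.3 * t + np.arange(n_in)) for t in range(500)]
lr, errs = 0.05, []
for x, x_next in zip(seq[:-1], seq[1:]):
    y, h = predict(x)
    err = y - x_next
    errs.append(np.mean(err ** 2))
    # Gradient of the squared error, backpropagated through tanh.
    W2 -= lr * np.outer(err, h)
    W1 -= lr * np.outer((W2.T @ err) * (1 - h ** 2), x)
```

The prediction error shrinks as the hidden features adapt to the sequence; the same principle, with a far richer temporal model, underlies the hidden-neuron extension described above.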
DyBM goes open source
So that others could experiment with and advance our research on the bio-inspired DyBM, we removed some of the biological constraints and rewrote the code using matrix operations in Python to improve its learning speed. This lets developers turn the source code into sustainable applications that solve real business problems, such as anomaly detection and prediction in the world of IoT.
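As a rough illustration of what that rewrite means in practice (a hypothetical learning-rule update of our own, not the library's actual code), the same per-synapse update can be written as explicit Python loops or as a single NumPy matrix operation; both compute the identical result, but the matrix form is dramatically faster:

```python
import numpy as np

rng = np.random.default_rng(1)

# A hypothetical learning-rule update: each synapse W[i, j] moves in
# proportion to a postsynaptic error and a presynaptic trace.
n_pre, n_post = 100, 100
trace = rng.normal(size=n_pre)   # per-presynaptic-neuron traces
error = rng.normal(size=n_post)  # per-postsynaptic-neuron errors
lr = 0.01

def update_loops(W):
    """Neuron-by-neuron update written as explicit Python loops."""
    W = W.copy()
    for i in range(n_post):
        for j in range(n_pre):
            W[i, j] += lr * error[i] * trace[j]
    return W

def update_matrix(W):
    """The same update as one matrix (outer-product) operation."""
    return W + lr * np.outer(error, trace)

W0 = rng.normal(size=(n_post, n_pre))
```

Replacing neuron-by-neuron loops with vectorized matrix operations like this is what makes the open-source code fast enough for practical applications.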
Just recently, we introduced this library on GitHub. Now developers can use all of the code resulting from our research on DyBM, and anyone can teach DyBM to learn classical music or decipher animation.
Users can try the code we recently introduced, the new functional DyBM code that turns the IoT into an internet of neurons, and a new DyBM with "hidden neurons" that significantly improves DyBM's prediction precision and reduces errors.
By applying so-called bi-directional learning, which captures complex temporal sequences by learning the relationship between the past and the future as well as in reverse, this model reduces prediction error by up to 90 percent.
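A toy sketch of that idea (our own illustration, not the library's implementation): fit one linear model forward in time and one in reverse on the same sequence, then combine the two directions when predicting the next step:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy linear dynamics: x_{t+1} = R x_t + noise, where R is a rotation.
theta = 0.3
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
X = [np.array([1.0, 0.0])]
for _ in range(400):
    X.append(R @ X[-1] + 0.05 * rng.normal(size=2))
X = np.array(X)
past, future = X[:-1], X[1:]

# Forward model: learn the future from the past.
A, *_ = np.linalg.lstsq(past, future, rcond=None)
# Backward model: learn the past from the future (the reverse direction).
B, *_ = np.linalg.lstsq(future, past, rcond=None)

# Combine the two directions: average the forward prediction with the
# prediction implied by inverting the backward model.
def predict(x):
    return 0.5 * (x @ A + x @ np.linalg.pinv(B))

mse = np.mean((predict(past) - future) ** 2)
mse_persistence = np.mean((past - future) ** 2)  # naive baseline
```

On this toy data the combined predictor easily beats the naive "repeat the last observation" baseline; the bi-directional DyBM exploits the same past-and-future structure in a much richer model.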
Furthermore, developers will find code for the nonlinear DyBM model we developed, which significantly improves on state-of-the-art baseline methods such as vector autoregressive (VAR) models and long short-term memory (LSTM) networks, at a reduced computational cost for online learning.
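For a sense of what online learning of such a baseline looks like (a hypothetical first-order VAR fitted by streaming gradient updates, written by us for illustration, not code from the repository), each new observation updates the model in constant time, with no need to store the history:

```python
import numpy as np

rng = np.random.default_rng(3)

# Streaming (online) learning of a first-order VAR baseline:
# x_{t+1} ≈ A x_t, with A updated one observation at a time, so the
# per-step cost and memory stay constant as data keeps arriving.
d = 4
true_A = 0.8 * np.eye(d) + 0.05 * rng.normal(size=(d, d))  # stable dynamics
A = np.zeros((d, d))
x = rng.normal(size=d)
lr, errs = 0.05, []
for _ in range(2000):
    x_next = true_A @ x + 0.1 * rng.normal(size=d)
    err = A @ x - x_next          # one-step-ahead prediction error
    errs.append(np.mean(err ** 2))
    A -= lr * np.outer(err, x)    # constant-time gradient update
    x = x_next
```

Keeping the per-observation cost fixed like this is what makes online learning attractive for streaming IoT data, and it is the setting in which the nonlinear DyBM is compared against VAR and LSTM baselines.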
Our IBM mathematicians are looking forward to collaborating with other developers through this library on GitHub to explore applications of DyBM, and discover how we can make better use of DyBM in the world of IoT.