Wisdom from a brain-inspired computing researcher to the class of 2017


Dr. Dharmendra S. Modha is an IBM Fellow and IBM Chief Scientist for Brain-inspired Computing at IBM Research. Following is a transcript of the keynote speech he delivered to the graduating class of the University of California at San Diego Jacobs School of Engineering on June 17, 2017.


Dr. Dharmendra S. Modha is an IBM Fellow and IBM Chief Scientist for Brain-inspired Computing. (Photo Credit: Hita Bambhania-Modha)

Congratulations class of 2017!

I am honored to share this pivotal day in your life with you, your families, and your friends.

And I want to thank Dean Pisano for inviting me here, as well as the distinguished faculty and my UCSD mentors who have all helped shape who I am today.

As you graduate from the Engineering school, there is a blank canvas in front of you. The space of that canvas is the Earth and its vicinity, and the time of the canvas is your individual life span. On this blank canvas, we engineer not only devices, materials, systems, structures, and processes, but also our own careers and lives, so as to manifest strength, utility, and beauty.

The recipe for success, I believe, is three-fold:

  • First, identify external gradients towards your inner purpose;
  • Second, capitalize on inherent opportunities presented by these gradients using your most authentic self;
  • And, lastly, engineer the means for exploiting these gradients while fighting the chief villain in our lives, namely, the 2nd Law of thermodynamics.

First, let us talk about gradients. A gradient, or an imbalance, is simply a difference across a distance. For example, think of differences in temperature, pressure, chemical concentration, voltage, incomes, and so on. Gradients are the sources of opportunity. When Yoda from Star Wars said, “Feel the Force,” he meant, “feel the gradients.”

A water wheel converts the energy gradient of water flowing from high to low into useful work. Similarly, our intent is to harness the external gradients that exist in the society and the universe — social, economic, political, technological, physical gradients — to create beneficial structures and to manifest constructive complexity.

Unlocking and harnessing these gradients requires us to apply the infinite and inexhaustible tools of creativity, awareness, and imagination, leading us to discover and extend the frontiers of mathematics, science, and technology in the process.

In my case, the gradient that led to the notion of brain-inspired computers was the observation of a billion-fold disparity in function, size, energy, and speed between the brain and today’s computers.

Second, let us talk about purpose. Discovery of external frontiers, first and foremost, starts with the internal discovery of our own authentic self.

From this place of inner integrity, we pick problems of universal importance and establish audacious goals to solve them while matching these goals to our specific individual gifts.

We then work backwards from the end goals and chart a course to achieve these goals. As facts change, we never compromise on the destination, but continually revise the path. In any situation, we do not react, but rather we consciously act because there is always room for creative response.

In every moment, in every interaction, in every relationship, we bring all the positivity of our entire existence to bear — and then we do it again and again and again.

To truly win, we put not just our skin in the game. Rather, we put our soul in the game.

While it’s important to strive to succeed at work, it’s equally important to maintain work-life balance and choose an inner state of happiness despite life’s paradoxes and challenges. And, regardless of success or failure, we win, personally, by finishing what we start. By graduating, all of you have demonstrated that you are winners!

Lastly, let us talk about the villain. The 2nd Law of thermodynamics essentially says that if a hot room is connected to a cold room, then over time the temperature difference will disappear. The 2nd Law thus serves to efface all gradients over time, leaving behind increased entropy, random motion, chaos, and disorder.

Left to its own un-engineered devices, the 2nd Law will produce only heat and waste. It is not possible, regrettably, to fight or defy the 2nd Law at a global, macroscopic level, but within the confines of local space and time it is indeed possible to engineer means by which gradients produce useful work. Some engineer had to do the hard work of purposefully inventing and perfecting the waterwheel to exploit the potential energy of water that would otherwise have remained stagnant. The 2nd Law will have its way eventually.

The waterwheel, for example, requires maintenance to keep running and will ultimately decay and descend into ruin. But while it lasts, it will enhance human life and perhaps serve as a stepping stone to greater progress. This is the eternal essence of engineering. This is why fighting the 2nd Law is so worthwhile. The ring, in my mind, symbolizes our resolve to courageously stand up to the 2nd Law in all its manifestations.

So, in conclusion, the next time we meet the 2nd Law of thermodynamics, let us rub our magic rings, look the 2nd Law in the eye, and say: not today, my friend, you are dealing with a graduate of UCSD’s Jacobs School of Engineering.

Congratulations again, my friends, and I wish you the very best of luck.

Thank you!
