IBM Research-Australia

Bionics in Real Life


Controlling Prosthetic Limbs With Thought and AI

Bionics is no longer the stuff of science fiction. For some amputees, the ability to carry out daily living activities depends on how efficiently and reliably they can bypass broken pathways between the brain and a prosthetic limb. Robust brain-machine interfaces can help restore mobility for these people.

A variety of conditions, such as stroke, spinal cord injury, traumatic limb injury, and several neuromuscular and neurological diseases, can also cause loss of limb function. In these conditions the motor cortex is often intact, so affected people can potentially benefit from assistive technologies. A durable brain-machine interface could help rehabilitate them by enabling natural control of limbs: decoding movement intentions from the brain and translating them into executable actions for robotic actuators connected to the affected limb.

Brain-machine interface

For the first time, we have demonstrated an end-to-end proof-of-concept for such a brain-machine interface, combining custom-developed AI code with exclusively commercial, low-cost, off-the-shelf system components. In two peer-reviewed scientific papers, presented and published in parallel at the 2018 International Joint Conference on Artificial Intelligence (IJCAI) and the 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), we describe how novel deep-learning algorithms can decode activity intentions solely from a take-home scalp electroencephalography (EEG) system and execute the intended activities by operating an off-the-shelf robotic arm in a real-world environment.


Brain-machine interfaces built on expensive medical-grade EEG systems and run in carefully controlled laboratory environments are impractical for take-home use. Previous studies have employed low-cost systems, but their performance was suboptimal or inconclusive. We evaluated a low-cost EEG system, the OpenBCI headset, in a natural environment, decoding the intentions of test subjects purely by analysing their thought patterns. By combining the largest cohort of healthy test subjects to date with neurofeedback training techniques and deep learning, we show that our AI-based method is more robust than those of previous studies attempting to decode brain states from OpenBCI data.
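To make the decoding step concrete, here is a minimal sketch of a classical EEG feature pipeline: extracting mu-band (8–12 Hz) and beta-band (13–30 Hz) power per channel, the kind of hand-crafted feature that the deep-learning models described in the papers learn automatically. The 250 Hz sampling rate matches OpenBCI's Cyton board; the synthetic data and shapes are illustrative only, not taken from the study.

```python
import numpy as np

def bandpower(eeg, fs, band):
    """Mean spectral power of `eeg` (channels x samples) within a frequency band."""
    freqs = np.fft.rfftfreq(eeg.shape[-1], d=1.0 / fs)
    psd = np.abs(np.fft.rfft(eeg, axis=-1)) ** 2  # power spectral density per channel
    lo, hi = band
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[..., mask].mean(axis=-1)

# Example: 8 channels of 2 s of synthetic EEG at OpenBCI's 250 Hz sampling rate
fs = 250
rng = np.random.default_rng(0)
eeg = rng.standard_normal((8, 2 * fs))

mu = bandpower(eeg, fs, (8, 12))       # mu-band power, one value per channel
beta = bandpower(eeg, fs, (13, 30))    # beta-band power, one value per channel
features = np.concatenate([mu, beta])  # 16-dim feature vector for a classifier
```

A feature vector like this would feed a classifier that predicts the intended activity; in the actual system, a deep network operates on the raw multichannel signal instead.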


Once intended activities were decoded, we translated them into instructions for a robotic arm capable of grasping and positioning objects in a real-life environment. We linked the robotic arm to a camera and a custom-developed deep-learning framework we call GraspNet, which determines the best positions for the robotic gripper to pick up objects of interest. GraspNet is a novel deep-learning architecture that outperforms state-of-the-art models in grasp accuracy with fewer parameters, a memory footprint of only 7.2 MB, and real-time inference speed on an Nvidia Jetson TX1 processor. These attributes make GraspNet well suited to embedded systems that require fast over-the-air model updates.
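Grasp detectors in this family commonly predict candidate grasps as oriented rectangles with a quality score; the controller then executes the highest-scoring one. The sketch below illustrates that selection step under that assumption. The `Grasp` fields and the example candidates are hypothetical, not GraspNet's actual output format.

```python
import math
from dataclasses import dataclass

@dataclass
class Grasp:
    x: float      # grasp centre in image coordinates (pixels)
    y: float
    theta: float  # gripper rotation about the camera axis (radians)
    width: float  # required gripper opening (pixels)
    score: float  # predicted grasp-quality score in [0, 1]

def best_grasp(candidates):
    """Pick the candidate with the highest predicted quality score."""
    return max(candidates, key=lambda g: g.score)

candidates = [
    Grasp(120.0, 80.0, 0.0, 35.0, 0.62),
    Grasp(118.0, 83.0, math.pi / 4, 30.0, 0.91),
    Grasp(200.0, 40.0, math.pi / 2, 50.0, 0.15),
]
g = best_grasp(candidates)
# The chosen pose (g.x, g.y, g.theta, g.width) would then be transformed
# into robot coordinates and sent to the arm's motion controller.
```

Selecting a single best grasp keeps the control loop simple and fast, which matters for real-time inference on an embedded board like the Jetson TX1.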

In more than 50 experiments, we demonstrated that the entire pipeline, from decoding an intended task by analysing a test subject's thought patterns to executing it with a robotic arm, runs in real time in a real-life environment. We plan to extend the GraspNet architecture for simultaneous object recognition and grasp detection, and to further reduce the overall latency of visual recognition systems while maintaining a compact model design and real-time inference speed.
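The end-to-end pipeline can be pictured as a small dispatch loop: the EEG decoder emits an intention label, which is mapped to a high-level robot routine. The sketch below is illustrative glue code only; the labels, function names, and stubbed decoder are assumptions, not the system described in the papers.

```python
# Hypothetical glue between the EEG decoder and the robot controller.
def decode_intention(eeg_window):
    """Stub for the trained EEG classifier; here it always returns "grasp"."""
    return "grasp"

log = []  # stands in for commands sent to the arm's motion controller

# Map each decoded intention label to a high-level robot routine
ACTIONS = {
    "grasp": lambda: log.append("arm: close gripper on target"),
    "release": lambda: log.append("arm: open gripper"),
    "rest": lambda: log.append("arm: hold position"),
}

intent = decode_intention(eeg_window=None)
ACTIONS[intent]()  # dispatch the decoded intention to the robot
```

Keeping the decoder and the actuator behind a simple label interface like this lets each side be improved independently, which is one way such a pipeline can stay real-time end to end.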

Our demonstration marks an important step toward developing robust, low-cost, low-power brain-machine interfaces for controlling artificial limbs through the power of thought. One day, this technology may be used to create a device that can drive prosthetic limbs.

Adjunct Professor, University of Technology Sydney, Faculty of Engineering and Information Technology
