Bionics in Real Life


Controlling Prosthetic Limbs With Thought and AI

Bionics is no longer the stuff of science fiction. For some amputees, the ability to carry out daily living activities depends on how efficiently and reliably they can bypass broken pathways between the brain and a prosthetic limb. Robust brain-machine interfaces can help restore mobility for these people.

A variety of conditions, such as stroke, spinal cord injury, traumatic limb injury, and several neuromuscular and neurological diseases, can also cause loss of limb function. In these conditions, the motor cortex is often intact, so the affected person can potentially benefit from assistive technologies. A durable brain-machine interface has the potential to rehabilitate these people by enabling natural control of limbs: decoding movement intentions from the brain and translating them into executable actions for robotic actuators connected to the affected limb.

Brain-machine interface

For the first time, we have demonstrated an end-to-end proof-of-concept for such a brain-machine interface by combining custom-developed AI code with exclusively commercially available, low-cost system components. In two peer-reviewed scientific papers presented and published in parallel at the 2018 International Joint Conference on Artificial Intelligence (IJCAI) and the 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), we describe how novel deep-learning algorithms can decode activity intentions solely from a take-home scalp electroencephalography (EEG) system and execute the intended activities by operating an off-the-shelf robotic arm in a real-world environment.

Brain-machine interfaces built around expensive, medical-grade EEG systems operated in carefully controlled environments are impractical for take-home use. Previous studies have employed low-cost systems, but their reported performance was suboptimal or inconclusive. We evaluated a low-cost EEG system, the OpenBCI headset, in a natural environment, decoding the intentions of test subjects purely by analysing their thought patterns. By testing the largest cohort of healthy subjects to date, in combination with neurofeedback training techniques and deep learning, we show that our AI-based method is more robust than previous attempts to decode brain states from OpenBCI data.
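
To make the decoding step concrete, here is a minimal sketch of how a compact convolutional network can classify movement intentions from short windows of multichannel EEG. This is an illustrative PyTorch example rather than the model from our papers; the 8-channel layout, 250 Hz sampling rate, 2-second windows, and three intention classes are assumptions chosen to resemble a typical OpenBCI setup.

```python
# Illustrative EEG intention decoder (not the architecture from the papers).
# Assumes 8-channel OpenBCI EEG sampled at 250 Hz, cut into 2-second windows
# (500 samples), and a small set of movement-intention classes.
import torch
import torch.nn as nn

class EEGIntentNet(nn.Module):
    def __init__(self, n_channels=8, n_samples=500, n_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=(1, 25), padding=(0, 12)),  # temporal filtering
            nn.BatchNorm2d(16),
            nn.ELU(),
            nn.Conv2d(16, 32, kernel_size=(n_channels, 1)),          # spatial filtering across channels
            nn.BatchNorm2d(32),
            nn.ELU(),
            nn.AvgPool2d(kernel_size=(1, 4)),
            nn.Dropout(0.5),
        )
        with torch.no_grad():  # infer the flattened feature size with a dummy pass
            n_features = self.features(torch.zeros(1, 1, n_channels, n_samples)).numel()
        self.classifier = nn.Linear(n_features, n_classes)

    def forward(self, x):
        # x: (batch, 1, channels, samples)
        return self.classifier(self.features(x).flatten(start_dim=1))

model = EEGIntentNet()
window = torch.randn(1, 1, 8, 500)      # placeholder for one pre-processed EEG window
intent = model(window).argmax(dim=1)    # predicted intention class
```

In practice, such a model would be trained per subject on labelled windows collected during neurofeedback sessions before being used for live decoding.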

GraspNet

Once intended activities were decoded, we translated them into instructions for a robotic arm capable of grasping and positioning objects in a real-life environment. We linked the robotic arm to a camera and a custom-developed deep learning framework, which we call GraspNet. GraspNet determines the best positions for the robotic gripper to pick up objects of interest. It is a novel deep learning architecture that outperforms state-of-the-art deep learning models in grasp accuracy while using fewer parameters, with a memory footprint of only 7.2 MB and real-time inference speed on an Nvidia Jetson TX1 processor. These attributes make GraspNet an ideal deep learning model for embedded systems that require fast over-the-air model updates.
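
As a rough illustration of what a compact grasp-detection model looks like, the sketch below regresses a single grasp rectangle (centre, rotation angle, gripper opening, and height) from an RGB frame using a small convolutional backbone. It is not the published GraspNet architecture; the layer sizes, input resolution, and output parameterisation are assumptions, shown only to make the idea of a small-footprint grasp regressor concrete.

```python
# Hedged sketch of a compact grasp-regression CNN (not IBM's GraspNet itself).
# Given an RGB image of the scene, it regresses one grasp rectangle
# (x, y, angle, width, height) for the gripper.
import torch
import torch.nn as nn

class TinyGraspNet(nn.Module):
    def __init__(self):
        super().__init__()
        def block(cin, cout):
            return nn.Sequential(
                nn.Conv2d(cin, cout, 3, stride=2, padding=1),
                nn.BatchNorm2d(cout),
                nn.ReLU(inplace=True),
            )
        self.backbone = nn.Sequential(
            block(3, 16), block(16, 32), block(32, 64), block(64, 128),
            nn.AdaptiveAvgPool2d(1),
        )
        # output: grasp centre (x, y), rotation angle, gripper opening width, height
        self.head = nn.Linear(128, 5)

    def forward(self, img):
        return self.head(self.backbone(img).flatten(start_dim=1))

model = TinyGraspNet()
print(sum(p.numel() for p in model.parameters()))   # small parameter count -> small checkpoint
grasp = model(torch.randn(1, 3, 224, 224))           # (x, y, angle, width, height)
```

Keeping the backbone this small is what makes single-digit-megabyte checkpoints, real-time inference on embedded boards, and fast over-the-air updates feasible.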

In more than 50 experiments, we demonstrated that the entire pipeline, from decoding an intended task by analysing a test subject’s thought patterns to executing that task with a robotic arm, runs in real time in a real-life environment. We plan to extend the GraspNet architecture for simultaneous object recognition and grasp detection, and to further reduce the overall latency of visual recognition systems while maintaining a compact model design and real-time inference speed.
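
The glue between the two models is straightforward. The hedged sketch below shows one way such a pipeline could be wired together: decode the latest EEG window, and if a grasp is intended, predict a grasp pose from the current camera frame and command the arm. The `RobotArm` methods (`move_to`, `close_gripper`, `open_gripper`) are hypothetical placeholders for whatever SDK or ROS interface a real arm would expose.

```python
# Illustrative glue for the end-to-end pipeline: EEG window -> intention ->
# grasp prediction -> robot command. The arm interface below is hypothetical.
import torch

INTENT_LABELS = ["rest", "grasp", "release"]   # assumed label set

def run_pipeline(eeg_window, camera_frame, intent_model, grasp_model, arm):
    # 1. Decode the subject's intention from the latest EEG window
    with torch.no_grad():
        intent = INTENT_LABELS[intent_model(eeg_window).argmax(dim=1).item()]

    # 2. If a grasp is intended, predict a grasp pose from the camera frame
    if intent == "grasp":
        with torch.no_grad():
            x, y, angle, width, height = grasp_model(camera_frame).squeeze(0).tolist()
        arm.move_to(x, y, angle)        # hypothetical arm API
        arm.close_gripper(width)
    elif intent == "release":
        arm.open_gripper()
    # "rest": do nothing this cycle
    return intent
```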

Our demonstration marks an important step toward robust, low-cost, low-power brain-machine interfaces for controlling artificial limbs through the power of thought. One day, this technology may mature into devices that drive prosthetic limbs in everyday life.

