IBM Research-Australia

Bionics in Real Life


Controlling Prosthetic Limbs With Thought and AI

Bionics is no longer the stuff of science fiction. For some amputees, the ability to carry out daily living activities depends on how efficiently and reliably they can bypass broken pathways between the brain and a prosthetic limb. Robust brain-machine interfaces can help restore mobility for these people.

A variety of conditions, such as stroke, spinal cord injury, traumatic limb injury and several neuromuscular and neurological diseases, can also cause loss of limb function. In these conditions the motor cortex is often intact, so the affected person can potentially benefit from assistive technologies. A durable brain-machine interface has the potential to rehabilitate these people by enabling natural control of limbs: it decodes movement intentions from the brain and translates them into executable actions for robotic actuators connected to the affected limb.

Brain-machine interface

For the first time, we have demonstrated an end-to-end proof-of-concept for such a brain-machine interface by combining custom-developed AI code with exclusively commercial, low-cost, off-the-shelf system components. In two peer-reviewed scientific papers, presented and published in parallel at the 2018 International Joint Conference on Artificial Intelligence (IJCAI) and the 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), we describe how novel deep-learning algorithms can decode intended activities solely from a take-home scalp electroencephalography (EEG) system and execute those activities by operating an off-the-shelf robotic arm in a real-world environment.
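At a conceptual level, the pipeline has three stages: acquire a window of EEG, decode the intended activity, and dispatch a command to the robotic arm. The sketch below only illustrates that structure; acquire_eeg_window, IntentionDecoder, RobotArm and the example label set are hypothetical placeholders, not the APIs of the actual system.

```python
import numpy as np

# Hypothetical placeholders for the three pipeline stages; the real system uses
# an OpenBCI headset, a trained deep-learning decoder and a commercial robotic
# arm, whose APIs are not shown here.

INTENTIONS = ["rest", "grasp_object", "release_object"]  # example label set

def acquire_eeg_window(n_channels=8, n_samples=250):
    """Stand-in for reading one window of scalp EEG (e.g. 1 s at 250 Hz)."""
    return np.random.randn(n_channels, n_samples)

class IntentionDecoder:
    """Stand-in for the trained model that maps an EEG window to an intention."""
    def predict(self, eeg_window):
        scores = np.random.rand(len(INTENTIONS))  # placeholder inference
        return INTENTIONS[int(np.argmax(scores))]

class RobotArm:
    """Stand-in for the controller of the off-the-shelf robotic arm."""
    def execute(self, intention):
        print(f"executing: {intention}")

def run_pipeline(decoder, arm, n_steps=10):
    """Decode each EEG window and act on any non-rest intention."""
    for _ in range(n_steps):
        window = acquire_eeg_window()
        intention = decoder.predict(window)
        if intention != "rest":
            arm.execute(intention)

run_pipeline(IntentionDecoder(), RobotArm())
```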

 

Brain-machine interfaces built around expensive, medical-grade EEG systems that run in carefully controlled laboratory environments are impractical for take-home use. Previous studies have employed low-cost systems, but their reported performance was suboptimal or inconclusive. We evaluated a low-cost EEG system, the OpenBCI headset, in a natural environment to decode the intentions of test subjects purely by analysing their thought patterns. By running the largest cohort of healthy test subjects to date, in combination with neurofeedback training techniques and deep learning, we show that our AI-based method decodes brain states from OpenBCI data more robustly than those of previous studies.
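For intuition, such a decoder can be sketched as a compact convolutional network that takes a multi-channel EEG window and outputs a score for each intended activity. The PyTorch sketch below is a generic illustration, not the architecture from our papers; the channel count, window length, class count and layer sizes are assumptions made for the example.

```python
import torch
import torch.nn as nn

class EEGIntentionNet(nn.Module):
    """Illustrative compact CNN for classifying EEG windows into intended activities.

    Input:  (batch, 1, n_channels, n_samples), e.g. 8 channels x 250 samples (1 s at 250 Hz).
    Output: logits over n_classes intended activities.
    """
    def __init__(self, n_channels=8, n_samples=250, n_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=(1, 25), padding=(0, 12)),  # temporal filters per channel
            nn.BatchNorm2d(16),
            nn.ELU(),
            nn.Conv2d(16, 32, kernel_size=(n_channels, 1)),          # spatial filters across channels
            nn.BatchNorm2d(32),
            nn.ELU(),
            nn.AvgPool2d(kernel_size=(1, 4)),                        # temporal downsampling
            nn.Dropout(0.5),
        )
        with torch.no_grad():
            n_features = self.features(torch.zeros(1, 1, n_channels, n_samples)).numel()
        self.classifier = nn.Linear(n_features, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(start_dim=1))

# One training step on a dummy batch of labelled EEG windows.
model = EEGIntentionNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

eeg = torch.randn(16, 1, 8, 250)      # 16 windows of 8-channel EEG
labels = torch.randint(0, 3, (16,))   # e.g. rest / grasp / release
loss = criterion(model(eeg), labels)
loss.backward()
optimizer.step()
```

The temporal convolution learns frequency-like filters on each channel, while the channel-spanning convolution learns spatial combinations across the scalp, a common design pattern for EEG classifiers.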

GraspNet

Once intended activities were decoded, we translated them into instructions for a robotic arm capable of grasping and positioning objects in a real-life environment. We linked the robotic arm to a camera and a custom-developed deep-learning framework, which we call GraspNet. GraspNet determines the best positions for the robotic gripper to pick up objects of interest. It is a novel deep-learning architecture that outperforms state-of-the-art models in grasp accuracy while using fewer parameters, has a memory footprint of only 7.2 MB, and runs inference in real time on an Nvidia Jetson TX1 processor. These attributes make GraspNet an ideal deep-learning model for embedded systems that require fast over-the-air model updates.
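The published GraspNet architecture is detailed in the papers; as a generic illustration of the idea, the sketch below shows a small convolutional network that takes a camera image and regresses a grasp rectangle (position, orientation and gripper opening). The input resolution, layer sizes and five-value grasp parameterisation are assumptions for this example, not the published design.

```python
import torch
import torch.nn as nn

class TinyGraspNet(nn.Module):
    """Illustrative lightweight grasp-detection CNN (not the published GraspNet).

    Input:  (batch, 3, 224, 224) RGB image of the scene.
    Output: (batch, 5) grasp rectangle: x, y, sin(2*theta), cos(2*theta), width.
    """
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2, padding=2), nn.ReLU(),   # 224 -> 112
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 112 -> 56
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),  # 56 -> 28
            nn.AdaptiveAvgPool2d(1),                               # global pooling keeps the head tiny
        )
        self.head = nn.Linear(64, 5)

    def forward(self, x):
        return self.head(self.backbone(x).flatten(start_dim=1))

model = TinyGraspNet()
n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params} parameters, ~{n_params * 4 / 1e6:.2f} MB as float32")

grasp = model(torch.randn(1, 3, 224, 224))  # predicted grasp rectangle for one camera frame
```

Global average pooling and a small number of convolutional filters keep the parameter count, and therefore the on-device memory footprint, small, which is the property that matters for embedded deployment and fast over-the-air updates.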

In more than 50 experiments, we demonstrated that the entire pipeline, from decoding an intended task by analysing a test subject’s thought patterns to executing that task with a robotic arm, runs in real time in a real-life environment. We plan to extend the GraspNet architecture to perform simultaneous object recognition and grasp detection, and to further reduce the overall latency of the visual recognition system while maintaining a compact model design and real-time inference speed.

Our demonstration marks an important step toward developing robust, low-cost, low-power brain-machine interfaces for controlling artificial limbs through the power of thought. One day, this technology may be used to create a device that can drive prosthetic limbs.

Research Scientist, IBM Research
