Any discussion on prosthetic arms must inevitably start (or end) with Luke’s hand from The Empire Strikes Back. It was, for any prosthetics researcher or aficionado, the holy grail of the field: a hand that looks like a human hand, moves like a human hand, and responds to stimuli like a human hand. Indeed, in the next movie, it’s almost as if Luke has a human hand after all, no post-processing required. Unfortunately, much like faster-than-light travel and lightsabers, such a prosthetic remains today firmly in the realm of fantasy.

Prosthetics is a fascinating field because it is the intersection of so many different disciplines. Ask a dozen researchers why a perfect prosthetic hand doesn’t exist today and you’ll get just as many answers.

The mechanical engineer would say: Have you seen how many different motions the human hand can do? It’d be a miracle to make one that can move like that which doesn’t fall to pieces if you look at it wrong.

The orthopedist would say: Comfortably carrying a prosthetic manipulator 24 hours a day isn’t easy, and in-bone implantation of prosthetics is not yet a solved problem.

The neuroscientist would say: Reproducing spinal reflexes from a prosthetic hand? Ignoring the embodiment issues involved, that requires an interface to the peripheral sensory nerves that perfectly replicates what would have existed in the natural limb.

Then all the scientists would propose totally conflicting ideas about the best way forward.

These days, however, people are looking more and more towards one particular person to solve the problems that keep prostheses frustrating and often unusable: the machine learning expert. If you just record enough data, the thinking goes, and train the right predictive model, you can learn how to steer the prosthetic limb using the brain. With perfect control, even imperfect sensory feedback or mechanical design may be surmountable. And so, in the past decade there has been an explosion in the number of people trying ever more complicated approaches to controlling a prosthesis with the mind. Yet while the performance metrics in some of these papers keep getting higher and higher, even state-of-the-art commercial prostheses use methods that the machine learning community stopped looking at decades ago.

To get into this mystery a bit more, let’s back up and talk about how a prosthesis works in more detail. Let’s imagine you’ve gotten your arm (halfway between elbow and wrist) cut off in a freak lightsaber incident. Once the stump heals, you go to your local prosthetics company and ask for a new lower arm. At this point you have a few options: a passive arm, a body-powered arm, and a myoelectrically controlled arm. The passive arm is just that: a realistic replica that hangs on your forearm as a dead weight (or, for something both more sinister and more useful, you could choose a hook). Body-powered means something like a claw that you can open and shut by shrugging your shoulders. Myoelectric is what the machine learning community cares about: it uses the signals from the residual muscles to control the prosthesis. Even though your hand is gone, the muscles that used to control it are all packed into your forearm, and are mostly still around – which means that when you imagine moving your hand (which is no longer there), the muscles of your forearm (which are) still twitch accordingly. Even better, these twitches produce changes in electrical potential that can be recorded on the surface of the skin, so they are easy to read with surface electromyography (EMG).
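
To make that concrete: the raw EMG signal is a noisy, oscillating voltage, and what a controller actually wants is a smooth estimate of how hard the muscle is contracting. Below is a minimal sketch of one common way to get that (rectify, then low-pass filter); the sampling rate and filter cutoff here are chosen purely for illustration, not taken from any particular device.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def emg_envelope(raw_emg, fs=1000.0, cutoff_hz=5.0):
    """Turn a raw surface-EMG trace into a smooth 'envelope' that tracks
    contraction strength: remove the DC offset, rectify, then low-pass filter."""
    rectified = np.abs(raw_emg - np.mean(raw_emg))  # remove offset and rectify
    b, a = butter(4, cutoff_hz / (fs / 2.0), btype="low")
    return filtfilt(b, a, rectified)  # zero-phase low-pass filter
```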

To a machine learning expert the problem then becomes simple: muscles twitch in accordance with thoughts. Therefore, if a machine can learn a mapping from muscle twitches to intended movements, the problem is solved! The question then is: if there is no remaining hand, how do I know the subject’s exact intended hand position (called the ‘pose’) in order to feed it to my algorithm? At first, people tried something very simple. A single muscle, if it is activated, can twitch somewhere between 0 and 100% of its maximum capacity. Therefore, if one records the EMG over that exact muscle, the rest of the hand’s pose is no longer necessary: that single activation level can directly drive one function of the prosthesis. This system of proportional control (the activation of the robot hand is proportional to the activation of discrete anatomical muscles, as measured by the EMG) was sufficiently reliable and effective to become the current standard for myoelectric prostheses. However, it comes with downsides. If the electrode shifts a little bit on the skin, the signal can quickly disappear. It also forces the control to come from the most accessible muscles, which are not necessarily the most intuitive or the most comfortable to activate. Lastly, the human hand is not controlled through isolated muscle contractions, making more complex control (more than two degrees of freedom of the hand) impossible.
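
In code, the whole proportional scheme fits in a few lines. This is only a sketch of the idea, assuming a single electrode whose resting and maximum-contraction envelope levels have already been calibrated; real controllers add dead-bands, gains and smoothing on top.

```python
def proportional_command(envelope_sample, rest_level, max_level):
    """Map one muscle's EMG envelope value (e.g. from emg_envelope above) to a
    0-1 activation of a single prosthesis function, such as hand opening."""
    activation = (envelope_sample - rest_level) / (max_level - rest_level)
    return min(max(activation, 0.0), 1.0)  # clip to the valid range

# Typically one such channel is used per controlled function, which is why
# control beyond a couple of degrees of freedom quickly becomes impractical.
```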

To solve these problems, the field looked towards pattern recognition. It’s still too hard to give my algorithm the exact imagined position of the hand in order to train it, the thinking goes, but if I ask the user to choose a small number of poses – say 8 – that they can generate reliably, I don’t need to be as precise in what I tell the algorithm. All I need to trust is that the user is capable of producing the same imagined pose reliably. While not quite that simple in practice, the idea proved to allow more reliable control of prostheses both within and outside the laboratory. But while more and more complicated machine learning has been tried in the research community – deep learning, non-negative matrix factorization and other newer methods – industry sticks resolutely with the simplest classification strategies. It is here that the difference in metrics comes into play.
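
A bare-bones version of such a pattern-recognition pipeline is sketched below, using linear discriminant analysis (one of the simple classifiers long popular in this field) on standard time-domain features. The channel count, window length and random stand-in data are placeholders for illustration, not anything measured.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def window_features(emg_window):
    """Per-channel time-domain features (mean absolute value, waveform length)
    for one window of multi-channel EMG with shape (samples, channels)."""
    mav = np.mean(np.abs(emg_window), axis=0)
    wl = np.sum(np.abs(np.diff(emg_window, axis=0)), axis=0)
    return np.concatenate([mav, wl])

# Stand-in training data: 8 poses, 20 windows each, 200 samples x 6 channels.
rng = np.random.default_rng(0)
windows = rng.standard_normal((8 * 20, 200, 6))
labels = np.repeat(np.arange(8), 20)

X = np.stack([window_features(w) for w in windows])
clf = LinearDiscriminantAnalysis().fit(X, labels)
predicted_pose = clf.predict(window_features(windows[0])[None, :])  # decode one window
```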

For a machine learner, task success is measured by decoding accuracy. If I can decode the correct pose from the EMG 95% of the time, then my classifier is better than one that can only do it 90% of the time. Intuitively, this is quite reasonable – if a classifier is right more often, it is probably the one you want. However, this is not the metric an amputee uses to rate the usability of their device, because no movement is ever made in isolation. A real arm is used to perform complex sequences of movements (opening doors, grasping and turning keys) in situations one could never exhaustively test in a lab (trying to open your front door when there’s a grocery bag on your arm and your kid is hanging off your leg screaming for ice cream). As Jiang et al. (2014) showed, measures such as classifier accuracy have almost no correlation with success at real-world tasks.

Thus, the field stands at an odd crossroads. The machine learners, using their tried-and-true methodology, can report ever-increasing accuracies when the data are processed like a standard machine learning task. The prosthesis makers and users, however, often find that these larger numbers in academic papers don’t actually translate into less frustrating devices, and turn back to the traditional engineering-and-neuroscience combination that got them most of the way here. Machine learning needs to rework its methodology to fit this problem if it wants to stay relevant to the field; if it does not, it’s likely that Luke’s hand will remain, for the moment, where it is – in a galaxy far, far away.


Bibliography

Jiang, Ning, et al. “Is accurate mapping of EMG signals on kinematics needed for precise online myoelectric control?” IEEE Transactions on Neural Systems and Rehabilitation Engineering 22.3 (2014): 549–558.

Images are open source from unsplash.com

Vinay Jayaram is a PhD student affiliated with the Graduate Training Center for Neuroscience and the Max Planck Institute for Intelligent Systems in Tübingen, Germany

