Multi-Grasp Classification for the Control of Robot Hands Employing Transformers and Lightmyography Signals.

Abstract

The increasing use of smart technical devices in everyday life has necessitated muscle-machine interfaces (MuMIs) that are intuitive and that facilitate immersive interactions with these devices. The most common approach to developing MuMIs relies on Electromyography (EMG) signals. However, due to several drawbacks of EMG-based interfaces, alternative methods are being explored. In our previous work, we presented a new MuMI called Lightmyography (LMG), which achieved outstanding results compared to a classic EMG-based interface in a five-gesture classification task. In this study, we extend that work by experimentally validating the efficiency of the LMG armband in classifying thirty-two different gestures from six participants using a deep learning technique called Temporal Multi-Channel Vision Transformers (TMC-ViT). The efficiency of the proposed model was assessed using classification accuracy, and two different undersampling techniques were compared. The proposed thirty-two-gesture classifiers achieve accuracies as high as 92%. Finally, we employ the LMG interface for the real-time control of a robotic hand using ten different gestures, successfully reproducing several grasp types from the grasp taxonomies presented in the literature.
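The abstract compares two undersampling techniques for the LMG signals but does not name them. As an illustrative sketch only (not the authors' method), the snippet below contrasts two generic ways to reduce a signal's sampling rate before feeding it to a classifier: naive decimation (keeping every k-th sample) and non-overlapping window averaging (which also smooths noise). The function names and the synthetic single-channel trace are assumptions for demonstration.

```python
import numpy as np

def decimate(signal, factor):
    # Naive decimation: keep every `factor`-th sample, discard the rest.
    return signal[::factor]

def window_average(signal, factor):
    # Average non-overlapping windows of length `factor`; trims any
    # trailing samples that do not fill a complete window.
    n = len(signal) // factor
    return signal[: n * factor].reshape(n, factor).mean(axis=1)

# Synthetic stand-in for one LMG channel: a 5 Hz sine over 1000 samples.
t = np.linspace(0.0, 1.0, 1000)
x = np.sin(2 * np.pi * 5 * t)

x_dec = decimate(x, 10)        # 100 samples, raw values retained
x_avg = window_average(x, 10)  # 100 samples, each a 10-sample mean
```

Both reduce the sequence length by the same factor; window averaging additionally acts as a crude low-pass filter, which can matter when the classifier is sensitive to high-frequency noise.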
