Multi-Scale Spatiotemporal Attention Network for Neuron-based Motor Imagery EEG Classification

Abstract

In recent years, the rapid expansion of brain-computer interface (BCI) technology in neuroscience, which relies on electroencephalogram (EEG) signals associated with motor imagery, has yielded outcomes that rival conventional approaches, largely due to the success of deep learning. Nevertheless, designing and training a comprehensive network that extracts the underlying characteristics of motor imagery EEG data remains challenging. This paper presents a multi-scale spatiotemporal self-attention (SA) network model based on an attention mechanism. The model classifies motor imagery EEG signals into four classes (left hand, right hand, foot, tongue/rest) by considering the temporal and spatial properties of EEG. The attention mechanism autonomously assigns greater weights to channels linked to motor activity and lesser weights to channels unrelated to movement, thereby selecting the most suitable channels. Neuron utilises parallel multi-scale temporal convolutional network (TCN) layers to extract temporal-domain feature information at various scales, effectively suppressing temporal-domain noise. The proposed model achieves accuracies of 79.26%, 85.90%, and 96.96% on the BCI Competition IV-2a and IV-2b datasets and on HGD, respectively. In terms of single-subject classification accuracy, this approach outperforms existing methods. The results indicate that the proposed strategy exhibits favourable performance, resilience, and transfer learning capabilities.

Copyright © 2024 Elsevier B.V. All rights reserved.
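The two ideas in the abstract, attention-based channel weighting and parallel multi-scale temporal filtering, can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the softmax channel scoring and the averaging kernels below are hypothetical stand-ins for the SA module and the TCN branches, and the 22-channel input shape merely mirrors the BCI Competition IV-2a montage.

```python
import numpy as np

def channel_attention(x):
    """Attention-style channel weighting (hypothetical simplification of
    the paper's SA module): channels with stronger activity get larger
    weights. x has shape (channels, time)."""
    scores = np.abs(x).mean(axis=1)                 # one score per channel
    weights = np.exp(scores) / np.exp(scores).sum() # softmax over channels
    return x * weights[:, None], weights

def multiscale_temporal_features(x, kernel_sizes=(3, 7, 15)):
    """Parallel temporal convolutions at several scales (stand-in for the
    multi-scale TCN branches); per-scale outputs are concatenated."""
    feats = []
    for k in kernel_sizes:
        kernel = np.ones(k) / k  # simple smoothing kernel at this scale
        smoothed = np.stack([np.convolve(ch, kernel, mode="same") for ch in x])
        feats.append(smoothed)
    return np.concatenate(feats, axis=0)  # (channels * n_scales, time)

rng = np.random.default_rng(0)
eeg = rng.standard_normal((22, 1000))  # 22 channels x 1000 samples (IV-2a-like)
weighted, w = channel_attention(eeg)
features = multiscale_temporal_features(weighted)
print(features.shape)  # three scales over 22 channels -> (66, 1000)
```

The real model learns the channel weights and convolution kernels end-to-end; the fixed heuristics here only show how the data flows through the two stages.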
