Walking Imagery Evaluation in Brain Computer Interfaces via a Multi-View Multi-Level Deep Polynomial Network.

Abstract

Brain-computer interfaces based on motor imagery (MI) have been widely used to support the rehabilitation of upper-limb rather than lower-limb motor functions, probably because the brain activity associated with lower-limb MI is more difficult to detect. To reliably detect lower-limb brain activity and thereby restore or improve the walking ability of disabled users, we propose a new walking imagery (WI) paradigm in a virtual environment (VE), designed to elicit reliable brain activity and achieve a significant training effect. First, we extract and fuse spatial and time-frequency features into a multi-view feature that represents the patterns of brain activity. Second, we design a multi-view multi-level deep polynomial network (MMDPN) that exploits the complementarity among these features to improve the detection of walking imagery from the idle state. Extensive experimental results show that the VE-based paradigm performs significantly better than the traditional text-based paradigm, and that it effectively helps users modulate their brain activity and improve the quality of the electroencephalography (EEG) signals. We also observe that the MMDPN outperforms other deep learning methods in classification performance.
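To make the multi-view idea concrete, below is a minimal sketch of feature fusion for WI-versus-idle classification. It is illustrative only and does not reproduce the paper's method: it uses synthetic EEG, per-channel log-variance as a stand-in for the spatial view (the abstract does not specify which spatial features are used), STFT band power in the mu/beta range as the time-frequency view, and a plain logistic-regression classifier in place of the MMDPN; fusing the views by simple concatenation likewise stands in for the fusion the MMDPN performs internally.

```python
# Minimal sketch of multi-view feature fusion for walking-imagery vs. idle classification.
# Assumptions (not from the paper): synthetic EEG, log-variance spatial features,
# STFT band power as the time-frequency view, and logistic regression instead of the MMDPN.
import numpy as np
from scipy.signal import stft
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
fs, n_channels, n_samples, n_trials = 250, 8, 500, 120  # hypothetical recording setup

# Synthetic trials: (n_trials, n_channels, n_samples); labels 1 = walking imagery, 0 = idle.
X = rng.standard_normal((n_trials, n_channels, n_samples))
y = rng.integers(0, 2, n_trials)
X[y == 1, :4] *= 1.5  # inject a crude class difference so the sketch is non-trivial

def spatial_view(trials):
    """Per-channel log-variance: a simple spatial-domain feature."""
    return np.log(trials.var(axis=2) + 1e-12)            # (n_trials, n_channels)

def time_frequency_view(trials, fs, band=(8, 30)):
    """Mean STFT power per channel within a mu/beta band."""
    f, _, Z = stft(trials, fs=fs, nperseg=128, axis=2)    # Z: (trials, ch, freqs, times)
    mask = (f >= band[0]) & (f <= band[1])
    power = (np.abs(Z[:, :, mask, :]) ** 2).mean(axis=(2, 3))
    return np.log(power + 1e-12)                          # (n_trials, n_channels)

# Fuse the two views by concatenation (the MMDPN fuses them inside the network instead).
features = np.hstack([spatial_view(X), time_frequency_view(X, fs)])

clf = LogisticRegression(max_iter=1000)
print("CV accuracy:", cross_val_score(clf, features, y, cv=5).mean())
```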
