Deep Motion Analysis for Epileptic Seizure Classification

Abstract

Visual motion cues such as facial expression and pose are natural semiology features that an epileptologist observes to identify epileptic seizures. However, these cues have not been effectively exploited for automatic detection because seizure appearance varies widely within and between patients. Here we present a multi-modal analysis approach to quantitatively classify patients with mesial temporal lobe epilepsy (MTLE) and extra-temporal lobe epilepsy (ETLE), relying on the fusion of facial expressions and pose dynamics. We propose a new deep learning approach that leverages recent advances in Convolutional Neural Networks (CNNs) and Long Short-Term Memory (LSTM) networks to automatically extract spatiotemporal features from facial and pose semiology in recorded videos. A video dataset from 12 patients with MTLE and 6 patients with ETLE was collected at an Australian hospital for the experiments. Our experiments show that facial semiology and body movements can be effectively recognized and tracked, and that they provide useful evidence for identifying the type of epilepsy. A multi-fold cross-validation of the fusion model yielded an average test accuracy of 92.10%, while a leave-one-subject-out cross-validation scheme, which is the first in the literature, achieved an accuracy of 58.49%. The proposed approach is capable of modelling semiology features that effectively discriminate between seizures arising from temporal and extra-temporal brain areas. It can be used as a virtual assistant, saving time, improving patient safety and providing objective clinical analysis to assist with clinical decision making.
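
To make the described two-stream architecture more concrete, the sketch below shows one plausible way to combine a per-frame CNN over face crops with LSTMs over face features and pose keypoints, followed by late fusion for MTLE vs. ETLE classification. This is not the authors' released code; all layer sizes, the pose-keypoint dimensionality, and the fusion scheme are illustrative assumptions.

```python
# Minimal sketch (assumed, not the authors' implementation) of a CNN-LSTM
# fusion classifier: a face stream (per-frame CNN + LSTM) and a pose stream
# (LSTM over keypoint coordinates), fused for MTLE vs. ETLE prediction.
import torch
import torch.nn as nn


class FrameCNN(nn.Module):
    """Small per-frame CNN mapping a face crop to a feature vector."""

    def __init__(self, feat_dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, feat_dim)

    def forward(self, x):  # x: (B*T, 3, H, W)
        return self.fc(self.conv(x).flatten(1))


class SeizureFusionNet(nn.Module):
    """Face stream (CNN + LSTM) and pose stream (LSTM), fused by concatenation."""

    def __init__(self, pose_dim=34, hidden=64, n_classes=2):
        super().__init__()
        self.face_cnn = FrameCNN(feat_dim=128)
        self.face_lstm = nn.LSTM(128, hidden, batch_first=True)
        self.pose_lstm = nn.LSTM(pose_dim, hidden, batch_first=True)
        self.classifier = nn.Linear(2 * hidden, n_classes)

    def forward(self, face_frames, pose_seq):
        # face_frames: (B, T, 3, H, W); pose_seq: (B, T, pose_dim)
        B, T = face_frames.shape[:2]
        feats = self.face_cnn(face_frames.flatten(0, 1)).view(B, T, -1)
        _, (h_face, _) = self.face_lstm(feats)
        _, (h_pose, _) = self.pose_lstm(pose_seq)
        fused = torch.cat([h_face[-1], h_pose[-1]], dim=1)  # late fusion
        return self.classifier(fused)                        # MTLE vs. ETLE logits


if __name__ == "__main__":
    model = SeizureFusionNet()
    face = torch.randn(2, 16, 3, 64, 64)  # 2 clips, 16 frames of 64x64 face crops
    pose = torch.randn(2, 16, 34)         # 17 keypoints x (x, y) per frame (assumed)
    print(model(face, pose).shape)        # torch.Size([2, 2])
```

In this sketch, the CNN captures per-frame spatial appearance while the LSTMs model temporal dynamics in each modality, and the two hidden states are concatenated before a linear classifier; any of these choices (feature fusion, attention, pretrained backbones) could reasonably differ in the actual system.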
