
A Hybrid Time-Distributed Deep Neural Architecture for Speech Emotion Recognition

Abstract

In recent years, speech emotion recognition (SER) has emerged as one of the most active research areas in human-machine interaction. Innovative electronic devices, services, and applications increasingly aim to monitor the user's emotional state, either to issue alerts under predefined conditions or to adapt system responses to the user's emotions. Voice expression is a rich and noninvasive source of information for emotion assessment. This paper presents a novel SER approach based on a hybrid of a time-distributed convolutional neural network (TD-CNN) and a long short-term memory (LSTM) network. Mel-frequency log-power spectrograms (MFLPSs) extracted from audio recordings are parsed by a sliding window that selects the input for the TD-CNN. The TD-CNN transforms the input image data into a sequence of high-level features that are fed to the LSTM, which carries out the overall signal interpretation. To reduce overfitting, the MFLPS representation enables innovative image data augmentation techniques that have no immediate equivalent in the original audio signal. Validation of the proposed hybrid architecture achieves an average recognition accuracy of 73.98% on the most widely used and most challenging publicly distributed database for SER benchmarking. A permutation test confirms that this result differs significantly from random classification ([Formula: see text]). The proposed architecture outperforms state-of-the-art deep learning models as well as conventional machine learning techniques evaluated on the same database when identifying the same number of emotions.
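The sliding-window parsing of the MFLPS described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation: the window width of 128 frames, hop of 64 frames, and 40 mel bands are assumed values chosen for the example, as the abstract does not specify them.

```python
import numpy as np

def sliding_windows(mflps, win_frames=128, hop_frames=64):
    """Split a mel-frequency log-power spectrogram (n_mels x n_frames)
    into a sequence of fixed-width patches, as would feed a
    time-distributed CNN. win_frames/hop_frames are illustrative."""
    n_mels, n_frames = mflps.shape
    starts = range(0, n_frames - win_frames + 1, hop_frames)
    # Stack each windowed patch along a new leading "time step" axis
    return np.stack([mflps[:, s:s + win_frames] for s in starts])

# Example: a synthetic 40-mel spectrogram covering 320 frames
spec = np.random.randn(40, 320)
patches = sliding_windows(spec)
print(patches.shape)  # (4, 40, 128): 4 time steps of 40x128 patches
```

Each patch in the resulting sequence would be processed independently by the shared CNN, and the resulting feature sequence passed to the LSTM for overall interpretation.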
