
Recognizing Image Semantic Information Through Multi-Feature Fusion and SSAE-Based Deep Network.

Abstract

Images are powerful tools for conveying human emotions, with different images stimulating diverse emotions. Numerous factors affect the emotions an image evokes, and much prior research has focused on low-level features such as color and texture. Inspired by the success of deep convolutional neural networks (CNNs) in visual recognition, we apply a data augmentation method for small data sets to obtain a sufficient amount of training data. In this paper, we use low-level features of the image (color and texture) to assist the extraction of high-level features (image object category features and deep emotion features), which are learned automatically by deep networks, yielding more effective image sentiment features. We then use a stacked sparse auto-encoder (SSAE) network to recognize the emotions evoked by the image. Finally, high-level semantic descriptive phrases covering both image emotions and objects are output. Our experiments are carried out on the IAPS and GAPED data sets in the dimensional emotion space and the ArtPhoto data set in the discrete emotion space. Compared with traditional manual feature extraction methods and other existing models, our method achieves superior test performance.
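To make the SSAE building block concrete, the following is a minimal, hypothetical sketch of a single sparse auto-encoder layer (the unit that is stacked in an SSAE): a reconstruction loss plus a KL-divergence sparsity penalty on the mean hidden activations. The layer sizes, sparsity target `rho`, penalty weight `beta`, and learning rate are illustrative assumptions, not the paper's actual hyper-parameters, and the input here is random data standing in for the fused image features.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_sparse_ae(X, hidden=8, rho=0.05, beta=0.1, lr=0.5, epochs=200):
    """Train one sparse auto-encoder layer with plain gradient descent."""
    n, d = X.shape
    W1 = rng.normal(0, 0.1, (d, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.1, (hidden, d)); b2 = np.zeros(d)
    losses = []
    for _ in range(epochs):
        H = sigmoid(X @ W1 + b1)        # encoder activations
        Xhat = sigmoid(H @ W2 + b2)     # reconstruction of the input
        rho_hat = H.mean(axis=0)        # mean activation per hidden unit
        # Reconstruction error + KL-divergence sparsity penalty.
        kl = np.sum(rho * np.log(rho / rho_hat)
                    + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))
        loss = 0.5 * np.mean(np.sum((Xhat - X) ** 2, axis=1)) + beta * kl
        losses.append(loss)
        # Backpropagation through decoder and encoder.
        dZ2 = ((Xhat - X) / n) * Xhat * (1 - Xhat)
        dW2 = H.T @ dZ2
        db2 = dZ2.sum(axis=0)
        # Hidden gradient includes the sparsity-penalty term.
        dH = dZ2 @ W2.T + beta * (-rho / rho_hat
                                  + (1 - rho) / (1 - rho_hat)) / n
        dZ1 = dH * H * (1 - H)
        dW1 = X.T @ dZ1
        db1 = dZ1.sum(axis=0)
        W1 -= lr * dW1; b1 -= lr * db1
        W2 -= lr * dW2; b2 -= lr * db2
    return losses

X = rng.random((64, 16))        # stand-in for fused feature vectors
losses = train_sparse_ae(X)
```

In a full SSAE, the hidden activations `H` of one trained layer become the input to the next layer, and a classifier on top of the final hidden layer predicts the evoked emotion.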
