Medical Image Captioning Using Optimized Deep Learning Model.

Abstract

Medical image captioning describes the visual content of medical images in natural language. It requires an efficient approach to understand and evaluate the similarity between visual and textual elements and to generate a sequence of output words. A novel show, attend, and tell model (ATM) is implemented, which applies a visual attention approach within an encoder-decoder architecture. However, the show, attend, and tell model is sensitive to its initial parameters. Therefore, the Strength Pareto Evolutionary Algorithm-II (SPEA-II) is used to optimize the initial parameters of the ATM. Finally, experiments are conducted on benchmark data sets against competitive medical image captioning techniques. Performance analysis shows that the SPEA-II-based ATM performs significantly better than the existing models.

Copyright © 2022 Arjun Singh et al.
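
To make the optimization step more concrete, the sketch below shows a simplified SPEA-II-style search over the initial parameters of an attention-based captioner. It is illustrative only and not the authors' implementation: the parameter ranges in BOUNDS (learning rate, embedding size, attention size) and the objectives in evaluate() are hypothetical stand-ins; a real run would train and score the show, attend, and tell model for each candidate instead.

```python
# Minimal sketch (not the paper's code): a simplified SPEA-II-style search over
# initial parameters of an attention-based captioning model.
import random

random.seed(0)
POP, ARCHIVE, GENS = 20, 10, 30
# Hypothetical search space: (learning rate, embedding size, attention size).
BOUNDS = [(1e-4, 1e-2), (64, 512), (64, 512)]


def evaluate(x):
    """Stand-in objectives to minimise. A real run would train the
    show-attend-and-tell captioner with these initial parameters and
    return, e.g., (1 - BLEU, validation loss)."""
    lr, emb, att = x
    return ((lr - 3e-3) ** 2 * 1e4 + 64.0 / emb, emb / 512.0 + att / 512.0)


def dominates(a, b):
    """Pareto dominance for minimisation."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))


def random_individual():
    return [random.uniform(lo, hi) for lo, hi in BOUNDS]


def mutate(x):
    """Gaussian mutation clipped to the parameter bounds."""
    return [min(hi, max(lo, xi + random.gauss(0.0, 0.1 * (hi - lo))))
            for xi, (lo, hi) in zip(x, BOUNDS)]


def spea2_fitness(objs):
    """SPEA-II fitness: raw fitness (sum of strengths of dominators) plus a
    density term based on the k-th nearest neighbour in objective space."""
    n = len(objs)
    strength = [sum(dominates(objs[i], objs[j]) for j in range(n)) for i in range(n)]
    raw = [sum(strength[j] for j in range(n) if dominates(objs[j], objs[i]))
           for i in range(n)]
    k = max(1, int(n ** 0.5))
    fitness = []
    for i in range(n):
        dists = sorted(
            sum((a - b) ** 2 for a, b in zip(objs[i], objs[j])) ** 0.5
            for j in range(n) if j != i)
        density = 1.0 / (dists[min(k, len(dists)) - 1] + 2.0)
        fitness.append(raw[i] + density)
    return fitness


def search():
    population = [random_individual() for _ in range(POP)]
    archive = []
    for _ in range(GENS):
        union = population + archive
        objs = [evaluate(x) for x in union]
        fit = spea2_fitness(objs)
        # Simplified environmental selection: keep the best-ranked individuals
        # (full SPEA-II truncates the non-dominated front by nearest-neighbour
        # distance when it overflows the archive).
        order = sorted(range(len(union)), key=lambda i: fit[i])
        archive = [union[i] for i in order[:ARCHIVE]]
        arch_fit = [fit[i] for i in order[:ARCHIVE]]
        # Binary-tournament mating selection on the archive, then mutation.
        population = []
        while len(population) < POP:
            a, b = random.sample(range(ARCHIVE), 2)
            parent = archive[a] if arch_fit[a] < arch_fit[b] else archive[b]
            population.append(mutate(parent))
    return archive


if __name__ == "__main__":
    for params in search():
        print([round(p, 4) for p in params], evaluate(params))
```

The ingredients kept from SPEA-II are the strength/raw-fitness calculation, the nearest-neighbour density estimate, and tournament selection from an elitist archive; the environmental selection is reduced to a straight fitness ranking for brevity.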
