CADxReport: Chest x-ray report generation using co-attention mechanism and reinforcement learning.

Abstract

Automated generation of radiological reports across imaging modalities is essential for streamlining clinical workflows and alleviating radiologists' workload. It requires the careful integration of image-processing techniques for medical image interpretation with language-generation techniques for report writing. This paper presents CADxReport, a co-attention and reinforcement learning based technique for generating clinically accurate reports from chest x-ray (CXR) images. CADxReport uses a VGG19 network pre-trained on the ImageNet dataset and a multi-label classifier to extract visual and semantic features from CXR images, respectively. A co-attention mechanism over both feature sets produces a context vector, which is then passed to a hierarchical LSTM (HLSTM) for radiological report generation. The model is trained with reinforcement learning to maximize CIDEr rewards. The OpenI dataset, comprising 7,470 CXRs with 3,955 associated structured radiological reports, is used for training and testing. The proposed model is able to generate clinically accurate reports from CXR images. Quantitative evaluation confirms satisfactory results with the following performance scores: BLEU-1 = 0.577, BLEU-2 = 0.478, BLEU-3 = 0.403, BLEU-4 = 0.346, ROUGE = 0.618, and CIDEr = 0.380. Evaluation with the BLEU, ROUGE, and CIDEr metrics indicates that the proposed model generates sufficiently accurate CXR reports and outperforms most state-of-the-art methods for this task.

Copyright © 2022 Elsevier Ltd. All rights reserved.
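The abstract describes a co-attention step that scores visual features (from VGG19) and semantic features (from the multi-label classifier) against the decoder state and fuses the attended results into a context vector. The paper itself does not spell out the exact formulation, so the following is only a minimal NumPy sketch of one common co-attention variant; the bilinear weight matrices `Wv` and `Ws`, the feature dimensions, and the concatenation-based fusion are all illustrative assumptions, not the authors' published equations.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

def co_attention(V, S, h, Wv, Ws):
    """Hypothetical co-attention: score each visual region and each
    semantic tag against the decoder hidden state h, take attention-
    weighted sums, and concatenate them into one context vector.
    (Illustrative only; not the exact CADxReport formulation.)"""
    alpha = softmax(V @ Wv @ h)   # attention weights over visual regions
    beta = softmax(S @ Ws @ h)    # attention weights over semantic tags
    v_att = alpha @ V             # attended visual feature, shape (d,)
    s_att = beta @ S              # attended semantic feature, shape (d,)
    return np.concatenate([v_att, s_att])  # context vector fed to the HLSTM

rng = np.random.default_rng(0)
d = 8                              # toy feature dimension
V = rng.standard_normal((49, d))   # e.g. 7x7 VGG19 conv-map regions
S = rng.standard_normal((10, d))   # embedded multi-label tag features
h = rng.standard_normal(d)         # decoder hidden state
Wv = rng.standard_normal((d, d))
Ws = rng.standard_normal((d, d))
ctx = co_attention(V, S, h, Wv, Ws)
print(ctx.shape)  # (16,)
```

The attended visual and semantic vectors could equally be fused by summation or a learned projection; concatenation is used here only to keep the sketch self-contained.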
