
Automatic lung disease classification from the chest X-ray images using hybrid deep learning algorithm.

Abstract

Chest X-ray images cost-effectively provide vital information about lung congestion and related abnormalities. We propose a novel Hybrid Deep Learning Algorithm (HDLA) framework for automatic lung disease classification from chest X-ray images. The framework consists of pre-processing of chest X-ray images, automatic feature extraction, and detection. In the pre-processing step, the goal is to improve the quality of raw chest X-ray images using a combination of optimal filters without data loss. A robust Convolutional Neural Network (CNN) built on a pre-trained model is proposed for automatic lung feature extraction. We employ a 2D CNN model for optimal feature extraction with minimal time and space requirements. The proposed 2D CNN ensures robust feature learning, producing an efficient 1D feature vector from each pre-processed input image. Because the extracted 1D features suffer from significant scale variations, we normalize them using min-max scaling. We classify the CNN features using machine learning classifiers such as AdaBoost, Support Vector Machine (SVM), Random Forest (RF), Backpropagation Neural Network (BNN), and Deep Neural Network (DNN). The experimental results show that the proposed model improves overall accuracy by 3.1% and reduces computational complexity by 16.91% compared to state-of-the-art methods.
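
The pipeline described in the abstract (filtering-based pre-processing, pre-trained 2D CNN feature extraction into a 1D vector, min-max scaling, and classical classifiers) maps onto standard libraries. The sketch below is a minimal, hypothetical illustration of those stages, assuming OpenCV, TensorFlow/Keras with a ResNet50 backbone, and scikit-learn; the specific filters, the backbone, the MLP stand-in for the BNN/DNN classifiers, and all hyperparameters are illustrative assumptions, not the authors' exact configuration.

```python
# Hypothetical sketch of the HDLA-style pipeline: pre-processing ->
# 2D CNN feature extraction -> min-max scaling -> ML classifiers.
# Filter choices, backbone, and hyperparameters are assumptions.
import numpy as np
import cv2
import tensorflow as tf
from sklearn.preprocessing import MinMaxScaler
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

IMG_SIZE = 224  # assumed input size for the pre-trained backbone


def preprocess_xray(gray_uint8: np.ndarray) -> np.ndarray:
    """Denoise and enhance one grayscale chest X-ray (assumed filters)."""
    img = cv2.medianBlur(gray_uint8, 3)                      # impulse-noise removal
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    img = clahe.apply(img)                                   # local contrast enhancement
    img = cv2.resize(img, (IMG_SIZE, IMG_SIZE))
    return np.stack([img] * 3, axis=-1).astype(np.float32)   # replicate to 3 channels


# Pre-trained 2D CNN used as a fixed feature extractor; global average
# pooling collapses the spatial feature maps into a 1D vector per image.
backbone = tf.keras.applications.ResNet50(
    weights="imagenet", include_top=False, pooling="avg",
    input_shape=(IMG_SIZE, IMG_SIZE, 3),
)


def extract_features(images_uint8) -> np.ndarray:
    batch = np.stack([preprocess_xray(im) for im in images_uint8])
    batch = tf.keras.applications.resnet50.preprocess_input(batch)
    return backbone.predict(batch, verbose=0)                # shape: (N, 2048)


if __name__ == "__main__":
    # Synthetic stand-in data; replace with real chest X-rays and labels.
    rng = np.random.default_rng(0)
    images = rng.integers(0, 256, size=(40, 256, 256), dtype=np.uint8)
    labels = rng.integers(0, 2, size=40)

    feats = extract_features(images)
    X_train, X_test, y_train, y_test = train_test_split(
        feats, labels, test_size=0.25, random_state=0)

    # Min-max scaling removes the scale variation in the 1D CNN features.
    scaler = MinMaxScaler().fit(X_train)
    X_train, X_test = scaler.transform(X_train), scaler.transform(X_test)

    classifiers = {
        "AdaBoost": AdaBoostClassifier(),
        "SVM": SVC(kernel="rbf"),
        "Random Forest": RandomForestClassifier(n_estimators=200),
        "MLP (BNN/DNN stand-in)": MLPClassifier(hidden_layer_sizes=(128, 64),
                                                max_iter=500),
    }
    for name, clf in classifiers.items():
        clf.fit(X_train, y_train)
        print(name, accuracy_score(y_test, clf.predict(X_test)))
```

Using the backbone with `pooling="avg"` is what yields the 1D feature vector per image mentioned in the abstract; the downstream classifiers then operate on these fixed-length, min-max-scaled vectors rather than on raw pixels.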
