
A transfer learning-based multimodal neural network combining metadata and multiple medical images for glaucoma type diagnosis.

Abstract

Glaucoma is an acquired optic neuropathy that can lead to irreversible vision loss. Deep learning (DL), and convolutional neural networks (CNNs) in particular, has achieved considerable success in medical image recognition, driven by the availability of large-scale annotated datasets. However, obtaining fully annotated datasets on the scale of ImageNet remains a challenge in the medical field. Meanwhile, single-modal approaches remain unreliable and inaccurate because of the diversity of glaucoma types and the complexity of their symptoms. In this paper, a new multimodal glaucoma dataset is constructed and a new multimodal neural network for glaucoma diagnosis and classification (GMNNnet) is proposed to address both issues. Specifically, the dataset includes labels for the five most important types of glaucoma, electronic medical records, and four kinds of high-resolution medical images. GMNNnet consists of three branches: branch 1, composed of convolutional, recurrent, and transposition layers, processes patient metadata; branch 2 uses a U-Net to extract features from glaucoma segmentations based on domain knowledge; and branch 3 uses a ResFormer to process glaucoma medical images directly. The outputs of branches 1 and 2 are fused and then processed by a CatBoost classifier. We introduce a gradient-weighted class activation mapping (Grad-CAM) method to increase the interpretability of the model, and a transfer learning method for the case of insufficient training data, i.e., fine-tuning CNN models pre-trained on natural image datasets for medical imaging tasks. The results show that GMNNnet better captures the high-dimensional information of glaucoma and achieves excellent performance on multimodal data. © 2023. The Author(s).
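The fusion step described in the abstract (concatenating the branch-1 metadata features with the branch-2 segmentation features, then classifying with a gradient-boosted model) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the feature dimensions and patient counts are invented, and scikit-learn's `GradientBoostingClassifier` stands in for the CatBoost classifier named in the paper.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical per-patient embeddings: a 64-dim metadata feature vector
# (branch 1) and a 128-dim segmentation-based feature vector (branch 2).
n_patients = 120
branch1_feats = rng.normal(size=(n_patients, 64))
branch2_feats = rng.normal(size=(n_patients, 128))
labels = rng.integers(0, 5, size=n_patients)  # five glaucoma types

# Fuse the two branches by simple concatenation along the feature axis.
fused = np.concatenate([branch1_feats, branch2_feats], axis=1)

X_train, X_test, y_train, y_test = train_test_split(
    fused, labels, test_size=0.25, random_state=0)

# Stand-in for the CatBoost classifier used in the paper.
clf = GradientBoostingClassifier(n_estimators=20, random_state=0)
clf.fit(X_train, y_train)
preds = clf.predict(X_test)
```

On real data the two embeddings would come from the trained branch networks rather than a random generator, and CatBoost would replace the scikit-learn model, but the fuse-then-classify shape of the pipeline is the same.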
