A generative adversarial deep neural network to translate between ocular imaging modalities while maintaining anatomical fidelity.

Abstract

Certain ocular imaging procedures, such as fluorescein angiography (FA), are invasive and carry a risk of adverse side effects, while others, such as funduscopy, are non-invasive and safe for the patient. However, effective diagnosis of ophthalmic conditions requires multiple modalities of data, and hence a potential need for invasive procedures. In this study, we propose a novel conditional generative adversarial network (GAN) capable of synthesizing FA images from fundus photographs while simultaneously predicting retinal degeneration. The proposed system addresses the problem of imaging retinal vasculature in a non-invasive manner while utilizing the cross-modality images to predict the existence of retinal abnormalities. One of the major contributions of the proposed work is the introduction of a semi-supervised approach to training the network, which mitigates the data dependency from which traditional deep learning architectures suffer. Our experiments confirm that the proposed architecture outperforms state-of-the-art generative networks for image synthesis across imaging modalities. In particular, we show a statistically significant difference (p < .0001) between our method and the state of the art in the structural accuracy of the translated images. Moreover, our results confirm that the proposed vision transformers generalize well to out-of-distribution datasets for retinal disease prediction, a problem faced by many traditional deep networks.
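The abstract describes a conditional GAN that translates fundus photographs to FA images while preserving vascular structure. As a rough illustration of how such an objective is typically formed (the paper's exact losses and weighting are not given here, so the functions, the sigmoid parameterization, and the L1 weight `lam` below are assumptions in the style of pix2pix-like translation models), the discriminator and generator losses can be sketched as:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def d_loss(d_real_logits, d_fake_logits):
    # Discriminator objective: score real (fundus, FA) pairs toward 1
    # and fake (fundus, G(fundus)) pairs toward 0.
    eps = 1e-12  # numerical guard for log
    return -np.mean(np.log(sigmoid(d_real_logits) + eps)
                    + np.log(1.0 - sigmoid(d_fake_logits) + eps))

def g_loss(d_fake_logits, fake_fa, real_fa, lam=100.0):
    # Generator objective: fool the discriminator (adversarial term)
    # plus an L1 reconstruction term that encourages anatomical fidelity
    # of the synthesized FA image. lam is an illustrative weight.
    eps = 1e-12
    adv = -np.mean(np.log(sigmoid(d_fake_logits) + eps))
    l1 = np.mean(np.abs(fake_fa - real_fa))
    return adv + lam * l1
```

A perfect generator drives the L1 term to zero while producing outputs the discriminator scores as real; the weighted L1 term is what penalizes structural drift in the translated vasculature.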
