LAGAN: Lesion-Aware Generative Adversarial Networks for Edema Area Segmentation in SD-OCT Images.

Abstract

A large volume of labeled data is a cornerstone for deep learning (DL) based segmentation methods. Medical images require domain experts to annotate, and full segmentation annotations for large volumes of medical data are difficult, if not impossible, to acquire in practice. Compared with full annotations, image-level labels are multiple orders of magnitude faster and easier to obtain. Image-level labels contain rich information that correlates with the underlying segmentation tasks and should be utilized in modeling segmentation problems. In this paper, we aim to build a robust DL-based lesion segmentation model using only image-level labels (normal vs. abnormal). Our method consists of three main steps: (1) training an image classifier with image-level labels; (2) utilizing a model visualization tool to generate an object heat map for each training sample according to the trained classifier; (3) based on the generated heat maps (as pseudo-annotations) and an adversarial learning framework, constructing and training an image generator for Edema Area Segmentation (EAS). We name the proposed method Lesion-Aware Generative Adversarial Networks (LAGAN) as it combines the merits of supervised learning (being lesion-aware) and adversarial training (for image generation). Additional technical treatments, such as the design of a multi-scale patch-based discriminator, further enhance the effectiveness of our proposed method. We validate the superior performance of LAGAN via comprehensive experiments on two publicly available datasets (i.e., AI Challenger and RETOUCH). Our code is available at https://github.com/dt-yuhui/LAGAN.
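The three-step pipeline described in the abstract can be sketched in a few dozen lines. The PyTorch code below is only an illustrative rendering, not the authors' implementation (see the linked repository for the official code): the ResNet-18 classifier, CAM-style heat maps, the toy convolutional generator, and the two-scale PatchGAN discriminator are all assumptions made for the sake of a self-contained example.

```python
# Illustrative sketch of a LAGAN-style pipeline: classifier -> heat-map
# pseudo-annotations -> adversarially trained segmentation generator.
# All architecture choices here are assumptions, not the paper's design.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models

# Step 1: image classifier trained with image-level labels (normal vs. abnormal).
classifier = models.resnet18(num_classes=2)
classifier.conv1 = nn.Conv2d(1, 64, 7, 2, 3, bias=False)  # single-channel SD-OCT input

def train_classifier_step(images, labels, optimizer):
    """One supervised step using only image-level labels."""
    loss = F.cross_entropy(classifier(images), labels)
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()

# Step 2: CAM-style heat maps from the trained classifier, used as pseudo-annotations.
def cam_heatmap(images):
    """Class activation map for the 'abnormal' class, normalized to [0, 1]."""
    feats = nn.Sequential(*list(classifier.children())[:-2])(images)  # B x 512 x h x w
    weights = classifier.fc.weight[1]                                  # abnormal-class weights
    cam = F.relu(torch.einsum("c,bchw->bhw", weights, feats)).unsqueeze(1)
    cam = F.interpolate(cam, size=images.shape[-2:], mode="bilinear", align_corners=False)
    cmin = cam.amin(dim=(2, 3), keepdim=True)
    cmax = cam.amax(dim=(2, 3), keepdim=True)
    return (cam - cmin) / (cmax - cmin + 1e-8)                         # B x 1 x H x W

# Step 3: adversarial training of a segmentation generator against
# multi-scale patch-based discriminators.
class PatchDiscriminator(nn.Module):
    """PatchGAN-style discriminator on concatenated (image, mask) pairs."""
    def __init__(self, in_ch=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(128, 1, 4, 1, 1),
        )
    def forward(self, x):
        return self.net(x)

generator = nn.Sequential(  # stand-in for the actual segmentation generator
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid(),
)
discs = nn.ModuleList([PatchDiscriminator(), PatchDiscriminator()])

def multiscale_d_loss(images, masks, real):
    """Average patch-level BCE loss over two input scales."""
    target_fn = torch.ones_like if real else torch.zeros_like
    loss = 0.0
    for i, d in enumerate(discs):
        x = F.avg_pool2d(torch.cat([images, masks], dim=1), kernel_size=2 ** i)
        out = d(x)
        loss = loss + F.binary_cross_entropy_with_logits(out, target_fn(out))
    return loss / len(discs)

def train_gan_step(images, g_opt, d_opt):
    with torch.no_grad():
        pseudo = cam_heatmap(images)               # heat maps as pseudo-annotations
    fake = generator(images)
    # Discriminator step: pseudo-annotated pairs are "real", generated pairs are "fake".
    d_loss = multiscale_d_loss(images, pseudo, True) + \
             multiscale_d_loss(images, fake.detach(), False)
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()
    # Generator step: fool the discriminators while staying close to the pseudo-annotations.
    g_loss = multiscale_d_loss(images, fake, True) + F.l1_loss(fake, pseudo)
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
    return d_loss.item(), g_loss.item()
```

In this sketch the classifier would be trained first with `train_classifier_step`, then frozen while `train_gan_step` alternates discriminator and generator updates using separate optimizers for `generator` and `discs`.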
