Autoencoder based self-supervised test-time adaptation for medical image analysis.

Abstract

Deep neural networks have been successfully applied to medical image analysis tasks such as segmentation and synthesis. However, even when a network is trained on a large source-domain dataset, its performance on unseen test domains is not guaranteed. This performance drop on data acquired differently from the network's training data, known as domain shift, is a major obstacle to deploying deep learning in clinical practice. Existing work focuses on retraining the model with data from the test domain, or on harmonizing the test domain's data to match the training data. A common practice is to distribute a carefully trained model to multiple users (e.g., clinical centers), each of whom applies the model to their own data, which may exhibit a domain shift (e.g., from varying imaging parameters and machines). However, the unavailability of the source training data and the cost of training a new model often prevent the use of known methods to resolve user-specific domain shifts. Here, we ask whether we can design a model that, once distributed to users, can quickly adapt itself to each new site without expensive retraining or access to the source training data. In this paper, we propose a model that adapts to a single test subject during inference. The model consists of three parts, all neural networks: a task model (T), which performs the image analysis task such as segmentation; a set of autoencoders (AEs); and a set of adaptors (As). The task model and autoencoders are trained on the source dataset, which can be computationally expensive. In the deployment stage, the adaptors are trained to transform the test image and its features so as to minimize the domain shift, as measured by the autoencoders' reconstruction loss. Only the adaptors are optimized at test time, on a single test subject, which makes the procedure computationally efficient. The method was validated on both retinal optical coherence tomography (OCT) image segmentation and magnetic resonance imaging (MRI) T1-weighted to T2-weighted image synthesis. With a short optimization time for the adaptors (10 iterations on a single test subject) and modest additional disk space for the autoencoders (around 15 MB), our method achieves significant performance improvement. Our code is publicly available at: https://github.com/YufanHe/self-domain-adapted-network

Copyright © 2021 Elsevier B.V. All rights reserved.
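The abstract describes the deployment-stage loop concretely enough to sketch. Below is a minimal PyTorch sketch of that idea, not the authors' implementation: the `Adaptor` architecture, function names, and hyperparameters (other than the 10 iterations mentioned above) are illustrative assumptions; the actual code is in the linked repository.

```python
# Minimal sketch of autoencoder-based test-time adaptation, assuming a frozen
# source-trained task model and autoencoder. Module names and shapes are
# hypothetical; see https://github.com/YufanHe/self-domain-adapted-network
# for the authors' implementation.

import torch
import torch.nn as nn

class Adaptor(nn.Module):
    """A small learnable transform applied to the test input (the paper also
    adapts intermediate features; this sketch adapts only the image)."""
    def __init__(self, channels: int):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, x):
        return self.conv(x)

def test_time_adapt(task_model, autoencoder, adaptor, test_image,
                    n_iters: int = 10, lr: float = 1e-3):
    """Optimize only the adaptor on a single test subject, then run the task.

    task_model and autoencoder stay frozen; the adaptor is trained so the
    adapted image looks in-domain to the source-trained autoencoder."""
    task_model.eval()
    autoencoder.eval()
    for p in task_model.parameters():
        p.requires_grad_(False)
    for p in autoencoder.parameters():
        p.requires_grad_(False)

    optimizer = torch.optim.Adam(adaptor.parameters(), lr=lr)

    for _ in range(n_iters):  # the paper reports ~10 iterations suffice
        optimizer.zero_grad()
        adapted = adaptor(test_image)
        # A source-trained AE reconstructs in-domain inputs well, so its
        # reconstruction error acts as a proxy measure of domain shift.
        loss = ((autoencoder(adapted) - adapted) ** 2).mean()
        loss.backward()
        optimizer.step()

    with torch.no_grad():
        return task_model(adaptor(test_image))
```

Because only the small adaptor is updated while T and the AEs remain fixed, per-subject adaptation is cheap and needs neither the source training data nor a full retraining run.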
