Training Deep Network Ultrasound Beamformers with Unlabeled In Vivo Data.

Abstract

Conventional delay-and-sum (DAS) beamforming is highly efficient but suffers from various sources of image degradation. Several adaptive beamformers have been proposed to address this problem, including, more recently, deep learning methods. With deep learning, adaptive beamforming is typically framed as a regression problem, where clean ground-truth physical information is used for training. Because it is difficult to know ground-truth information in vivo, training data are usually simulated. However, deep networks trained on simulations can produce suboptimal in vivo image quality because of a domain shift between simulated and in vivo data. In this work, we propose a novel domain adaptation (DA) scheme that corrects for domain shift by incorporating unlabeled in vivo data during training. Unlike classification tasks, for which both input domains map to the same target domain, a challenge in our regression-based beamforming scenario is that domain shift exists in both the input and target data. To solve this problem, we leverage cycle-consistent generative adversarial networks to map between simulated and in vivo data in both the input and ground-truth target domains. Additionally, to account for separate as well as shared features between simulations and in vivo data, we use augmented feature mapping to train domain-specific beamformers. Using various types of training data, we explore the limitations and underlying functionality of the proposed DA approach, and we compare it to several other adaptive beamformers. The proposed DA deep neural network (DNN) beamformer achieves consistent in vivo image quality improvements over established techniques.
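To make the training scheme described above more concrete, the following is a minimal sketch of how unlabeled in vivo data can enter a regression-based beamformer objective through cycle-consistent mappings and augmented feature mapping. All tensor shapes, network sizes, loss weights, and the names `G_s2v`, `G_v2s`, and `augment` are illustrative assumptions, not the authors' actual configuration; discriminator/adversarial terms and the target-domain mapping are omitted for brevity.

```python
import torch
import torch.nn as nn

N_CH = 64  # hypothetical number of aperture channels per input vector


def mlp(in_dim, out_dim):
    """Small fully connected stand-in for the generator / beamformer networks."""
    return nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(), nn.Linear(256, out_dim))


# CycleGAN-style generators mapping between the simulated (S) and in vivo (V)
# input domains; the paper also maps the ground-truth target domain, which this
# sketch leaves out.
G_s2v = mlp(N_CH, N_CH)  # simulated channel data -> in vivo-like
G_v2s = mlp(N_CH, N_CH)  # in vivo channel data  -> simulation-like


def augment(x, domain):
    """Augmented feature mapping: [shared, sim-specific, vivo-specific] copies,
    so the beamformer can learn shared and domain-specific weights."""
    zeros = torch.zeros_like(x)
    if domain == "sim":
        return torch.cat([x, x, zeros], dim=-1)
    return torch.cat([x, zeros, x], dim=-1)


beamformer = mlp(3 * N_CH, 1)  # regresses one output sample per depth position

# Toy batches: labeled simulated data and unlabeled in vivo data.
x_sim = torch.randn(32, N_CH)  # simulated channel vectors
y_sim = torch.randn(32, 1)     # ground-truth targets (known only in simulation)
x_viv = torch.randn(32, N_CH)  # in vivo channel vectors (no labels)

mse = nn.MSELoss()

# Cycle consistency: mapping to the other domain and back should recover the input.
loss_cycle = mse(G_v2s(G_s2v(x_sim)), x_sim) + mse(G_s2v(G_v2s(x_viv)), x_viv)

# Supervised regression on simulated data, plus on simulated data translated into
# the in vivo input domain (here reusing the simulated ground truth as the target).
loss_reg = mse(beamformer(augment(x_sim, "sim")), y_sim) \
         + mse(beamformer(augment(G_s2v(x_sim), "vivo")), y_sim)

loss = loss_reg + 10.0 * loss_cycle  # illustrative loss weighting
loss.backward()
```

In this sketch, the unlabeled in vivo batch contributes only through the cycle-consistency term, while the translated simulated data expose the beamformer to in vivo-like inputs without requiring in vivo labels.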
