HCTNet: A hybrid CNN-transformer network for breast ultrasound image segmentation.

Abstract

Automatic breast ultrasound image segmentation helps radiologists improve the accuracy of breast cancer diagnosis. In recent years, convolutional neural networks (CNNs) have achieved great success in medical image analysis. However, CNNs are limited in modeling long-range relations, which is unfavorable for ultrasound images with speckle noise and shadows and reduces the accuracy of breast lesion segmentation. Transformers can capture sufficient global information, but they are deficient in acquiring local details and need to be pre-trained on large-scale datasets. In this paper, we propose a Hybrid CNN-Transformer Network (HCTNet) for boosting breast lesion segmentation in ultrasound images. In the encoder of HCTNet, Transformer Encoder Blocks (TEBlocks), designed to learn global contextual information, are combined with CNNs to extract features. In the decoder of HCTNet, a Spatial-wise Cross Attention (SCA) module is developed based on the spatial attention mechanism, which reduces the semantic discrepancy with the encoder. Moreover, residual connections are used between decoder blocks to make the generated features more discriminative by aggregating contextual feature maps at different semantic scales. Extensive experiments on three public breast ultrasound datasets demonstrate that HCTNet outperforms other medical image segmentation methods and recent semantic segmentation methods on breast ultrasound lesion segmentation.

Copyright © 2023 Elsevier Ltd. All rights reserved.
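The abstract does not give implementation details of the SCA module, but the underlying idea of spatial attention applied across encoder and decoder features can be sketched as follows. This is a minimal, hypothetical NumPy illustration, not the authors' implementation: a spatial attention map is derived from the decoder feature (here via channel-wise average and max pooling, with a simple sum standing in for the learned convolution a real module would use) and used to reweight the encoder skip feature before fusion.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def spatial_cross_attention(enc_feat, dec_feat):
    """Hypothetical sketch of spatial-wise cross attention.

    A spatial attention map computed from the decoder feature reweights
    the encoder (skip) feature before the two are fused, which is one
    common way to reduce the semantic gap between encoder and decoder.
    Both features have shape (C, H, W).
    """
    # Channel-pooled spatial descriptors of the decoder feature
    avg_pool = dec_feat.mean(axis=0, keepdims=True)   # (1, H, W)
    max_pool = dec_feat.max(axis=0, keepdims=True)    # (1, H, W)
    # A learned conv would normally mix these; a plain sum stands in here
    attn = sigmoid(avg_pool + max_pool)               # values in (0, 1)
    # Spatially reweight the encoder feature, then fuse with the decoder
    return enc_feat * attn + dec_feat

# Toy example: 16-channel features on an 8x8 grid
enc = np.random.rand(16, 8, 8)
dec = np.random.rand(16, 8, 8)
out = spatial_cross_attention(enc, dec)
print(out.shape)  # (16, 8, 8)
```

The attention map broadcasts over channels, so only spatial positions are reweighted; in the actual SCA module the pooling and mixing would be learned end-to-end.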
