DesTrans: A medical image fusion method based on Transformer and improved DenseNet.

Abstract

Medical image fusion can provide doctors with more detailed data and thus improve the accuracy of disease diagnosis. In recent years, deep learning has been widely used in the field of medical image fusion. Traditional medical image fusion methods operate at the pixel level, for example by superimposing images. The introduction of deep learning has improved the effectiveness of medical image fusion, but existing methods still suffer from problems such as edge blurring and information redundancy. In this paper, we propose a deep learning network model that integrates a Transformer with an improved DenseNet module; it can be applied to medical images, addresses the problems above, and also transfers to natural images. The combination of the Transformer and dense connections strengthens the method's feature extraction and limits feature loss, which reduces the risk of edge blurring. We compared the proposed method with several representative traditional methods and more recent deep learning methods. The experimental results show that the Transformer and the improved DenseNet module have strong feature extraction capability, and the method yields good results in terms of both visual quality and objective image evaluation metrics.
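To make the architecture described in the abstract more concrete, the sketch below (PyTorch) shows one plausible way to combine a DenseNet-style block for local feature extraction with a Transformer encoder for global context, then fuse two pre-registered source images. This is a minimal illustration, not the authors' DesTrans implementation: the layer sizes, the 1x1 projection, the channel-concatenation fusion rule, and the class names are all assumptions.

```python
# Minimal sketch: dense local features + Transformer global context, two-source fusion.
# All design choices here are assumptions for illustration, not the paper's architecture.
import torch
import torch.nn as nn


class DenseBlock(nn.Module):
    """DenseNet-style block: each conv sees the concatenation of all earlier features."""
    def __init__(self, in_ch, growth=16, layers=3):
        super().__init__()
        self.convs = nn.ModuleList()
        ch = in_ch
        for _ in range(layers):
            self.convs.append(nn.Sequential(
                nn.Conv2d(ch, growth, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
            ))
            ch += growth
        self.out_ch = ch  # input channels plus growth per layer

    def forward(self, x):
        feats = [x]
        for conv in self.convs:
            feats.append(conv(torch.cat(feats, dim=1)))
        return torch.cat(feats, dim=1)


class FusionSketch(nn.Module):
    """Hypothetical encoder-fusion-decoder pipeline for two grayscale source images."""
    def __init__(self, d_model=64, heads=4):
        super().__init__()
        self.encoder = DenseBlock(in_ch=1)
        # 1x1 projection so the token width is divisible by the number of attention heads.
        self.project = nn.Conv2d(self.encoder.out_ch, d_model, kernel_size=1)
        self.transformer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=heads, batch_first=True)
        self.decoder = nn.Sequential(
            nn.Conv2d(2 * d_model, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 1, 3, padding=1), nn.Sigmoid())

    def _features(self, img):
        f = self.project(self.encoder(img))          # local dense features, projected
        b, c, h, w = f.shape
        tokens = f.flatten(2).transpose(1, 2)         # (B, H*W, C) pixel tokens
        f = self.transformer(tokens)                  # global self-attention over tokens
        return f.transpose(1, 2).reshape(b, c, h, w)

    def forward(self, img_a, img_b):
        fa = self._features(img_a)                    # features of source A
        fb = self._features(img_b)                    # features of source B
        return self.decoder(torch.cat([fa, fb], dim=1))  # fused single-channel image


if __name__ == "__main__":
    mri = torch.rand(1, 1, 64, 64)   # toy pre-registered source images
    ct = torch.rand(1, 1, 64, 64)
    fused = FusionSketch()(mri, ct)
    print(fused.shape)               # torch.Size([1, 1, 64, 64])
```

The dense concatenation keeps every intermediate feature map available to later layers, which is one way to limit the feature loss the abstract refers to; the Transformer layer adds long-range context that plain convolutions lack. How DesTrans actually combines and decodes these features is described in the paper itself.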
