Multi-level spatial-temporal and attentional information deep fusion network for retinal vessel segmentation.

Abstract

Automatic segmentation of fundus vessels has the potential to enhance the judgment ability of intelligent disease diagnosis systems. Although various methods have been proposed, accurately segmenting the fundus vessels remains a demanding task. The purpose of our study is to develop a robust and effective method to segment the vessels in human color retinal fundus images.

We present a novel multi-level spatial-temporal and attentional information deep fusion network for the segmentation of retinal vessels, called MSAFNet, which enhances segmentation performance and robustness. Our method uses a Multi-level Spatial-temporal Encoding module to obtain spatial-temporal information and a Self-Attention module to capture feature correlations at different levels of the network. Built on an encoder-decoder structure, the network fuses these features to produce the final segmentation results.

In extensive experiments on four public datasets, our method achieves favorable performance compared with other state-of-the-art retinal vessel segmentation methods. Our accuracy (ACC) reaches the highest scores of 96.96%, 96.57%, and 96.48%, and our area under the curve (AUC) reaches 98.78%, 98.54%, and 98.27%, on the DRIVE, CHASE DB1, and HRF datasets, respectively. Our specificity (SP) achieves the highest scores of 98.58% and 99.08% on the DRIVE and STARE datasets.

The experimental results demonstrate that our method has strong learning and representation capabilities and can accurately detect retinal blood vessels, thereby serving as a potential tool for assisting in diagnosis. © 2023 Institute of Physics and Engineering in Medicine.
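The abstract does not give implementation details of the Self-Attention module, but the general idea it names, re-weighting features at each spatial position by their correlation with every other position, can be illustrated with a minimal scaled dot-product self-attention sketch. This is a generic illustration in NumPy, not the authors' code; the feature shapes, projection matrices, and function names are assumptions for the example.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(feat, w_q, w_k, w_v):
    """Generic scaled dot-product self-attention over a flattened feature map.

    feat: (n, d) array -- n spatial positions, d channels.
    w_q, w_k, w_v: (d, d) projection matrices (learned in a real network;
    random here purely for illustration).
    """
    q, k, v = feat @ w_q, feat @ w_k, feat @ w_v
    scores = q @ k.T / np.sqrt(feat.shape[1])  # (n, n) pairwise affinities
    attn = softmax(scores, axis=-1)            # each row sums to 1
    return attn @ v                            # correlation-weighted features, (n, d)

rng = np.random.default_rng(0)
n, d = 16, 8                                   # e.g. a 4x4 feature map with 8 channels
feat = rng.standard_normal((n, d))
w_q, w_k, w_v = (rng.standard_normal((d, d)) * 0.1 for _ in range(3))
out = self_attention(feat, w_q, w_k, w_v)
print(out.shape)                               # same shape as the input features
```

In a multi-level design like the one described, such a module would be applied to feature maps at several encoder or decoder stages, and the re-weighted features fused before the final segmentation head.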
