Structure attention co-training neural network for neovascularization segmentation in intravascular optical coherence tomography.

Abstract

To develop and validate a neovascularization (NV) segmentation model for intravascular optical coherence tomography (IVOCT) using deep learning methods.

A total of 1950 2D slices from 70 IVOCT pullbacks were used in this study. We randomly selected 1273 2D slices from 44 patients as the training set, 379 2D slices from 11 patients as the validation set, and 298 2D slices from the remaining 15 patients as the testing set. Automatic NV segmentation is challenging because it must contend with speckle noise, shadow artifacts, high distribution variation, and related issues. To meet these challenges, a new deep learning-based segmentation method was developed around a co-training architecture with an integrated structural attention mechanism. Co-training is designed to exploit the features of three consecutive slices. The structural attention mechanism comprises spatial and channel attention modules and is integrated into the co-training architecture at each up-sampling step. A cascaded fixed network is further incorporated to achieve image-level segmentation in a coarse-to-fine manner.

Extensive experiments were performed, including a comparison with several state-of-the-art deep learning-based segmentation methods. The consistency of the results with manual segmentation was also investigated. Our proposed automatic NV segmentation method achieved the highest correlation with manual delineation by interventional cardiologists (Pearson correlation coefficient of 0.825).

In this work, we proposed a co-training architecture with an integrated structural attention mechanism to segment NV in IVOCT images. The good agreement between our segmentation results and manual segmentation indicates that the proposed method has great potential for application in the clinical investigation of NV-related plaque diagnosis and treatment. This article is protected by copyright. All rights reserved.
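The abstract does not give implementation details, but a minimal PyTorch sketch of a structural attention block of this general kind (channel attention followed by spatial attention, applied to a decoder feature map after an up-sampling step) might look as follows. The module names, the reduction ratio, and the 7x7 spatial kernel are assumptions for illustration, not details from the paper.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Channel attention via global pooling + bottleneck MLP (assumed design)."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        # Re-weight each channel by its learned importance.
        return x * self.mlp(self.pool(x))

class SpatialAttention(nn.Module):
    """Spatial attention from channel-wise average/max maps (assumed design)."""
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        avg_map = x.mean(dim=1, keepdim=True)   # (B, 1, H, W)
        max_map = x.amax(dim=1, keepdim=True)   # (B, 1, H, W)
        attn = self.sigmoid(self.conv(torch.cat([avg_map, max_map], dim=1)))
        return x * attn

class StructuralAttention(nn.Module):
    """Channel then spatial attention, applied after each up-sampling step."""
    def __init__(self, channels):
        super().__init__()
        self.channel = ChannelAttention(channels)
        self.spatial = SpatialAttention()

    def forward(self, x):
        return self.spatial(self.channel(x))

# Example: refine a hypothetical decoder feature map.
feat = torch.randn(2, 64, 128, 128)          # (batch, channels, H, W)
refined = StructuralAttention(64)(feat)
print(refined.shape)                          # torch.Size([2, 64, 128, 128])
```

Similarly, the reported agreement with manual delineation (Pearson r = 0.825) can be reproduced from paired per-slice measurements with a standard correlation test; the example values below are placeholders, not data from the study:

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical per-slice NV areas from automatic vs. manual segmentation.
auto_areas = np.array([0.12, 0.30, 0.05, 0.21])
manual_areas = np.array([0.10, 0.33, 0.06, 0.19])

r, p_value = pearsonr(auto_areas, manual_areas)
print(f"Pearson r = {r:.3f} (p = {p_value:.3g})")
```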
