
A 3D/2D Hybrid U-Net CNN approach to prostate organ segmentation of mpMRI.


Abstract

Background: Prostate cancer is the most commonly diagnosed cancer in men in the U.S., with over 200,000 new cases in 2018. Multiparametric magnetic resonance imaging (mpMRI) is increasingly used for prostate cancer evaluation, and prostate organ segmentation is an essential step in surgical planning for prostate fusion biopsies. Deep learning convolutional neural networks (CNNs) are the predominant machine learning method for medical image recognition. In this study, we describe a deep learning approach (a subset of artificial intelligence) for automatic localization and segmentation of the prostate from mpMRI.

Materials and Methods: This retrospective study included patients who underwent a prostate MRI and an ultrasound-MRI fusion transrectal biopsy between September 2014 and December 2016. Axial T2-weighted images were manually segmented by two abdominal radiologists, and these segmentations served as the ground truth. They were used to train a customized hybrid 3D/2D U-Net CNN architecture in a five-fold cross-validation paradigm for neural network training and validation. The Dice score, a measure of overlap between the manual and automatically derived segmentations, and the Pearson linear correlation of prostate volume were used for statistical evaluation.

Results: The CNN was trained on 287 patients, yielding 299 MRIs (7,774 images in total). The customized hybrid 3D/2D U-Net achieved a Dice score of 0.898 (0.890–0.908) and a Pearson correlation coefficient for volume of 0.974.

Conclusion: A deep learning CNN can automatically segment the prostate organ from clinical MR images. Further studies should examine the development of pattern recognition for lesion localization and quantification.
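The abstract does not detail the customized hybrid 3D/2D U-Net, so the sketch below is only an illustration of the general idea in PyTorch: a U-Net over an axial T2 slice stack that uses in-plane (effectively 2D) convolutions near full resolution and full 3D convolutions deeper in the network, with in-plane-only pooling so the thin slice dimension is preserved. All layer counts, channel widths, and kernel choices here are assumptions, not the authors' configuration.

```python
# Minimal hybrid 3D/2D U-Net sketch in PyTorch (illustrative, not the paper's
# exact architecture). "Hybrid" here means mixing in-plane (1x3x3) and full 3D
# (3x3x3) convolutions, with pooling only in the in-plane dimensions.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch, through_plane=False):
    """Two 3D convolutions; kernels span neighboring slices only when through_plane=True."""
    k = (3, 3, 3) if through_plane else (1, 3, 3)   # (depth, height, width)
    p = (1, 1, 1) if through_plane else (0, 1, 1)
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, k, padding=p),
        nn.BatchNorm3d(out_ch), nn.ReLU(inplace=True),
        nn.Conv3d(out_ch, out_ch, k, padding=p),
        nn.BatchNorm3d(out_ch), nn.ReLU(inplace=True),
    )


class HybridUNet(nn.Module):
    def __init__(self, in_ch=1, base=16):
        super().__init__()
        # Encoder: 2D-style blocks near full resolution, 3D context deeper down.
        self.enc1 = conv_block(in_ch, base, through_plane=False)
        self.enc2 = conv_block(base, base * 2, through_plane=False)
        self.enc3 = conv_block(base * 2, base * 4, through_plane=True)
        self.bottom = conv_block(base * 4, base * 8, through_plane=True)
        # Downsample in-plane only, keeping the slice dimension intact.
        self.pool = nn.MaxPool3d(kernel_size=(1, 2, 2))
        # Decoder with skip connections, mirroring the encoder.
        self.up3 = nn.ConvTranspose3d(base * 8, base * 4, (1, 2, 2), stride=(1, 2, 2))
        self.dec3 = conv_block(base * 8, base * 4, through_plane=True)
        self.up2 = nn.ConvTranspose3d(base * 4, base * 2, (1, 2, 2), stride=(1, 2, 2))
        self.dec2 = conv_block(base * 4, base * 2, through_plane=False)
        self.up1 = nn.ConvTranspose3d(base * 2, base, (1, 2, 2), stride=(1, 2, 2))
        self.dec1 = conv_block(base * 2, base, through_plane=False)
        self.head = nn.Conv3d(base, 1, kernel_size=1)   # per-voxel prostate logit

    def forward(self, x):                     # x: (batch, 1, slices, H, W)
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        e3 = self.enc3(self.pool(e2))
        b = self.bottom(self.pool(e3))
        d3 = self.dec3(torch.cat([self.up3(b), e3], dim=1))
        d2 = self.dec2(torch.cat([self.up2(d3), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)                  # apply sigmoid + threshold downstream


if __name__ == "__main__":
    net = HybridUNet()
    dummy = torch.randn(1, 1, 24, 128, 128)   # 24 axial T2 slices, 128x128 in-plane
    print(net(dummy).shape)                   # -> torch.Size([1, 1, 24, 128, 128])
```

One design point this sketch tries to capture: prostate mpMRI slices are typically much thicker than the in-plane resolution, so restricting pooling and the shallow convolutions to the in-plane axes avoids collapsing the small number of slices while still letting the deeper layers use 3D context.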
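As a concrete illustration of the evaluation the abstract describes, the snippet below computes a per-case Dice score between two binary masks and the Pearson correlation of prostate volumes across cases, using NumPy and SciPy. The helper names (dice_score, mask_volume_ml, voxel_volume_ml) and the toy sphere data are illustrative assumptions, not from the paper.

```python
# Dice overlap and Pearson volume correlation, the two metrics named in the abstract.
import numpy as np
from scipy.stats import pearsonr


def dice_score(pred_mask: np.ndarray, gt_mask: np.ndarray) -> float:
    """Dice = 2|A intersect B| / (|A| + |B|) for two binary masks."""
    pred, gt = pred_mask.astype(bool), gt_mask.astype(bool)
    denom = pred.sum() + gt.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, gt).sum() / denom


def mask_volume_ml(mask: np.ndarray, voxel_volume_ml: float) -> float:
    """Volume = foreground voxel count times the volume of a single voxel."""
    return float(mask.astype(bool).sum()) * voxel_volume_ml


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    voxel_volume_ml = 0.0015  # e.g. 0.5 x 0.5 x 6 mm voxels (illustrative)

    # Five toy cases: "manual" ellipsoids of varying size plus a noisy
    # "automatic" copy of each, just to exercise the metrics end to end.
    dices, auto_vols, manual_vols = [], [], []
    z, y, x = np.ogrid[:24, :128, :128]
    for radius in (20, 25, 30, 35, 40):
        gt = (y - 64) ** 2 + (x - 64) ** 2 + ((z - 12) * 5) ** 2 < radius ** 2
        pred = np.logical_xor(gt, rng.random(gt.shape) < 0.01)  # flip ~1% of voxels
        dices.append(dice_score(pred, gt))
        auto_vols.append(mask_volume_ml(pred, voxel_volume_ml))
        manual_vols.append(mask_volume_ml(gt, voxel_volume_ml))

    r, _ = pearsonr(auto_vols, manual_vols)
    print(f"mean Dice = {np.mean(dices):.3f}, Pearson r for volume = {r:.3f}")
```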
