Evaluating body composition by combining quantitative spectral detector computed tomography and deep learning-based image segmentation.

Abstract

The aim of this study was to develop and evaluate a software toolkit that allows for fully automated body composition analysis in contrast-enhanced abdominal computed tomography, leveraging the strengths of both quantitative information from dual-energy computed tomography and simple detection and segmentation tasks performed by deep convolutional neural networks (DCNN).
Both public and private datasets were used to train and validate the DCNNs. A combination of two DCNNs and quantitative thresholding was used to assign axial CT slices to the abdominal region, to classify voxels as fat or muscle, and to differentiate between subcutaneous and visceral fat. For validation, patients undergoing repeat examinations (within ±21 days) and patients who underwent concurrent bioelectrical impedance analysis (BIA) were analyzed. The concordance correlation coefficient (CCC), linear regression, and Bland-Altman analysis were used as statistical tests.
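
The abstract does not give implementation details, but the interplay of quantitative thresholding and DCNN output can be sketched as follows. This is a minimal illustration, not the authors' code: the Hounsfield unit ranges for fat and muscle are commonly used literature values rather than the paper's spectral-CT-derived thresholds, and `subcutaneous_mask` is a hypothetical stand-in for the mask predicted by the segmentation DCNN.

```python
import numpy as np

# Commonly cited Hounsfield unit ranges for body composition analysis;
# the paper's exact (spectral-CT-derived) thresholds are not given in the
# abstract, so these values are illustrative only.
FAT_HU = (-190, -30)
MUSCLE_HU = (-29, 150)

def classify_voxels(ct_slice_hu: np.ndarray, subcutaneous_mask: np.ndarray):
    """Split an axial CT slice (in HU) into muscle, subcutaneous fat and
    visceral fat. `subcutaneous_mask` is assumed to be a boolean mask
    predicted by the segmentation DCNN for the subcutaneous compartment."""
    fat = (ct_slice_hu >= FAT_HU[0]) & (ct_slice_hu <= FAT_HU[1])
    muscle = (ct_slice_hu >= MUSCLE_HU[0]) & (ct_slice_hu <= MUSCLE_HU[1])
    subcutaneous_fat = fat & subcutaneous_mask
    visceral_fat = fat & ~subcutaneous_mask
    return muscle, subcutaneous_fat, visceral_fat
```

Per-compartment areas would then be accumulated over the abdominal slices selected by the slice classifier.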
Results provided by the algorithm toolkit were visually validated. The automated classifier extracted the slices of interest from full body scans with an accuracy of 98.7%. DCNN-based segmentation of subcutaneous fat reached a mean Dice similarity coefficient of 0.95. CCCs were 0.99 for both muscle and subcutaneous fat and 0.98 for visceral fat in patients undergoing repeat examinations (n = 48). Further linear regression and Bland-Altman analyses suggested good agreement (r² = 0.67-0.88) between the software toolkit and BIA in patients who underwent concurrent measurements (n = 39).
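
For reference, the agreement metrics reported above can be computed as in the following generic sketch (Dice similarity coefficient, Lin's concordance correlation coefficient, and Bland-Altman bias with 95% limits of agreement); none of this is taken from the authors' implementation.

```python
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    return 2.0 * np.logical_and(pred, truth).sum() / denom if denom else 1.0

def concordance_cc(x: np.ndarray, y: np.ndarray) -> float:
    """Lin's concordance correlation coefficient between paired measurements."""
    mx, my = x.mean(), y.mean()
    cov = ((x - mx) * (y - my)).mean()
    return 2 * cov / (x.var() + y.var() + (mx - my) ** 2)

def bland_altman(x: np.ndarray, y: np.ndarray):
    """Mean difference (bias) and 95% limits of agreement."""
    diff = x - y
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return bias, (bias - half_width, bias + half_width)
```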
We describe a software toolkit that allows for accurate analysis of body composition using a combination of DCNN-based and threshold-based segmentations from spectral detector computed tomography.
Copyright © 2020 Elsevier B.V. All rights reserved.
