Deep learning based spectral extrapolation for dual-source, dual-energy x-ray computed tomography.

Abstract

Data completion is commonly employed in dual-source, dual-energy computed tomography (CT) when physical or hardware constraints limit the field of view (FoV) covered by one of two imaging chains. Practically, dual-energy data completion is accomplished by estimating missing projection data based on the imaging chain with the full FoV and then by appropriately truncating the analytical reconstruction of the data with the smaller FoV. While this approach works well in many clinical applications, there are applications which would benefit from spectral contrast estimates over the larger FoV (spectral extrapolation), e.g., model-based iterative reconstruction, contrast-enhanced abdominal imaging of large patients, interior tomography, and combined temporal and spectral imaging.
To document the fidelity of spectral extrapolation and to prototype a deep learning algorithm to perform it, we assembled a data set of 50 dual-source, dual-energy abdominal x-ray CT scans (acquired at Duke University Medical Center with 5 Siemens Flash scanners; chain A: 50 cm FoV, 100 kV; chain B: 33 cm FoV, 140 kV + Sn; helical pitch: 0.8). Data sets were reconstructed using ReconCT (v14.1, Siemens Healthineers): 768×768 pixels per slice, 50 cm FoV, 0.75 mm slice thickness, “Dual-Energy – WFBP” reconstruction mode with dual-source data completion. A hybrid architecture consisting of a learned piecewise-linear transfer function (PLTF) and a convolutional neural network (CNN) was trained using 40 scans (5 scans reserved for validation, 5 for testing). The PLTF learned to map chain A spectral contrast to chain B spectral contrast voxel-wise, performing an image-domain analog of dual-source data completion with approximate spectral reweighting. The CNN, with its U-net structure, then learned to improve the accuracy of chain B contrast estimates by copying chain A structural information, by encoding prior relationships between chain A and chain B contrast, and by generalizing feature-contrast associations. Training was supervised, using data from within the 33 cm chain B FoV to optimize and assess network performance.
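The first stage of the architecture described above, the voxel-wise PLTF, can be sketched as a one-dimensional piecewise-linear map from chain A HU values to estimated chain B HU values. The sketch below is a minimal illustration, not the trained model: the knot locations and outputs (`knots_in`, `knots_out`) are hypothetical placeholders standing in for values that would be learned during training, and the CNN refinement stage is omitted.

```python
import numpy as np

def pltf(image_a, knots_in, knots_out):
    """Piecewise-linear transfer function: voxel-wise map of chain A
    (100 kV) HU values to estimated chain B (140 kV + Sn) HU values.
    In the trained model the knot outputs are learned parameters;
    here they are fixed, hypothetical values for illustration."""
    return np.interp(image_a, knots_in, knots_out)

# Hypothetical knots (HU): air and water map to themselves, while
# contrast-enhanced voxels attenuate less at the higher energy.
knots_in  = np.array([-1000.0, 0.0, 100.0, 400.0, 3000.0])
knots_out = np.array([-1000.0, 0.0,  60.0, 220.0, 2800.0])

# A toy 2x2 "slice" of chain A HU values.
image_a = np.array([[-1000.0, 0.0],
                    [  100.0, 400.0]])
image_b_est = pltf(image_a, knots_in, knots_out)
```

A second, learned stage (the U-net CNN in the paper) would then correct the residual error of this voxel-wise estimate using spatial context, which a pointwise transfer function cannot exploit.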
Extrapolation performance on the testing data confirmed our network’s robustness and ability to generalize to unseen data from different patients, yielding maximum extrapolation errors of 26 HU following the PLTF and 7.5 HU following the CNN (averaged per target organ). Degradation of network performance when applied to a geometrically simple phantom confirmed our method’s reliance on feature-contrast relationships in correctly inferring spectral contrast. Integrating our image-domain spectral extrapolation network into a standard dual-source, dual-energy processing pipeline for Siemens Flash scanner data yielded spectral CT data with adequate fidelity for the generation of both 50 keV monochromatic images and material decomposition images over a 30 cm FoV for chain B when only 20 cm of chain B data was available for spectral extrapolation.
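The downstream products mentioned above (monochromatic images and material decomposition images) both follow from a voxel-wise two-basis decomposition of the dual-energy data. As a hedged illustration of that step only, the sketch below solves the standard 2×2 basis-material system per voxel and synthesizes a monochromatic value; the attenuation coefficients in `MU` and `MU_50KEV` are hypothetical placeholders, since real values come from scanner spectra and calibration, and the function names are ours, not the paper's.

```python
import numpy as np

# Hypothetical effective linear attenuation coefficients (1/cm) of two
# basis materials [water, iodine] in the low- and high-energy channels.
MU = np.array([[0.227, 4.9],    # 100 kV channel
               [0.208, 2.6]])   # 140 kV + Sn channel
# Hypothetical basis coefficients at the 50 keV target energy.
MU_50KEV = np.array([0.227, 6.9])

def decompose(mu_low, mu_high):
    """Solve the 2x2 basis-material system for one voxel, returning
    the water and iodine basis coefficients."""
    rhs = np.stack([mu_low, mu_high])
    return np.linalg.solve(MU, rhs)

def monochromatic_50kev(mu_low, mu_high):
    """Synthesize a 50 keV monochromatic attenuation value from the
    per-voxel basis coefficients."""
    return MU_50KEV @ decompose(mu_low, mu_high)
```

For a pure-water voxel, the decomposition recovers coefficients (1, 0) and the monochromatic value reduces to the water coefficient at 50 keV, which is the consistency check any implementation of this step should pass.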
Even with a moderate amount of training data, deep learning methods are capable of robustly inferring spectral contrast from feature-contrast relationships in spectral CT data, leading to spectral extrapolation performance well beyond what may be expected at face value. Future work reconciling spectral extrapolation results with original projection data is expected to further improve results in outlying and pathological cases.
This article is protected by copyright. All rights reserved.
