
A Deep Learning Tool for Automated Landmark Annotation on Hip and Pelvis Radiographs.


Abstract

Automatic methods for labeling and segmenting pelvic structures can improve the efficiency of clinical and research workflows and reduce the variability introduced by manual labeling. The purpose of this study was to develop a single deep learning model to annotate certain anatomical structures and landmarks on anteroposterior (AP) pelvis radiographs.

A total of 1,100 AP pelvis radiographs were manually annotated by three reviewers. These included a mix of preoperative and postoperative images, as well as a mix of AP pelvis and hip views. A convolutional neural network was trained to segment twenty-two structures (7 points, 6 lines, and 9 shapes). The Dice score, which measures overlap between model output and ground truth, was calculated for the shape and line structures, and Euclidean distance (in millimeters) was calculated for the point structures.

The Dice score averaged across all images in the test set was 0.88 for the shape structures and 0.80 for the line structures. For the seven point structures, the average distance between manual and automated annotations ranged from 1.9 to 5.6 mm, with all averages falling below 3.1 mm except for the landmark at the center of the sacrococcygeal junction, where performance was low for both human- and machine-produced labels. Blinded qualitative evaluation of human- and machine-produced segmentations did not reveal a marked decrease in quality for the automated method.

We present a deep learning model for automated annotation of pelvis radiographs that flexibly handles a variety of views, contrast settings, and operative statuses for twenty-two structures and landmarks.
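For readers unfamiliar with the two evaluation metrics, below is a minimal Python sketch of how Dice overlap between binary masks and landmark distance error in millimeters are commonly computed. The function names, toy arrays, and the pixel-spacing value are illustrative assumptions, not details taken from the paper.

import numpy as np

def dice_score(pred_mask: np.ndarray, gt_mask: np.ndarray) -> float:
    """Dice coefficient between two binary masks: 2*|A intersect B| / (|A| + |B|)."""
    pred = pred_mask.astype(bool)
    gt = gt_mask.astype(bool)
    denom = pred.sum() + gt.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return float(2.0 * np.logical_and(pred, gt).sum() / denom)

def landmark_error_mm(pred_xy: np.ndarray, gt_xy: np.ndarray,
                      pixel_spacing_mm: float = 0.14) -> float:
    """Euclidean distance (mm) between predicted and ground-truth landmarks.

    pred_xy, gt_xy: (x, y) pixel coordinates; pixel spacing is assumed isotropic.
    """
    return float(np.linalg.norm(pred_xy - gt_xy) * pixel_spacing_mm)

# Illustrative usage with toy data (not from the study)
pred = np.zeros((64, 64), dtype=bool); pred[20:40, 20:40] = True
gt = np.zeros((64, 64), dtype=bool); gt[22:42, 22:42] = True
print(f"Dice: {dice_score(pred, gt):.3f}")
print(f"Landmark error: {landmark_error_mm(np.array([30.0, 31.0]), np.array([32.0, 28.0])):.2f} mm")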
