
Whole Spine Segmentation Using Object Detection and Semantic Segmentation.

Abstract

Virtual and augmented reality have enjoyed increased attention in spine surgery. Preoperative planning, pedicle screw placement, and surgical training are among the most studied use cases. Identifying osseous structures is a key aspect of navigating a 3D virtual reconstruction. To replace the otherwise time-consuming process of labelling vertebrae on each slice individually, we propose a fully automated segmentation pipeline for computed tomography (CT) that can form the basis for further virtual or augmented reality applications and radiomic analysis.

Based on a large public dataset of annotated vertebral CT scans, we first trained a YOLOv8m model to detect each vertebra individually. On the resulting cropped images, a 2D U-Net was developed and externally validated on two different public datasets. 214 CT scans (cervical, thoracic, or lumbar spine) were used for model training, and 40 scans were used for external validation. Vertebra recognition achieved a mAP50 of over 0.84, and the segmentation algorithm attained a mean Dice score of 0.75 ± 0.14 at internal validation, and 0.77 ± 0.12 and 0.82 ± 0.14 at external validation, respectively.

We propose a two-stage approach consisting of single-vertebra labelling by an object detection algorithm followed by semantic segmentation. In our externally validated pilot study, we demonstrate robust performance for our object detection network in identifying individual vertebrae, as well as for our segmentation model in precisely delineating the bony structures.
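The sketch below illustrates what such a two-stage inference could look like, assuming the publicly available ultralytics YOLO API and a generic 2D U-Net from segmentation_models_pytorch; it is not the authors' implementation, and the weight files ("vertebra_yolov8m.pt", "vertebra_unet.pth"), crop size, and normalisation are hypothetical choices for illustration only.

```python
# Hedged sketch of a two-stage pipeline: YOLO detects each vertebra on a CT slice,
# a 2D U-Net segments every cropped patch, and the patch masks are merged back.
import numpy as np
import torch
import torch.nn.functional as F
import segmentation_models_pytorch as smp
from ultralytics import YOLO

# Stage 1: per-vertebra object detection (hypothetical fine-tuned weights).
detector = YOLO("vertebra_yolov8m.pt")
# Stage 2: binary segmentation of the cropped vertebra (architecture is an assumption).
unet = smp.Unet(encoder_name="resnet34", encoder_weights=None, in_channels=1, classes=1)
unet.load_state_dict(torch.load("vertebra_unet.pth", map_location="cpu"))
unet.eval()

def segment_slice(ct_slice: np.ndarray, crop_size: int = 128) -> np.ndarray:
    """Detect each vertebra on a 2D CT slice, segment every crop, and merge the masks."""
    # Rescale intensities to 8-bit and replicate to 3 channels for the detector.
    lo, hi = float(ct_slice.min()), float(ct_slice.max())
    img8 = ((ct_slice - lo) / (hi - lo + 1e-8) * 255).astype(np.uint8)
    detections = detector(np.stack([img8] * 3, axis=-1), verbose=False)[0]

    full_mask = np.zeros(ct_slice.shape, dtype=np.uint8)
    for x1, y1, x2, y2 in detections.boxes.xyxy.cpu().numpy().astype(int):
        # Crop the detected vertebra and resize to a fixed input size for the U-Net.
        crop = torch.from_numpy(img8[y1:y2, x1:x2]).float()[None, None] / 255.0
        crop = F.interpolate(crop, size=(crop_size, crop_size), mode="bilinear", align_corners=False)
        with torch.no_grad():
            prob = torch.sigmoid(unet(crop))
        # Resize the probability map back to the crop size and paste it into the full mask.
        prob = F.interpolate(prob, size=(y2 - y1, x2 - x1), mode="bilinear", align_corners=False)
        full_mask[y1:y2, x1:x2] |= (prob[0, 0].numpy() > 0.5).astype(np.uint8)
    return full_mask

def dice(pred: np.ndarray, target: np.ndarray) -> float:
    """Dice coefficient, the overlap metric reported for internal and external validation."""
    inter = np.logical_and(pred > 0, target > 0).sum()
    return 2.0 * inter / ((pred > 0).sum() + (target > 0).sum() + 1e-8)
```

Processing the detection stage slice by slice and segmenting only the cropped regions keeps the U-Net input small and focused on a single vertebra, which is the motivation for the two-stage design described in the abstract.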
