
Multi-input adaptive neural network for automatic detection of cervical vertebral landmarks on X-rays.

Abstract

Cervical vertebral landmark detection is an important preliminary task for measuring relative vertebral motion parameters, which helps clinicians diagnose cervical spine diseases. Accurate landmark detection yields reliable motion parameter measurements, but the varied poses and imaging angles of cervical spines in X-rays make the task challenging. At the same time, vertebral bones show similar appearances across different cervical spine X-rays. To fully exploit these shared features, we propose a multi-input adaptive U-Net (MultiIA-UNet) that focuses on the similar local features between different cervical spine X-rays to perform cervical vertebral landmark detection accurately and efficiently. MultiIA-UNet uses an improved U-Net structure as its backbone, combined with a novel adaptive convolution module to better extract varying global features. During training, MultiIA-UNet applies a multi-input strategy that extracts features from random pairs of training images simultaneously and then learns their similar local features through a subspace alignment module. We collected a dataset of 688 cervical spine X-rays to evaluate MultiIA-UNet. The results show that our method achieves state-of-the-art performance, with a minimum average point-to-point error of 12.988 pixels. We further evaluated the effect of these landmark detection results on cervical motion angle measurement and found that our method obtains more accurate cervical spine motion angle parameters, with a minimum symmetric mean absolute percentage error of 26.969%. MultiIA-UNet can thus serve as an efficient and accurate landmark detection method for cervical vertebral motion analysis.
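
For context, the two figures reported above correspond to standard metrics: the average point-to-point error (mean Euclidean distance, in pixels, between predicted and ground-truth landmarks) and the symmetric mean absolute percentage error (SMAPE) of the derived motion angles. The sketch below shows these definitions in NumPy under their usual conventions; the paper may use a slightly different SMAPE normalization, and the data here are toy values for illustration, not the authors' results.

```python
import numpy as np

def mean_point_to_point_error(pred, true):
    """Mean Euclidean distance (pixels) between predicted and
    ground-truth landmarks; pred/true have shape (N, 2)."""
    return np.linalg.norm(pred - true, axis=1).mean()

def smape(pred, true):
    """Symmetric mean absolute percentage error, in percent,
    using the common (|p|+|t|)/2 denominator convention."""
    return 100.0 * np.mean(
        np.abs(pred - true) / ((np.abs(pred) + np.abs(true)) / 2.0)
    )

# Toy example: five landmarks, predictions a few pixels off.
true_pts = np.array([[120.0, 85.0], [140.0, 110.0], [160.0, 138.0],
                     [182.0, 165.0], [205.0, 192.0]])
pred_pts = true_pts + np.array([[8.0, -9.0], [-7.0, 10.0], [9.0, 8.0],
                                [-10.0, -7.0], [6.0, 11.0]])
print(f"point-to-point error: "
      f"{mean_point_to_point_error(pred_pts, true_pts):.3f} px")

# Toy motion angles (degrees) derived from landmark positions.
true_angles = np.array([12.5, 8.3, 15.1, 10.7])
pred_angles = np.array([14.0, 6.9, 17.8, 9.5])
print(f"SMAPE: {smape(pred_angles, true_angles):.3f} %")
```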
