Deep learning visual analysis in laparoscopic surgery: a systematic review and diagnostic test accuracy meta-analysis.

Abstract

In the past decade, deep learning has revolutionized medical image processing, and this technique may also advance laparoscopic surgery. The objective of this study was to evaluate whether deep learning networks can accurately analyze videos of laparoscopic procedures.
The Medline, Embase, IEEE Xplore, and Web of Science databases were searched from January 2012 to May 5, 2020. Selected studies tested a deep learning model, specifically a convolutional neural network, for video analysis of laparoscopic surgery. Study characteristics, including the dataset source, type of operation, number of videos, and prediction application, were compared. A random-effects model was used to estimate the pooled sensitivity and specificity of the computer algorithms. Summary receiver operating characteristic curves were calculated using the bivariate model of Reitsma.
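The bivariate Reitsma model pools sensitivity and specificity jointly, accounting for their correlation, and is typically fitted with dedicated meta-analysis software (for example, the R package mada). As a rough illustration of the random-effects idea only, the sketch below pools logit-transformed sensitivities from hypothetical per-study counts using the DerSimonian-Laird estimator; the counts and the univariate simplification are assumptions, not taken from the review.

```python
# Simplified univariate sketch of random-effects pooling of sensitivity.
# The review used the bivariate Reitsma model; this DerSimonian-Laird
# version with hypothetical counts only conveys the general idea.
import numpy as np

# Hypothetical per-study counts: true positives and false negatives.
tp = np.array([45, 120, 30, 80], dtype=float)
fn = np.array([5, 10, 4, 6], dtype=float)

# Continuity correction, logit transform, and within-study variance.
sens = (tp + 0.5) / (tp + fn + 1.0)
y = np.log(sens / (1.0 - sens))             # logit sensitivity per study
v = 1.0 / (tp + 0.5) + 1.0 / (fn + 0.5)     # approximate logit variance

# DerSimonian-Laird estimate of between-study variance tau^2.
w = 1.0 / v
y_fixed = np.sum(w * y) / np.sum(w)
q = np.sum(w * (y - y_fixed) ** 2)
tau2 = max(0.0, (q - (len(y) - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))

# Random-effects pooled logit sensitivity, transformed back to a proportion.
w_star = 1.0 / (v + tau2)
pooled_logit = np.sum(w_star * y) / np.sum(w_star)
pooled_sens = 1.0 / (1.0 + np.exp(-pooled_logit))
print(f"pooled sensitivity (approx.): {pooled_sens:.3f}")
```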
Thirty-two of the 508 studies identified met the inclusion criteria. Applications included instrument recognition and detection (45%), phase recognition (20%), anatomy recognition and detection (15%), action recognition (13%), surgery time prediction (5%), and gauze recognition (3%). The most commonly tested procedures were cholecystectomy (51%) and gynecological procedures, mainly hysterectomy and myomectomy (26%). A total of 3004 videos were analyzed. Publications in clinical journals increased in 2020 relative to bio-computational ones. Four studies provided enough data to construct 8 contingency tables, enabling calculation of test accuracy with a pooled sensitivity of 0.93 (95% CI 0.85-0.97) and specificity of 0.96 (95% CI 0.84-0.99). However, the majority of papers had a high risk of bias.
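Each contingency table contributes one sensitivity/specificity pair before pooling. The sketch below shows how a single 2x2 table yields those two quantities; the counts are hypothetical, not figures from the review.

```python
# Minimal sketch: deriving sensitivity and specificity from one 2x2
# contingency table (hypothetical counts for an instrument-detection task).
def sensitivity_specificity(tp: int, fp: int, fn: int, tn: int) -> tuple[float, float]:
    """Return (sensitivity, specificity) for a 2x2 contingency table."""
    sensitivity = tp / (tp + fn)   # true positive rate
    specificity = tn / (tn + fp)   # true negative rate
    return sensitivity, specificity

sens, spec = sensitivity_specificity(tp=93, fp=4, fn=7, tn=96)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")
```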
Deep learning research holds promise for laparoscopic surgery but is currently limited by methodological shortcomings. Clinicians can help advance AI in surgery, specifically by providing standardized visual databases and reporting.
