
Built to last? Reproducibility and Reusability of Deep Learning Algorithms in Computational Pathology.

Abstract

Recent progress in computational pathology has been driven by deep learning. While code and data availability are essential to reproduce findings from preceding publications, ensuring a deep learning model's reusability is more challenging. To that end, the codebase should be well-documented and easy to integrate into existing workflows, and models should be robust to noise and generalizable to data from different sources. Strikingly, only a few computational pathology algorithms have been reused by other researchers so far, let alone employed in a clinical setting. To assess the current state of reproducibility and reusability of computational pathology algorithms, we evaluated peer-reviewed articles available in PubMed, published between January 2019 and March 2021, in five use cases: stain normalization, tissue type segmentation, evaluation of cell-level features, genetic alteration prediction, and inference of grading, staging, and prognostic information. We compiled criteria for data availability, code availability, and statistical result analysis and assessed them in 160 publications. We found that only one quarter (41 out of 160 publications) made code publicly available. Among these 41 papers, three quarters (30 out of 41) analyzed their results statistically, half of them (20 out of 41) released their trained model weights, and about a third (16 out of 41) used an independent cohort for evaluation. Our review is intended for both pathologists interested in deep learning and researchers applying algorithms to computational pathology challenges. We provide a detailed overview of publications with published code in the field, list reusable data handling tools, and provide criteria for reproducibility and reusability.

Copyright © 2023. Published by Elsevier Inc.
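As an illustration of the criteria named in the abstract (public code, released model weights, statistical analysis of results, evaluation on an independent cohort), the assessment could be recorded per publication roughly as follows. This is a minimal sketch, not the authors' actual assessment protocol; the class and field names are hypothetical.

```python
from dataclasses import dataclass, fields

@dataclass
class ReproducibilityChecklist:
    """Criteria assessed per publication (names are illustrative)."""
    code_public: bool            # is the codebase publicly available?
    weights_released: bool       # are trained model weights published?
    statistical_analysis: bool   # were the results analyzed statistically?
    independent_cohort: bool     # was an independent cohort used for evaluation?

    def score(self) -> int:
        """Number of criteria satisfied (0-4)."""
        return sum(getattr(self, f.name) for f in fields(self))

# Example: a paper that shares code and reports statistics,
# but releases no weights and uses no external cohort.
paper = ReproducibilityChecklist(
    code_public=True,
    weights_released=False,
    statistical_analysis=True,
    independent_cohort=False,
)
print(paper.score())  # -> 2
```

Tabulating such a checklist across the reviewed publications would yield the proportions reported in the abstract (e.g., 41 of 160 with public code).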
