Transparency in Artificial Intelligence Research: A Systematic Review of Availability Items Related to Open Science in Radiology and Nuclear Medicine.

Abstract

Reproducibility of artificial intelligence (AI) research has become a growing concern. One of the fundamental reasons is the lack of transparency in data, code, and models. In this work, we aimed to systematically review radiology and nuclear medicine papers on AI in terms of transparency and open science.

A systematic literature search was performed in PubMed to identify original research studies on AI. The search was restricted to studies published in Q1 and Q2 journals that are also indexed in the Web of Science, and a random sample of the literature was drawn. In addition to six baseline study characteristics, five availability items were evaluated. Two groups of independent readers, eight readers in total, participated in the study. Inter-rater agreement was analyzed, and disagreements were resolved by consensus.

Following the eligibility criteria, we included a final set of 194 papers. Raw data were available in about one-fifth of the papers (34/194; 18%); however, the authors made their private data available in only one paper (1/161; 1%). Roughly one-tenth of the papers made their pre-modeling (25/194; 13%), modeling (28/194; 14%), or post-modeling files (15/194; 8%) available. Most of the papers (189/194; 97%) did not attempt to create a ready-to-use system for real-world usage. Data origin, use of deep learning, and external validation showed statistically significantly different distributions. The use of private data alone was negatively associated with the availability of at least one item (p < 0.001).

Overall availability rates for the items were poor, leaving room for substantial improvement.

Copyright © 2022 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.
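As a rough illustration of the kind of association test that could produce the reported p < 0.001, the Python sketch below runs a Fisher's exact test on a 2x2 table of private-data-only use versus availability of at least one item. The per-cell counts are hypothetical placeholders (only the total of 194 included papers comes from the abstract), and the choice of test is an assumption rather than the authors' documented method.

# Hedged sketch, not the authors' analysis: association between exclusive use of
# private data and availability of at least one open-science item.
# The 2x2 counts below are hypothetical placeholders that merely sum to the 194
# included papers; the actual per-cell counts are not reported in the abstract.
from scipy.stats import fisher_exact

contingency = [
    # >=1 item available,  no item available
    [5, 115],    # private data only (hypothetical counts)
    [30, 44],    # public or mixed data (hypothetical counts)
]

odds_ratio, p_value = fisher_exact(contingency, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3g}")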
