Establishing a Validation Infrastructure for Imaging-Based AI Algorithms Prior to Clinical Implementation.

Abstract

With promising artificial intelligence (AI) algorithms receiving Food and Drug Administration (FDA) clearance, the potential impact of these models on clinical outcomes must be evaluated locally before they are integrated into routine workflows. Robust validation infrastructures are pivotal for inspecting the accuracy and generalizability of these deep learning algorithms to ensure both patient safety and health equity. Protected Health Information (PHI) concerns, intellectual property rights, and the diverse requirements of models impede the development of rigorous external validation infrastructures. Our work offers practical recommendations for addressing the challenges associated with developing efficient, customizable, and cost-effective infrastructures for the external validation of AI models at large medical centers and institutions. We present comprehensive steps for establishing an AI inferencing infrastructure outside clinical systems to examine the local performance of AI algorithms before practice- or system-wide implementation, and we promote an evidence-based approach to adopting AI models that can enhance radiology workflows and improve patient outcomes.

Copyright © 2024. Published by Elsevier Inc.
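The abstract's emphasis on examining local performance before system-wide implementation can be illustrated by scoring a model's outputs against locally adjudicated labels. The sketch below is not drawn from the published work; the simulated cohort, operating threshold, and metric choices are illustrative assumptions only.

```python
# Minimal sketch (hypothetical, not the authors' method): gauging a cleared AI
# model's site-specific performance by comparing its outputs with local ground truth.
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

# Simulated local validation cohort standing in for locally adjudicated labels
# and the AI model's probability outputs.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=500)                              # local ground-truth labels
y_prob = np.clip(y_true * 0.6 + rng.normal(0.3, 0.2, 500), 0, 1)   # simulated model scores

threshold = 0.5                        # illustrative operating point; tune to local prevalence
y_pred = (y_prob >= threshold).astype(int)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
auc = roc_auc_score(y_true, y_prob)

print(f"Local AUC: {auc:.3f}  Sensitivity: {sensitivity:.3f}  Specificity: {specificity:.3f}")
```

Running such checks on a held-out local cohort, and comparing the results with vendor-reported figures, is one way an institution could ground the evidence-based adoption decision the abstract calls for.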
