A Clinically Practical and Interpretable Deep Model for ICU Mortality Prediction with External Validation.

Abstract

Deep learning models are increasingly studied in the field of critical care. However, due to the lack of external validation and interpretability, it is difficult to generalize deep learning models to critical care scenarios, and few works have validated the performance of deep learning models on external datasets. To address this, we propose a clinically practical and interpretable deep model for intensive care unit (ICU) mortality prediction with external validation. We use the newly published Philips eICU dataset to train a recurrent neural network model with a two-level attention mechanism, and use the MIMIC-III dataset as the external validation set to verify the model's performance. The model achieves high accuracy (AUC = 0.855 on the external validation set) and has good interpretability. Based on this model, we develop a system to support clinical decision-making in ICUs.
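The abstract does not spell out how the two-level attention mechanism is applied, so the following is only a minimal sketch of one common formulation (RETAIN-style): a time-level attention that weights each ICU timestep and a feature-level attention that gates individual measurements, combined into a single mortality probability. All parameter names (`Wt`, `Wf`, `w_out`) and shapes are hypothetical, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def two_level_attention_predict(X, Wt, Wf, w_out, b_out):
    """X: (T, F) matrix of T time-ordered ICU timesteps with F features.

    Time-level attention  alpha: (T,)   -- how important each timestep is.
    Feature-level gates   beta:  (T, F) -- how important each feature is
                                           at each timestep.
    Returns (mortality probability, alpha, beta); the attention maps are
    what makes the prediction interpretable at the bedside.
    """
    alpha = softmax(X @ Wt)                 # (T,), sums to 1 over time
    beta = np.tanh(X @ Wf)                  # (T, F), per-feature gates in (-1, 1)
    context = (alpha[:, None] * beta * X).sum(axis=0)  # (F,) patient summary
    p = 1.0 / (1.0 + np.exp(-(context @ w_out + b_out)))  # sigmoid readout
    return p, alpha, beta

# Toy example: 8 timesteps, 5 vital-sign features, random parameters
T, F = 8, 5
X = rng.normal(size=(T, F))
Wt = rng.normal(size=F)        # time-attention parameters (hypothetical)
Wf = rng.normal(size=(F, F))   # feature-attention parameters (hypothetical)
w_out = rng.normal(size=F)
p, alpha, beta = two_level_attention_predict(X, Wt, Wf, w_out, 0.0)
```

In a real model the attention parameters would be produced by the recurrent network's hidden states rather than fixed matrices; this sketch only shows how the two attention levels compose into one interpretable prediction.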
©2020 AMIA – All rights reserved.
