
Classifying Characteristics of Opioid Use Disorder From Hospital Discharge Summaries Using Natural Language Processing.

Abstract

Opioid use disorder (OUD) is underdiagnosed in health system settings, limiting research on OUD using electronic health records (EHRs). Medical encounter notes can enrich structured EHR data with documented signs and symptoms of OUD and social risks and behaviors. To capture this information at scale, natural language processing (NLP) tools must be developed and evaluated. We developed and applied an annotation schema to deeply characterize OUD and related clinical, behavioral, and environmental factors, and automated the annotation schema using machine learning and deep learning-based approaches.

Using the MIMIC-III Critical Care Database, we queried hospital discharge summaries of patients with International Classification of Diseases, Ninth Revision (ICD-9) OUD diagnostic codes. We developed an annotation schema to characterize problematic opioid use, identify individuals with potential OUD, and provide psychosocial context. Two annotators reviewed discharge summaries from 100 patients. We randomly sampled patients with their associated annotated sentences and divided them into training (66 patients; 2,127 annotated sentences) and testing (29 patients; 1,149 annotated sentences) sets. We used the training set to generate features, employing three NLP algorithms/knowledge sources. We trained and tested prediction models for classification with a traditional machine learner (logistic regression) and a deep learning approach (AutoGluon, based on ELECTRA's replaced token detection model). We applied a five-fold cross-validation approach to reduce bias in performance estimates.

The resulting annotation schema contained 32 classes. We achieved moderate inter-annotator agreement, with F1-scores across all classes increasing from 48% to 66%. Five classes had a sufficient number of annotations for automation; of these, we observed consistently high performance (F1-scores) across training and testing sets for drug screening (training: 91-96; testing: 91-94) and opioid type (training: 86-96; testing: 86-99). Performance dropped from the training to the testing sets for other drug use (training: 52-65; testing: 40-48), pain management (training: 72-78; testing: 61-78), and psychiatric (training: 73-80; testing: 72). AutoGluon achieved the highest performance.

This pilot study demonstrated that rich information regarding problematic opioid use can be manually identified by annotators. However, more training samples and features would improve our ability to reliably identify less common classes from clinical text, including text from outpatient settings.

Copyright © 2022 Poulsen, Freda, Troiani, Davoudi and Mowery.
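
The traditional-ML arm of the pipeline described in the abstract (annotated sentences as input, feature generation, a logistic regression learner, and five-fold cross-validation scored with F1) can be illustrated with a minimal sketch. This is not the authors' code: TF-IDF features stand in for the paper's three NLP algorithms/knowledge sources, and the toy sentences and binary "drug screening" labels are hypothetical placeholders for the annotated data.

```python
# Minimal sketch of the traditional-ML arm (illustrative only, not the authors' code).
# Assumption: TF-IDF lexical features stand in for the paper's feature-generation step.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline

# Hypothetical annotated sentences: label 1 = mentions drug screening, label 0 = does not.
sentences = [
    "urine drug screen positive for opiates",
    "utox on admission was positive for benzodiazepines",
    "urine toxicology was negative",
    "serum tox screen pending at time of discharge",
    "repeat urine drug screen returned negative",
    "patient ambulating independently at discharge",
    "chest x-ray showed no acute process",
    "follow up with primary care in two weeks",
    "diet advanced as tolerated",
    "blood pressure well controlled on home medications",
]
labels = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]

# Logistic regression over simple lexical features.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)

# Five-fold cross-validation, scored with F1 as in the abstract.
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
f1_scores = cross_val_score(model, sentences, labels, cv=cv, scoring="f1")
print(f"F1 per fold: {f1_scores.round(2)}, mean: {f1_scores.mean():.2f}")
```

In practice the toy lists above would be replaced by the sentence-level training set, with one such binary classifier (or a multi-class setup) per annotation class.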
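
For the deep-learning arm, the abstract names AutoGluon with ELECTRA's replaced token detection model. A rough sketch of that setup is below, assuming a recent AutoGluon release; the MultiModalPredictor API, the hyperparameter key, and the google/electra-base-discriminator checkpoint are assumptions rather than details taken from the paper.

```python
# Rough sketch of an AutoGluon text classifier with an ELECTRA backbone
# (assumed API for a recent AutoGluon release; not the authors' configuration).
import pandas as pd
from autogluon.multimodal import MultiModalPredictor

# Hypothetical sentence-level frames with a binary label column; real training data
# would contain the full annotated sentence sets, not a handful of rows.
train_df = pd.DataFrame({
    "sentence": ["urine drug screen positive for opiates",
                 "repeat urine toxicology was negative",
                 "diet advanced as tolerated",
                 "follow up with primary care in two weeks"],
    "label": [1, 1, 0, 0],
})
test_df = pd.DataFrame({"sentence": ["serum tox screen pending at discharge"],
                        "label": [1]})

predictor = MultiModalPredictor(label="label")
predictor.fit(
    train_data=train_df,
    # Assumed hyperparameter key for selecting an ELECTRA checkpoint.
    hyperparameters={"model.hf_text.checkpoint_name": "google/electra-base-discriminator"},
)
preds = predictor.predict(test_df)  # per-sentence class predictions; F1 can be computed from these
```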
