Automated processing of social media content for radiologists: applied deep learning to radiological content on Twitter during the COVID-19 pandemic.

Abstract

The purpose of this study was to develop an automated process to analyze multimedia content on Twitter during the COVID-19 outbreak and classify content for radiological significance using deep learning (DL).
Using Twitter search features, all tweets containing keywords from both “radiology” and “COVID-19” were collected for the period January 01, 2020 to April 24, 2020, yielding a dataset of 8354 tweets. Images were classified as (i) images with text, (ii) radiological content (e.g., CT scan snapshots, X-ray images), and (iii) non-medical content such as personal images or memes. We trained a deep learning model using Convolutional Neural Networks (CNN) on a training dataset of 1040 labeled images drawn from all three classes, and then trained a second DL classifier to sort images into categories based on human anatomy. All software used is open source and was adapted for this research. The diagnostic performance of the algorithm was assessed by comparing its results against labels on a test set of 1885 images.
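
The abstract does not specify the network architecture or framework. The following is a minimal sketch of a three-class CNN image classifier of the kind described (text images / radiological content / non-medical content), written in Keras; the directory paths, image size, and layer sizes are assumptions for illustration, not the authors' implementation.

```python
# Hedged sketch: a small 3-class CNN classifier, NOT the authors' code.
# Assumes images are organized as data/train/<class_name>/*.jpg (hypothetical paths).
import tensorflow as tf
from tensorflow.keras import layers, models

IMG_SIZE = (224, 224)  # assumed input resolution
BATCH = 32

# Load labeled images from class-named folders (labels inferred from folder names)
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/train", image_size=IMG_SIZE, batch_size=BATCH)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "data/val", image_size=IMG_SIZE, batch_size=BATCH)

model = models.Sequential([
    layers.Rescaling(1.0 / 255, input_shape=IMG_SIZE + (3,)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(128, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(3, activation="softmax"),  # text / radiological / non-medical
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=10)
```

The same pattern could be reused for the second, anatomy-based classifier by pointing it at folders labeled by body region and adjusting the number of output classes.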
Our analysis shows that among COVID-19-related tweets on radiology, nearly 32% contained textual images, another 24% contained radiological content, and 44% were not of radiological significance. Our results indicate 92% accuracy in classifying images originally labeled as chest X-ray or chest CT, and nearly 99% accuracy in classifying images containing medically relevant text. With a larger training dataset and algorithmic tweaks, the accuracy can be improved further.
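
For context on how such per-class accuracy figures are typically computed on a held-out test set, here is a short sketch using scikit-learn; the label arrays are hypothetical placeholders, not data from the study.

```python
# Hedged sketch: overall and per-class accuracy on a test set.
# y_true / y_pred are placeholder integer labels (0=text, 1=radiological, 2=other).
import numpy as np
from sklearn.metrics import accuracy_score, confusion_matrix

y_true = np.array([0, 1, 2, 1, 0, 2])  # placeholder ground-truth labels
y_pred = np.array([0, 1, 2, 0, 0, 2])  # placeholder model predictions

print("overall accuracy:", accuracy_score(y_true, y_pred))

cm = confusion_matrix(y_true, y_pred)
per_class = cm.diagonal() / cm.sum(axis=1)  # per-class recall ("accuracy" per class)
for cls, acc in enumerate(per_class):
    print(f"class {cls}: {acc:.2%}")
```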
By applying DL to rich textual images and other metadata in tweets, we can process and classify content for radiological significance in real time.
