
Automatic Segmentation for Analysis of Murine Cardiac Ultrasound and Photoacoustic Image Data Using Deep Learning.

Abstract

Although there are methods to identify regions of interest (ROIs) from echocardiographic images of myocardial tissue, they are often time-consuming and difficult to create when image quality is poor. Further, while myocardial strain can be estimated from ultrasound (US) images, US alone cannot provide functional information such as oxygen saturation (sO2). Photoacoustic (PA) imaging, however, can be used to quantify sO2 levels within tissue non-invasively.

Here, we leverage deep learning methods to improve segmentation of the anterior wall of the left ventricle and apply both strain and oxygen saturation analysis via segmentation of murine US and PA images.

Data revealed that training a U-Net deep neural network on US/PA images can produce reproducible ROIs of the anterior wall of the left ventricle in a murine image dataset. Accuracy and Dice score metrics were used to evaluate the performance of the neural network on each image type. We report an accuracy of 97.3% and a Dice score of 0.84 for ultrasound, 95.6% and 0.73 for photoacoustic, and 96.5% and 0.81 for combined ultrasound and photoacoustic images. Rapid segmentation via such methods can assist in the quantification of strain and oxygenation.

Copyright © 2024 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.
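The abstract does not include implementation details, but the evaluation metrics it reports are standard for segmentation work. As a rough illustration only, the sketch below shows one common way to compute pixel accuracy and the Dice score for a predicted binary ROI mask against a manually drawn reference mask; the NumPy-based functions, array shapes, and toy masks are assumptions for demonstration, not code or data from the study.

```python
import numpy as np

def pixel_accuracy(pred: np.ndarray, target: np.ndarray) -> float:
    """Fraction of pixels where the predicted mask agrees with the reference ROI."""
    return float((pred.astype(bool) == target.astype(bool)).mean())

def dice_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice coefficient, 2*|A intersect B| / (|A| + |B|), between two binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return float((2.0 * intersection + eps) / (pred.sum() + target.sum() + eps))

# Toy 4x4 masks (1 = anterior-wall pixel); values are hypothetical.
reference = np.array([[0, 0, 0, 0],
                      [0, 1, 1, 0],
                      [0, 1, 1, 0],
                      [0, 0, 0, 0]])
predicted = np.array([[0, 0, 0, 0],
                      [0, 1, 1, 1],
                      [0, 1, 1, 0],
                      [0, 0, 0, 0]])

print(f"Accuracy: {pixel_accuracy(predicted, reference):.3f}")  # 0.938
print(f"Dice:     {dice_score(predicted, reference):.3f}")      # 0.889
```

In practice these metrics would be computed per image over the network's thresholded output and the expert-drawn ROI, then averaged across the test set, which is consistent with how the per-modality accuracy and Dice figures above are reported.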
