Monaural cardiopulmonary sound separation via complex-valued deep autoencoder and cyclostationarity.

Abstract

Cardiopulmonary auscultation is becoming smarter with the emergence of electronic stethoscopes. Cardiac and lung sounds often appear mixed in both the time and frequency domains, which degrades auscultation quality and subsequent diagnostic performance. Conventional cardiopulmonary sound separation methods can be challenged by the diversity of cardiac and lung sounds. In this study, the data-driven feature-learning capability of the deep autoencoder and the common quasi-cyclostationarity characteristic are exploited for monaural separation.

Unlike most existing separation methods, which handle only the amplitude of the short-time Fourier transform (STFT) spectrum, a complex-valued U-Net (CUnet) with a deep autoencoder structure is built to fully exploit both amplitude and phase information. The quasi-cyclostationarity of cardiac sounds, a common characteristic of cardiopulmonary signals, is incorporated into the loss function for training.

In experiments separating cardiac and lung sounds for heart valve disorder auscultation, the averaged signal-to-distortion ratio (SDR), signal-to-interference ratio (SIR), and signal-to-artifact ratio (SAR) achieved for cardiac sounds are 7.84 dB, 21.72 dB, and 8.06 dB, respectively. The detection accuracy of aortic stenosis rises from 92.21% to 97.90%. The proposed method improves cardiopulmonary sound separation performance and may raise the detection accuracy for cardiopulmonary diseases.

© 2022 IOP Publishing Ltd.
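The amplitude-versus-phase distinction the abstract draws can be sketched with a complex STFT. This is a minimal NumPy/SciPy illustration, not the paper's implementation: the sampling rate, window length, and the toy "cardiac plus lung" mixture are invented for demonstration.

```python
import numpy as np
from scipy.signal import stft, istft

# Hypothetical mixture: a low-frequency tone standing in for a cardiac sound
# plus broadband noise standing in for a lung sound (parameters are assumptions).
fs = 4000  # sampling rate in Hz
t = np.arange(fs) / fs
mixture = np.sin(2 * np.pi * 50 * t) + 0.3 * np.random.randn(fs)

# Complex STFT: both amplitude and phase are retained.
f, frames, Z = stft(mixture, fs=fs, nperseg=256)

amplitude = np.abs(Z)   # what amplitude-only separation methods keep
phase = np.angle(Z)     # the extra information a complex-valued network exploits

# A complex spectrum can also be presented as two real channels (real, imaginary),
# a common input layout for complex-valued or dual-channel networks.
two_channel = np.stack([Z.real, Z.imag])  # shape: (2, freq_bins, time_frames)

# Sanity check: inverting the unmodified complex STFT recovers the waveform,
# which is only possible because the phase was kept.
_, recon = istft(Z, fs=fs, nperseg=256)
```

Discarding `phase` and inverting from `amplitude` alone would not reconstruct the waveform, which is the motivation the abstract gives for a complex-valued architecture.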
