Open Access
Wearables for Respiratory Sound Classification
Author(s) - R. Shivapathy, Steny Saji, Nishi Shahnaj Haider
Publication year - 2021
Publication title - Journal of Physics: Conference Series
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.21
H-Index - 85
eISSN - 1742-6596
pISSN - 1742-6588
DOI - 10.1088/1742-6596/1937/1/012055
Subject(s) - auscultation, respiratory sounds, computer science, artificial intelligence, Hilbert–Huang transform, speech recognition, noise (video), wheeze, pattern recognition (psychology), sound (geography), wearable computer, machine learning, respiratory system, medicine, computer vision, radiology, asthma, filter (signal processing), geomorphology, image (mathematics), embedded system, geology
Respiratory disorders are among the leading causes of death worldwide. Auscultation is one of the most popular methods for early diagnosis and prevention, but it is prone to human error, which motivates the development of automated diagnosis methods. This article investigates the classification of normal and adventitious respiratory sounds using a deep CNN-RNN model. The classification models and strategies classify breathing-sound anomalies such as wheezes and crackles for automated diagnosis of respiratory sounds. The acquired data are denoised with Ensemble Empirical Mode Decomposition (EEMD), a noise-assisted version of the EMD algorithm. Features of the respiratory sounds are then extracted and used to train the CNN-RNN model for classification. The proposed classification model achieves an accuracy of 0.98, a sensitivity of 0.96, and a specificity of 1 for the four-class prediction.
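The denoising step above relies on EEMD's key idea: add independent white-noise realizations to the signal before each decomposition trial, then average across trials so the injected noise cancels while the signal survives. The sketch below illustrates only this noise-assisted ensemble-averaging principle, not the paper's full pipeline; the `decompose` hook is a hypothetical stand-in for a real EMD sifting routine, and the "respiratory" waveform is a synthetic toy signal.

```python
import math
import random

def eemd_ensemble_average(signal, n_trials=200, noise_std=0.2, decompose=None):
    """Noise-assisted ensemble averaging, the core idea behind EEMD.

    Each trial adds fresh white noise to the signal and decomposes the
    noisy copy; averaging the results over many trials cancels the added
    noise. `decompose` is a placeholder hook where a real EMD sifting
    routine would go (identity by default, so this demo isolates the
    ensemble-averaging effect alone).
    """
    if decompose is None:
        decompose = lambda x: x  # stand-in for actual EMD sifting
    n = len(signal)
    acc = [0.0] * n
    for _ in range(n_trials):
        noisy = [s + random.gauss(0.0, noise_std) for s in signal]
        out = decompose(noisy)
        acc = [a + o for a, o in zip(acc, out)]
    return [a / n_trials for a in acc]

random.seed(0)  # deterministic demo

# Toy slow "breathing cycle" waveform (synthetic, for illustration only)
clean = [math.sin(2 * math.pi * 0.05 * t) for t in range(100)]

denoised = eemd_ensemble_average(clean, n_trials=500, noise_std=0.3)
max_err = max(abs(c - d) for c, d in zip(clean, denoised))
```

With 500 trials the residual noise in the average shrinks by roughly the square root of the trial count (0.3 / sqrt(500) ≈ 0.013 per sample), which is why EEMD can inject fairly strong noise per trial yet still recover the underlying components.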
