
Voiced/unvoiced speech classification‐based adaptive filtering of decomposed empirical modes for speech enhancement
Author(s) - Khaldi Kais, Boudraa AbdelOuahab, Turki Monia
Publication year - 2016
Publication title - IET Signal Processing
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.384
H-Index - 42
eISSN - 1751-9683
pISSN - 1751-9675
DOI - 10.1049/iet-spr.2013.0425
Subject(s) - hilbert–huang transform , speech recognition , speech enhancement , computer science , filter (signal processing) , frame (networking) , speech processing , noise (video) , signal (programming language) , wavelet , energy (signal processing) , white noise , noise reduction , artificial intelligence , pattern recognition (psychology) , mathematics , statistics , computer vision , telecommunications , image (mathematics) , programming language
This study presents a speech filtering method exploiting the combined effects of the empirical mode decomposition (EMD) and the local statistics of the speech signal, using the adaptive centre weighted average (ACWA) filter. The novelty lies in incorporating the frame class (voiced/unvoiced) into the conventional EMD-ACWA filtering scheme. The speech signal is segmented into frames, and each frame is broken down by the EMD into a finite number of intrinsic mode functions (IMFs). The number of filtered IMFs depends on whether the frame is voiced or unvoiced. An energy criterion is used to identify voiced frames, while a stationarity index distinguishes between unvoiced and transient sequences. Reported results obtained on signals corrupted by additive noise (white, F16, factory) show that the proposed class-dependent filtering is very effective in removing noise components from the noisy speech signal. Compared with the wavelet, ACWA, and EMD-ACWA methods, the proposed technique gives much better results in terms of average segmental signal-to-noise ratio and listening quality, based on the perceptual evaluation of speech quality (PESQ) score.
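The following is a minimal Python sketch of the pipeline described in the abstract: frame segmentation, EMD decomposition, an energy-based voiced/unvoiced decision, and local-statistics filtering of a class-dependent number of IMFs. It assumes the PyEMD package for the decomposition; the ACWA filter is rendered as a generic local-statistics (Wiener-type) shrinkage toward the local mean, and the energy threshold, noise variance, and number of filtered IMFs are illustrative placeholders rather than the paper's exact values (the stationarity index for transient detection is omitted).

```python
import numpy as np
from PyEMD import EMD  # assumed dependency for empirical mode decomposition


def acwa_filter(x, win=32, noise_var=1e-3):
    """Local-statistics (ACWA-style) filter: shrink each sample toward the
    local mean by a gain derived from local vs. noise variance."""
    y = np.empty_like(x)
    half = win // 2
    for n in range(len(x)):
        seg = x[max(0, n - half): n + half + 1]
        mu, var = seg.mean(), seg.var()
        gain = max(var - noise_var, 0.0) / (var + 1e-12)
        y[n] = mu + gain * (x[n] - mu)
    return y


def is_voiced(frame, energy_thresh=0.01):
    """Energy-based voiced/unvoiced decision (threshold is illustrative)."""
    return np.mean(frame ** 2) > energy_thresh


def enhance_frame(frame, noise_var=1e-3):
    """Decompose a frame with EMD and ACWA-filter a class-dependent number of
    IMFs: fewer for voiced frames, all of them for noise-like unvoiced frames."""
    imfs = EMD()(frame)                                   # IMF 0 = highest frequency
    n_filtered = 3 if is_voiced(frame) else len(imfs)     # placeholder counts
    cleaned = [acwa_filter(imf, noise_var=noise_var) if k < n_filtered else imf
               for k, imf in enumerate(imfs)]
    return np.sum(cleaned, axis=0)                        # reconstruct the frame


def enhance(signal, frame_len=512, noise_var=1e-3):
    """Frame-by-frame enhancement; frames are processed independently and
    concatenated (no overlap-add, for brevity)."""
    out = np.array(signal, dtype=float)
    for start in range(0, len(signal) - frame_len + 1, frame_len):
        frame = out[start:start + frame_len]
        out[start:start + frame_len] = enhance_frame(frame, noise_var)
    return out
```

In this sketch the lower-index IMFs (highest-frequency oscillations, where additive noise concentrates) are the ones passed through the filter, while the remaining IMFs are kept intact for voiced frames; how many IMFs to filter per class is exactly the design choice the paper ties to the voiced/unvoiced decision.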