Suppression of clipping noise in observed speech based on spectral compensation with Gaussian mixture models and reference of clean speech
Author(s) -
Makoto Hayakawa,
Takahiro Fukumori,
Masato Nakayama,
Takanobu Nishiura
Publication year - 2013
Publication title -
Proceedings of Meetings on Acoustics
Language(s) - English
Resource type - Conference proceedings
ISSN - 1939-800X
DOI - 10.1121/1.4800260
Subject(s) - clipping (morphology) , computer science , speech recognition , noise (video) , gaussian noise , noise measurement , speech enhancement , spectral envelope , linear predictive coding , speech coding , acoustics , background noise , noise reduction , artificial intelligence , telecommunications , physics , philosophy , linguistics , image (mathematics)
In recent years, the development of communication systems has allowed people to easily record and distribute their speech. In speech recording, however, clipping noise degrades sound quality when the level of the input signal exceeds the maximum range of the amplifier. In this case, it is necessary to suppress the clipping noise in the observed speech to improve its sound quality. Although a linear prediction method has conventionally been proposed for suppressing clipping noise, its restoration performance degrades through error accumulation when the speech contains a large amount of clipping noise. This paper describes a method for suppressing clipping noise in observed speech based on spectral compensation. In this method, the power spectral envelope of each frame in the lower frequency band is compensated by using GMMs (Gaussian mixture models), and that in the higher frequency band is restored by referring to clean speech. We carried out evaluation experiments...
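As an illustrative sketch only (not the paper's compensation method), the clipping degradation the abstract describes can be modeled and detected in a few lines of NumPy; the `hard_clip` and `detect_clipped` helpers and all signal parameters below are assumptions for demonstration:

```python
import numpy as np

def hard_clip(signal, limit):
    """Model amplifier clipping: samples beyond +/-limit are flattened to the limit."""
    return np.clip(signal, -limit, limit)

def detect_clipped(signal, limit, tol=1e-6):
    """Boolean mask of samples sitting at (or numerically at) the clipping level."""
    return np.abs(signal) >= limit - tol

# Example: a 440 Hz sine whose peak amplitude (1.5) exceeds the amplifier range (1.0)
t = np.linspace(0, 1, 16000, endpoint=False)
x = 1.5 * np.sin(2 * np.pi * 440 * t)  # "clean" input exceeding the range
y = hard_clip(x, 1.0)                  # observed (clipped) signal
mask = detect_clipped(y, 1.0)
clipping_rate = mask.mean()            # fraction of clipped samples (~54% here)
print(f"clipped samples: {clipping_rate:.1%}")
```

A restoration method such as the one described above would then estimate replacement values for the samples flagged by `mask`, rather than leaving the waveform flattened at the clipping level.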