Tongue habit discrimination system using acoustical feature for oral habits improvement
Author(s) -
Nakayama Masashi,
Ishimitsu Shunsuke,
Yamashita Kimiko,
Ishii Kaori,
Kasai Kazutaka,
Horihata Satoshi
Publication year - 2018
Publication title -
Electronics and Communications in Japan
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.131
H-Index - 13
eISSN - 1942-9541
pISSN - 1942-9533
DOI - 10.1002/ecj.12079
Subject(s) - tongue , speech recognition , pronunciation , mel frequency cepstrum , cepstrum , loudness , feature (linguistics) , computer science , acoustics , timbre , feature extraction , pattern recognition (psychology) , artificial intelligence , computer vision , medicine , physics , art , musical , philosophy , linguistics , pathology , visual arts
Oral habits such as tongue protrusion in malocclusion cause deterioration of the oral functions necessary for feeding, chewing, swallowing, and vocalization. To realize a noninvasive measurement of these habits, we propose and evaluate an acoustic feature analysis that discriminates tongue habits. Compared with normal speech, tongue‐protruded speech is pronounced between the front teeth; the resulting turbulence emphasizes a wide band of frequency components, as heard in the pronunciation of consonants. In this paper, we confirm these differences using acoustic features such as the zero‐crossing rate, which captures the characteristics of voiced and unvoiced sounds, and the Mel‐Frequency Cepstrum Coefficients (MFCC), a filter‐bank analysis used for front‐end processing in speech recognition. We collect speech samples that focus on differences in the subjects' oral habits and confirm significant differences in the acoustic features measured from them. Finally, tongue habit discrimination using the k‐nearest neighbor algorithm achieved discrimination rates of about 85% to 98% on the databases.
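The two building blocks named in the abstract, a zero-crossing feature that separates noise-like (unvoiced/turbulent) frames from periodic (voiced) frames, and a k-nearest neighbor vote over feature vectors, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the frame length, sampling rate, feature dimensionality (ZCR only, no MFCC), and toy signals are all assumptions made for the example.

```python
import numpy as np

def zero_crossing_rate(frame):
    """Fraction of adjacent sample pairs whose signs differ.

    Noise-like (turbulent) frames cross zero often, so their ZCR is
    high; low-frequency periodic (voiced) frames have a low ZCR.
    """
    signs = np.sign(frame)
    return float(np.mean(signs[:-1] != signs[1:]))

def knn_predict(train_feats, train_labels, query, k=3):
    """Classify `query` by majority vote among its k nearest neighbors
    (Euclidean distance) in the training feature set."""
    dists = np.linalg.norm(train_feats - query, axis=1)
    nearest = np.argsort(dists)[:k]
    values, counts = np.unique(train_labels[nearest], return_counts=True)
    return int(values[np.argmax(counts)])

# Toy data: 25 ms frames at an assumed 16 kHz sampling rate.
rng = np.random.default_rng(0)
t = np.arange(400) / 16000.0
voiced = [np.sin(2 * np.pi * 150 * t) for _ in range(10)]     # periodic, low ZCR
unvoiced = [rng.standard_normal(400) for _ in range(10)]      # noise-like, high ZCR

feats = np.array([[zero_crossing_rate(f)] for f in voiced + unvoiced])
labels = np.array([0] * 10 + [1] * 10)  # 0 = voiced, 1 = unvoiced

# A new noise-like frame should land among the unvoiced neighbors.
query = np.array([zero_crossing_rate(rng.standard_normal(400))])
print(knn_predict(feats, labels, query))  # prints 1 (unvoiced)
```

In the paper the feature vectors would instead carry MFCCs and zero-crossing statistics computed from normal versus tongue-protruded utterances, with the same k-NN vote deciding the habit class.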