Open Access
Articulatory-Acoustic Analyses of Mandarin Words in Emotional Context Speech for Smart Campus
Author(s) - Guofeng Ren, Xueying Zhang, Shufei Duan
Publication year - 2018
Publication title - IEEE Access
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.587
H-Index - 127
ISSN - 2169-3536
DOI - 10.1109/access.2018.2865831
Subject(s) - aerospace , bioengineering , communication, networking and broadcast technologies , components, circuits, devices and systems , computing and processing , engineered materials, dielectrics and plasmas , engineering profession , fields, waves and electromagnetics , general topics for engineers , geoscience , nuclear engineering , photonics and electrooptics , power, energy and industry applications , robotics and control systems , signal processing and analysis , transportation
In recent years, with the promotion of the smart campus, social networks have developed rapidly, demanding high accuracy in man-man and man-machine interaction technologies. As a result, physiological information in speech interaction has become an important complement to, or even a replacement for, acoustic-based features. With the aim of assessing the influence of emotions on articulatory-acoustic features in speech production, the current study explored the articulatory mechanism underlying emotional speech production of Mandarin words. We first used the AG501 EMA device to collect articulatory and acoustic data synchronously while subjects spoke specific Mandarin words with different emotions (anger, sadness, happiness, and neutral); articulatory and acoustic features were then extracted from the collected data and analyzed with a one-way ANOVA to assess the significance of emotion on each feature. The results show that the motion of the articulators (tongue and lips) was significantly influenced by emotion; specifically, the motion range of the tongue and lips was larger under anger than under the other emotions, while tongue and lip speed were more sensitive under anger and happiness than under sadness and neutral in emotional words. The results are discussed to reveal the relationship between the acoustic and articulatory features of emotional speech, leading to the conclusion that articulatory motion features (tongue and lips) may be the major features for emotional speech recognition and can therefore be applied to man-machine interaction in future smart campus research.
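The analysis described in the abstract can be illustrated with a minimal sketch: a one-way ANOVA testing whether emotion has a significant effect on a single articulatory feature. The feature values below are invented placeholders for illustration, not the authors' EMA measurements, and the group sizes and units are assumptions.

```python
# Minimal sketch of the paper's statistical test: one-way ANOVA over a
# per-emotion articulatory feature. All values are hypothetical placeholders.
from scipy.stats import f_oneway

# Hypothetical tongue motion-range samples (mm) for each emotion category
anger   = [14.2, 15.1, 13.8, 14.9, 15.4]
happy   = [12.1, 11.8, 12.6, 12.0, 11.5]
sad     = [10.2, 9.8, 10.5, 9.9, 10.1]
neutral = [10.0, 10.3, 9.7, 10.4, 9.9]

# One-way ANOVA: does the group (emotion) explain variance in the feature?
f_stat, p_value = f_oneway(anger, happy, sad, neutral)
print(f"F = {f_stat:.2f}, p = {p_value:.4g}")
if p_value < 0.05:
    print("Emotion has a significant effect on this feature.")
```

In the study, this test would be repeated for each extracted articulatory and acoustic feature (e.g., tongue range, lip range, tongue speed, lip speed) to identify which features differ significantly across emotions.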
