Open Access
Transfer Learning for Wearable Long-Term Social Speech Evaluations
Author(s) - Yuanpeng Chen, Bin Gao, Long Jiang, Kai Yin, Jun Gu, Wai Lok Woo
Publication year - 2018
Publication title - IEEE Access
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.587
H-Index - 127
ISSN - 2169-3536
DOI - 10.1109/access.2018.2876122
Subject(s) - aerospace , bioengineering , communication, networking and broadcast technologies , components, circuits, devices and systems , computing and processing , engineered materials, dielectrics and plasmas , engineering profession , fields, waves and electromagnetics , general topics for engineers , geoscience , nuclear engineering , photonics and electrooptics , power, energy and industry applications , robotics and control systems , signal processing and analysis , transportation
With increasing stress in work and study environments, mental health has become a major subject in current social interaction research. Researchers generally analyze psychological health states through social perception behavior. Speech signal processing is an important research direction here, as it can objectively assess a person's mental health from social sensing through the extraction and analysis of speech features. In this paper, a four-week long-term social monitoring study using the proposed wearable device has been conducted. A set of well-being questionnaires administered to a group of students is employed to objectively relate physical and mental health to segmented speech-social features in completely natural daily situations. In particular, we have developed transfer learning for acoustic classification. By training the model on the TUT Acoustic Scenes 2017 data set, the model learns basic scene features. Through transfer learning, the model is then transferred to the audio segmentation task using only four wearable speech-social features (energy, entropy, brightness, and formant). The obtained results show promise in classifying various acoustic scenes in unconstrained and natural situations using the wearable long-term speech-social data set.
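The abstract names the four frame-level features (energy, entropy, brightness, formant) but not their formulas. The sketch below shows one plausible way to compute them in NumPy; the exact definitions used in the paper may differ. The 1.5 kHz brightness cutoff, the LPC order, and the pole-selection rule for the formant are all assumptions for illustration, not taken from the paper.

```python
import numpy as np

def frame_features(frame, sr=16000, n_fft=512):
    """Energy, spectral entropy, and brightness of one audio frame
    (illustrative definitions, not the paper's)."""
    energy = float(np.mean(frame ** 2))                # short-time energy
    spec = np.abs(np.fft.rfft(frame, n_fft))           # magnitude spectrum
    p = spec / (spec.sum() + 1e-12)                    # spectrum as a distribution
    entropy = float(-np.sum(p * np.log2(p + 1e-12)))   # spectral entropy
    freqs = np.fft.rfftfreq(n_fft, 1.0 / sr)
    # "Brightness": share of spectral magnitude above an assumed 1.5 kHz cutoff
    brightness = float(spec[freqs > 1500].sum() / (spec.sum() + 1e-12))
    return energy, entropy, brightness

def first_formant(frame, sr=16000, order=10):
    """Dominant resonance frequency via LPC (autocorrelation method)."""
    n = len(frame)
    r = np.correlate(frame, frame, "full")[n - 1 : n + order]  # lags 0..order
    # Yule-Walker equations, lightly regularized for numerical stability
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R + 1e-6 * r[0] * np.eye(order), r[1 : order + 1])
    roots = np.roots(np.concatenate(([1.0], -a)))
    cand = roots[np.imag(roots) > 0]                   # one root per pole pair
    if len(cand) == 0:
        return 0.0
    # Pick the strongest resonance: the pole closest to the unit circle
    best = cand[np.argmax(np.abs(cand))]
    return float(np.angle(best) * sr / (2 * np.pi))
```

In a pipeline like the one described, these four values would be computed per frame of the wearable recording and fed to the transferred classifier in place of the full spectrogram input used during pretraining.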
