Open Access
An articulatory synthesizer for perceptual research
Author(s) - Philip E. Rubin, Thomas Baer
Publication year - 1978
Publication title - The Journal of the Acoustical Society of America
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.619
H-Index - 187
eISSN - 1520-8524
pISSN - 0001-4966
DOI - 10.1121/1.2016663
Subject(s) - articulator , vocal tract , computer science , articulation (sociology) , speech recognition , set (abstract data type) , perception , animation , acoustics , computer graphics (images) , physics , neuroscience , medicine , orthodontics , politics , political science , law , biology , programming language
A software articulatory synthesizer, based upon a model developed by P. Mermelstein [J. Acoust. Soc. Am. 53, 1070-1082 (1973)], has been implemented on a laboratory computer. The synthesizer is designed as a tool for studying the linguistically and perceptually significant aspects of articulatory events. A prominent feature of this system is that it easily permits modification of a limited set of key parameters that control the positions of the major articulators: the lips, jaw, tongue body, tongue tip, velum, and hyoid bone. Time-varying control over vocal-tract shape and nasal coupling is achieved by a straightforward procedure that is similar to keyframe animation: critical vocal-tract configurations are specified, along with excitation and timing information. Articulation then proceeds along a directed path between these key frames within the time script specified by the user. Such a procedure permits a sufficiently fine degree of control over articulator positions and movements. The organization of this system and its present and future applications are discussed.
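The keyframe procedure described in the abstract lends itself to a simple sketch. The Python below is a minimal illustration, not the synthesizer's actual implementation: the articulator parameter names, the KeyFrame and interpolate helpers, and the choice of linear interpolation are all assumptions made for illustration. The abstract states only that articulation follows a directed path between key frames within the user's time script; the real system also requires excitation information and a model mapping articulator positions to vocal-tract acoustics, which this sketch omits.

```python
from dataclasses import dataclass
from typing import Dict, List

# Hypothetical parameter names; the Mermelstein (1973) model controls the
# positions of the lips, jaw, tongue body, tongue tip, velum, and hyoid bone.
ARTICULATORS = ["lips", "jaw", "tongue_body", "tongue_tip", "velum", "hyoid"]

@dataclass
class KeyFrame:
    time_ms: float            # position in the utterance's time script
    params: Dict[str, float]  # one value per articulator parameter

def interpolate(frames: List[KeyFrame], t: float) -> Dict[str, float]:
    """Estimate articulator parameters at time t (ms).

    Assumes frames are sorted by time. Linear movement between key
    frames is an assumption of this sketch; the paper says only that
    articulation follows a directed path between key frames.
    """
    if t <= frames[0].time_ms:
        return dict(frames[0].params)
    for prev, nxt in zip(frames, frames[1:]):
        if prev.time_ms <= t <= nxt.time_ms:
            w = (t - prev.time_ms) / (nxt.time_ms - prev.time_ms)
            return {k: (1 - w) * prev.params[k] + w * nxt.params[k]
                    for k in prev.params}
    return dict(frames[-1].params)

if __name__ == "__main__":
    # Two hypothetical key frames 200 ms apart; sample the midpoint.
    a = KeyFrame(0.0, {k: 0.0 for k in ARTICULATORS})
    b = KeyFrame(200.0, {k: 1.0 for k in ARTICULATORS})
    print(interpolate([a, b], 100.0))  # each parameter is ~0.5
```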
