Open Access
Visual synthesis of speech
Author(s) -
Yolanda Blanco,
Arantxa Villanueva,
Rafael Cabeza
Publication year - 2009
Publication title -
Anales del Sistema Sanitario de Navarra
Language(s) - Uncategorized
Resource type - Journals
SCImago Journal Rank - 0.175
H-Index - 23
eISSN - 2340-3527
pISSN - 1137-6627
DOI - 10.23938/assn.0730
Subject(s) - computer science , gaze , speech synthesis , human–computer interaction , speech technology , visualization , speech recognition , artificial intelligence
For severely disabled patients, the eyes can become the sole tool of communication. With appropriate technology, eye movements can be interpreted reliably, and when combined with a speech synthesiser this greatly expands the patient's possibilities for communication. A system with these characteristics must include three components: a speech synthesiser, an interface with which the user constructs the text, and a gaze-interpretation method. Together they allow the user to operate the system with the eyes alone. This review surveys the state of the art of these three modules and then introduces the visual speech synthesis system (Síntesis Visual del Habla [SiVHa]) being developed at the Public University of Navarra.
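The abstract describes a three-module pipeline: gaze interpretation feeds a text-construction interface, whose output is passed to a speech synthesiser. The following Python sketch illustrates one plausible way those modules could fit together; all class and method names (GazeInterpreter, TextInterface, SpeechSynthesizer, the dwell-time selection rule) are illustrative assumptions and do not reflect the actual SiVHa design.

```python
# Hypothetical sketch of the three-module architecture described in the abstract:
# gaze interpretation -> text-construction interface -> speech synthesis.
# Names and the dwell-time selection rule are assumptions, not the SiVHa API.

from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class Fixation:
    """A sustained gaze on one screen region, as reported by an eye tracker."""
    key: str          # label of the on-screen key being looked at
    duration_ms: int  # how long the gaze has rested on that key


class GazeInterpreter:
    """Turns fixations into key selections using a dwell-time threshold."""

    def __init__(self, dwell_ms: int = 800) -> None:
        self.dwell_ms = dwell_ms

    def select(self, fixation: Fixation) -> Optional[str]:
        # A key counts as "pressed" only after the gaze dwells on it long enough.
        return fixation.key if fixation.duration_ms >= self.dwell_ms else None


@dataclass
class TextInterface:
    """Accumulates selected keys into the message the user is composing."""
    buffer: List[str] = field(default_factory=list)

    def add(self, key: str) -> None:
        if key == "DEL":
            if self.buffer:
                self.buffer.pop()
        else:
            self.buffer.append(key)

    def text(self) -> str:
        return "".join(self.buffer)


class SpeechSynthesizer:
    """Placeholder for a text-to-speech back end."""

    def speak(self, text: str) -> None:
        print(f"[TTS] {text}")


# Wiring the modules together: the user drives everything with gaze alone.
interpreter = GazeInterpreter(dwell_ms=800)
interface = TextInterface()
tts = SpeechSynthesizer()

for fixation in [Fixation("h", 900), Fixation("i", 400), Fixation("i", 850)]:
    key = interpreter.select(fixation)
    if key is not None:
        interface.add(key)

tts.speak(interface.text())   # -> "[TTS] hi"
```

In this sketch the only user input is gaze: fixations shorter than the dwell threshold are ignored, which is one common way gaze-controlled interfaces avoid unintended selections ("Midas touch").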
