Open Access
Online Multimodal Interaction for Speech Interpretation
Author(s) - Vaishali Ingle, Aditi Deshpande
Publication year - 2010
Publication title - International Journal of Computer Applications
Language(s) - English
Resource type - Journals
ISSN - 0975-8887
DOI - 10.5120/398-594
Subject(s) - computer science, interpretation (philosophy), natural language processing, human–computer interaction, speech recognition, artificial intelligence, programming language
In this paper, we describe an implementation of multimodal interaction for speech interpretation that enables access to the Web. EMMA (Extensible MultiModal Annotation markup language), published as a W3C Recommendation on 10 February 2009, is used to translate speech signals into a format that the application language can interpret, greatly simplifying the process of adding multiple input modes to an application. EMMA annotates the interpretation of user input. The lattice is designed by considering the model, the architecture, and the input modalities, and the interpretation of the user's input is expected to be generated by a speech signal interpretation process.
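
EMMA is an XML vocabulary, so its role here can be illustrated directly. The sketch below, written in Python with the standard-library xml.etree.ElementTree module, builds a minimal EMMA document annotating a speech recognizer's N-best output. The element and attribute names (emma:one-of, emma:interpretation, emma:confidence, emma:tokens, emma:medium, emma:mode) come from the EMMA 1.0 Recommendation; the flight-query payload and the confidence values are hypothetical examples, not taken from the paper.

import xml.etree.ElementTree as ET

# EMMA 1.0 namespace (W3C Recommendation, 10 February 2009)
EMMA_NS = "http://www.w3.org/2003/04/emma"
ET.register_namespace("emma", EMMA_NS)

def emma(tag):
    """Qualify a tag or attribute name with the EMMA namespace."""
    return f"{{{EMMA_NS}}}{tag}"

# Root <emma:emma> element
root = ET.Element(emma("emma"), {"version": "1.0"})

# <emma:one-of> groups competing hypotheses from the recognizer (an N-best list);
# emma:medium and emma:mode record that the input was spoken.
one_of = ET.SubElement(root, emma("one-of"), {
    "id": "r1",
    emma("medium"): "acoustic",
    emma("mode"): "voice",
})

# Each <emma:interpretation> annotates one hypothesis with its recognizer
# confidence and recognized tokens; the child element carries the
# application-specific semantics (hypothetical flight-query payload here).
for i, (tokens, conf, city) in enumerate(
        [("flights to boston", "0.75", "Boston"),
         ("flights to austin", "0.45", "Austin")], start=1):
    interp = ET.SubElement(one_of, emma("interpretation"), {
        "id": f"int{i}",
        emma("confidence"): conf,
        emma("tokens"): tokens,
    })
    dest = ET.SubElement(interp, "destination")  # application payload, not EMMA
    dest.text = city

print(ET.tostring(root, encoding="unicode"))

An emma:one-of of this shape is one way the best paths of a recognition lattice can be surfaced to the application; EMMA 1.0 also defines an emma:lattice element for representing the recognizer's full lattice of nodes and arcs directly.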
