Open Access
Introduction to the Special Issue on Machine Learning for Multiple Modalities in Interactive Systems and Robots
Author(s) -
Heriberto Cuayáhuitl,
Lutz Frommberger,
Nina Dethlefs,
Antoine Raux,
Mathew Marge,
Hendrik Zender
Publication year - 2014
Publication title -
ACM Transactions on Interactive Intelligent Systems
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.381
H-Index - 34
eISSN - 2160-6463
pISSN - 2160-6455
DOI - 10.1145/2670539
Subject(s) - modalities , modality (human–computer interaction) , computer science , gesture , human–computer interaction , robot , artificial intelligence , social science , sociology
This special issue highlights research articles that apply machine learning to robots and other systems that interact with users through more than one modality, such as speech, gestures, and vision. For example, a robot may coordinate its speech with its actions, taking (audio-)visual feedback into account during their execution. Machine learning offers interactive systems opportunities to improve performance, not only in individual components but across the system as a whole. However, machine learning methods that encompass multiple modalities of an interactive system remain relatively scarce. The articles in this special issue represent examples that contribute to filling this gap.
