Open Access
Gain-field modulation mechanism in multimodal networks for spatial perception
Author(s) -
Alexandre Pitti,
Arnaud Blanchard,
Matthieu Cardinaux,
Philippe Gaussier
Publication year - 2012
Publication title -
hal (le centre pour la communication scientifique directe)
Language(s) - English
Resource type - Conference proceedings
ISSN - 2164-0572
DOI - 10.1109/humanoids.2012.6651535
Subject(s) - computer science, perception, computer vision, stimulus (psychology), modality (human–computer interaction), modalities, multisensory integration, artificial intelligence, reference frame, speech recognition, frame (networking), psychology, neuroscience
Seeing is not done through the eyes alone: it involves the integration of other modalities, such as auditory, proprioceptive, and tactile information, to locate objects, persons, and also the limbs. We hypothesize that the neural mechanism of gain-field modulation, which is found to perform coordinate transforms between modalities in the superior colliculus and in the parietal area, plays a key role in building such a unified perceptual world. In experiments with a head-neck-eye robot equipped with a camera and microphones, we study how gain-field modulation in neural networks can serve to transcribe one modality's reference frame into another (e.g., audio signals into eye-centered coordinates). It follows that each modality influences the estimate of a stimulus's position (multimodal enhancement). This can be used, for example, to map sound signals into retinal coordinates for audio-visual speech perception.
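The gain-field mechanism described above combines a unit's tuning in one reference frame with a multiplicative gain driven by another signal (such as eye position), so that downstream units can read out a stimulus in a transformed frame. A minimal sketch of this multiplicative interaction is shown below; the function name, Gaussian tuning width, and linear gain slope are illustrative assumptions, not details from the paper.

```python
import numpy as np

def gain_field_response(retinal_pos, eye_pos, pref_retinal, gain_slope=0.05):
    """Toy gain-field unit (illustrative, not the paper's model):
    a Gaussian tuning curve over retinal position, multiplicatively
    scaled by a gain that depends linearly on eye position."""
    tuning = np.exp(-((retinal_pos - pref_retinal) ** 2) / (2 * 10.0 ** 2))
    gain = 1.0 + gain_slope * eye_pos  # eye position modulates response amplitude
    return gain * tuning

# Same retinal stimulus at the unit's preferred location, two eye positions:
# the tuning peak is unchanged, but the amplitude is scaled by eye position,
# which is what lets a population of such units encode head-centered location.
print(gain_field_response(retinal_pos=0.0, eye_pos=0.0, pref_retinal=0.0))   # 1.0
print(gain_field_response(retinal_pos=0.0, eye_pos=20.0, pref_retinal=0.0))  # 2.0
```

A population of such units, each with a different preferred retinal position and gain slope, can be linearly read out to recover the stimulus position in a different frame (e.g., audio signals mapped into eye-centered coordinates), since position in the new frame is encoded jointly in which units fire and how strongly.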