Deconstructing multisensory enhancement in detection
Author(s) -
Mario Pannunzi,
Alexis Pérez-Bellido,
Alexandre Pereda-Baños,
Joan López-Moliner,
Gustavo Deco,
Salvador Soto-Faraco
Publication year - 2014
Publication title -
Journal of Neurophysiology
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.302
H-Index - 245
eISSN - 1522-1598
pISSN - 0022-3077
DOI - 10.1152/jn.00341.2014
Subject(s) - psychophysics , perception , modalities , statistical model , probabilistic logic , benchmark , machine learning , artificial intelligence , contrast (vision) , detection task , psychology , neuroscience
The mechanisms responsible for the integration of sensory information from different modalities have become a topic of intense interest in psychophysics and neuroscience. Many authors now claim that early, sensory-based cross-modal convergence improves performance in detection tasks. An important strand of supporting evidence for this claim is based on statistical models such as the Pythagorean model or the probabilistic summation model. These models establish statistical benchmarks representing the best predicted performance under the assumption that there are no interactions between the two sensory paths. Following this logic, when observed detection performance surpasses the predictions of these models, it is often inferred that the improvement indicates cross-modal convergence. We present a theoretical analysis scrutinizing some of these models and the statistical criteria most frequently used to infer early cross-modal interactions during detection tasks. Our analysis shows how some common misinterpretations of these models lead to their inadequate use and, in turn, to contradictory results and misleading conclusions. To further illustrate the latter point, we introduce a model that accounts for performance in multimodal detection tasks but for which surpassing the Pythagorean or probabilistic summation benchmark can be explained without resorting to early cross-modal interactions. Finally, we report three experiments that put our theoretical interpretation to the test and further propose how to adequately measure multimodal interactions in audiotactile detection tasks.
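The two benchmarks named in the abstract have standard textbook forms: probability summation predicts the bimodal hit rate of two independent detectors, and the Pythagorean model predicts the bimodal sensitivity (d') from optimal combination of two independent Gaussian channels. A minimal sketch of both, using illustrative (not paper-derived) input values:

```python
import math

def probability_summation(p_a: float, p_t: float) -> float:
    """Predicted detection probability for two independent channels
    (e.g., auditory and tactile): detect if either channel detects."""
    return 1.0 - (1.0 - p_a) * (1.0 - p_t)

def pythagorean_dprime(d_a: float, d_t: float) -> float:
    """Predicted bimodal sensitivity from optimal linear combination
    of two independent Gaussian channels (the 'Pythagorean' benchmark)."""
    return math.sqrt(d_a**2 + d_t**2)

# Illustrative unimodal values, chosen for the example only:
p_bimodal = probability_summation(0.6, 0.5)   # 0.8
d_bimodal = pythagorean_dprime(1.0, 1.0)      # sqrt(2) ≈ 1.414
```

The paper's central caution is that observed performance above these numbers need not imply early sensory convergence; the benchmarks themselves encode independence assumptions that can be violated by later, decision-level factors.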
Accelerating Research
John Eccles House, Robert Robinson Avenue,
Oxford Science Park, Oxford
OX4 4GP, United Kingdom