Bumble bees display cross-modal object recognition between visual and tactile senses
Author(s) -
Cwyn Solvi,
Selene Gutierrez Al-Khudhairy,
Lars Chittka
Publication year - 2020
Publication title -
Science
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 12.556
H-Index - 1186
eISSN - 1095-9203
pISSN - 0036-8075
DOI - 10.1126/science.aay8064
Subject(s) - mental image , modal , cognitive psychology , cognitive neuroscience of visual object recognition , cognition , psychology , cognitive science , neuroscience
Many animals can associate object shapes with incentives. However, such behavior is possible without storing images of shapes in memory that are accessible to more than one sensory modality. One way to explore whether there are modality-independent internal representations of object shapes is to investigate cross-modal recognition: experiencing an object in one sensory modality and later recognizing it in another. We show that bumble bees trained to discriminate two differently shaped objects (cubes and spheres) using only touch (in darkness) or vision (in light, but barred from touching the objects) could subsequently discriminate those same objects using only the other sensory modality. Our experiments demonstrate that bumble bees possess the ability to integrate sensory information in a way that requires modality-independent internal representations.