Open Access
Learning Directions of Objects Specified by Vision, Spatial Audition, or Auditory Spatial Language: Figure 1.
Author(s) -
Roberta L. Klatzky,
Yvonne Lippa,
Jack M. Loomis,
Reginald G. Golledge
Publication year - 2002
Publication title -
Learning & Memory
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.228
H-Index - 136
eISSN - 1549-5485
pISSN - 1072-0502
DOI - 10.1101/lm.51702
Subject(s) - modality (human–computer interaction) , modalities , object (grammar) , psychology , spatial analysis , computer science , spatial ability , communication , artificial intelligence , neuroscience , cognition , social science , remote sensing , sociology , geology
The modality by which object azimuths (directions) are presented affects learning of multiple locations. In Experiment 1, participants learned sets of three and five object azimuths specified by a visual virtual environment, spatial audition (3D sound), or auditory spatial language. Five azimuths were learned faster when specified by spatial modalities (vision, audition) than by language. Experiment 2 equated the modalities for proprioceptive cues and eliminated spatial cues unique to vision (optic flow) and audition (differential binaural signals). There remained a learning disadvantage for spatial language. We attribute this result to the cost of indirect processing from words to spatial representations.
