Open Access
Evaluating Two Ways to Train Sensitivity to Echoes to Improve Echolocation
Author(s) -
Laurie M. Heller,
Arley Schenker,
Pulkit Grover,
Madeline Gardner,
Felix Liu
Publication year - 2017
Language(s) - English
Resource type - Conference proceedings
DOI - 10.21785/icad2017.053
Subject(s) - human echolocation , headphones , speech recognition , virtual reality , training , audiology , human–computer interaction , psychology , acoustics , neuroscience
We investigated whether training sighted individuals to attend to information in echoes could improve their active echolocation ability. We evaluated two training techniques that involved artificially generated sounds. Both techniques were evaluated by their effect on natural echolocation of real objects with self-generated clicks. One group trained by discriminating sounds presented over headphones in the lab; the lateral displacement or distance of the echo was varied in a staircase procedure. The second training group used an echolocation app on a smartphone, navigating a maze by using echo cues presented over earbuds. The echo cues included 3D virtual-reality audio cues. Participants in the control condition did not improve, but the majority of participants who trained did improve. The lab training was labor intensive, whereas the app training was self-guided and convenient. This has implications for training methods aimed at echolocation that might ultimately be useful for navigation by visually impaired individuals.
