Using Auditory Display Techniques to Enhance Decision Making and Perceive Changing Environmental Data Within a 3D Virtual Game Environment
Author(s) - James Broderick, Jim Duggan, Sam Redfern
Publication year - 2017
Language(s) - English
Resource type - Conference proceedings
DOI - 10.21785/icad2017.012
Subject(s) - sonification, auditory display, computer science, human–computer interaction, perception, multimedia, virtual reality, game engine, computer graphics
…environmental data captured by sensors within the building as visual and audio sources. With several hundred sensors around the building measuring humidity, temperature, noise levels, and recordings of actual sound, it can be difficult to discern between different sources when relying purely on visual representations of the data. The author explored the use of sonification to represent data as sound in addition to the visualizations. Being able to experience data both audibly and visually makes it easier for users not only to understand the environment, but also to discern where different sources of information begin and end, and how they are located relative to one another. As users analyzed the virtual environment, they could better map out how busy different areas were at particular times, how occupancy affected temperature, and so on. Given this, it is natural to imagine that the same user could move from simply using this technique to understand the environmental data towards using this information to adjust their real-time actions or larger goals in the environment.

In fact, it has already been shown that, just as in our real-life experiences, adding audio cues to our virtual environments aids our navigational ability. Several studies have used auditory navigation waypoints at specific goals or locations to help users move through an environment [3][4]. Grohn and Lokki measured improvements in users' ability to find objects within an environment when visual, audio, and audio-visual cues represented the target goals. While audio on its own was the least successful method of locating goal objects, combining visual and audio cues led to users finding many more objects within a set time constraint. It was also found that users would use audio cues first to roughly locate an object before relying on visual stimuli for the final approach. Walker and Lindsay also looked at using auditory waypoints for navigation of an environment, specifically how users would …
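The sonification approach described above maps changing sensor readings onto audible parameters so that users can track the data by ear alongside the visuals. Below is a minimal sketch of such a parameter-mapping scheme, assuming a simple linear rescaling of sensor values onto pitch and gain; the function names, sensor ranges, and frequency and gain targets are illustrative assumptions, not the authors' implementation.

def map_range(value, in_lo, in_hi, out_lo, out_hi):
    """Linearly rescale value from [in_lo, in_hi] to [out_lo, out_hi]."""
    value = max(in_lo, min(in_hi, value))      # clamp out-of-range readings
    t = (value - in_lo) / (in_hi - in_lo)
    return out_lo + t * (out_hi - out_lo)

def sonify_temperature(temp_c):
    """Hypothetical mapping: 10-35 degrees C onto a 220-880 Hz tone."""
    return map_range(temp_c, 10.0, 35.0, 220.0, 880.0)

def sonify_noise_level(noise_db):
    """Hypothetical mapping: 30-90 dB ambient noise onto playback gain 0-1."""
    return map_range(noise_db, 30.0, 90.0, 0.0, 1.0)

if __name__ == "__main__":
    # Example sensor snapshot; values and ranges are illustrative only.
    print(f"22.5 C -> {sonify_temperature(22.5):.1f} Hz tone")
    print(f"65 dB  -> gain {sonify_noise_level(65.0):.2f}")

In a 3D game environment, the returned frequency and gain could then drive a spatialized audio source placed at the sensor's position, so that the cue also carries the locational information discussed above.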