Using wavelets to synthesize stochastic-based sounds for immersive virtual environments
Author(s) -
Nadine E. Miner,
Thomas P. Caudell
Publication year - 2005
Publication title -
ACM Transactions on Applied Perception
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.265
H-Index - 49
eISSN - 1544-3965
pISSN - 1544-3558
DOI - 10.1145/1101530.1101552
Subject(s) - computer science , wavelet , variety (cybernetics) , set (abstract data type) , process (computing) , sound (geography) , perception , speech recognition , virtual machine , human–computer interaction , artificial intelligence , acoustics , physics , neuroscience , biology , programming language , operating system
Stochastic, or nonpitched, sounds fill our real-world environment. Humans almost continuously hear stochastic sounds, such as wind, rain, motor sounds, and different types of impact sounds. Because of their prevalence in real-world environments, it is important to include these types of sounds in realistic virtual environment simulations. This paper describes a synthesis approach that uses wavelets to model stochastic-based sounds. Parameterizing the wavelet models yields a variety of related sounds from a small set of models. The result is dynamic sound models that can change in response to changes in the virtual environment. This paper describes the sound synthesis process, several developed models, and the ongoing perceptual experiments for validating the fidelity of the synthesized sounds. The developed models and results demonstrate proof of concept and illustrate the potential of this approach.
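The abstract outlines the general pipeline (analyze a template sound with a wavelet transform, parameterize the coefficients as a compact model, then resynthesize variants by adjusting those parameters) without giving details. The sketch below illustrates that idea only; it is not the authors' method. It uses a Haar wavelet for simplicity, and the per-band gains and sign-randomization scheme are illustrative assumptions.

```python
import numpy as np

def haar_dwt(x):
    """One level of the Haar discrete wavelet transform."""
    x = x[: len(x) // 2 * 2]  # even length
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)
    return approx, detail

def haar_idwt(approx, detail):
    """Inverse of one Haar transform level."""
    x = np.empty(2 * len(approx))
    x[0::2] = (approx + detail) / np.sqrt(2)
    x[1::2] = (approx - detail) / np.sqrt(2)
    return x

def synthesize(template, band_gains, rng):
    """Resynthesize a stochastic sound from a template: keep each
    band's coefficient magnitudes (the 'model'), scale them by a
    per-band gain parameter, and randomize signs so every call
    yields a new, statistically similar instance."""
    details = []
    approx = template
    for _ in band_gains:                 # multi-level decomposition
        approx, d = haar_dwt(approx)
        details.append(d)
    new_details = []
    for d, g in zip(details, band_gains):
        signs = rng.choice([-1.0, 1.0], size=len(d))
        new_details.append(g * np.abs(d) * signs)
    for d in reversed(new_details):      # reconstruction
        approx = haar_idwt(approx, d)
    return approx

rng = np.random.default_rng(0)
template = rng.standard_normal(1024)  # stand-in for a recorded sound
gains = [1.0, 1.2, 0.8]               # hypothetical per-band parameters
out = synthesize(template, gains, rng)
print(len(out))  # same length as the template: 1024
```

Varying `gains` (e.g. boosting fine-detail bands for a "harder" rain sound) is the kind of parameterization that lets a small set of models cover a family of related sounds.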