Open Access
User-guided Rendering of Audio Objects Using an Interactive Genetic Algorithm
Author(s) -
Alex James Wilson,
Bruno Fazenda
Publication year - 2019
Publication title -
Journal of the Audio Engineering Society
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.234
H-Index - 60
ISSN - 1549-4950
DOI - 10.17743/jaes.2019.0035
Subject(s) - computer science , rendering (computer graphics) , computer graphics (images) , genetic algorithm , computer vision , human–computer interaction , multimedia , artificial intelligence , machine learning
Object-based audio allows for personalisation of content, perhaps to improve accessibility or to increase quality of experience more generally. This paper describes the design and evaluation of an interactive audio renderer that optimises an audio mix based on the feedback of the listener. A panel of 14 trained participants was recruited to trial the system. The range of audio mixes produced using the proposed system was comparable to the range achieved with a traditional fader-based mixing interface. Evaluation using the System Usability Scale showed a low level of physical and mental burden, making this a suitable interface for users with visual and/or mobility impairments.
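The optimisation loop the abstract describes — evolving candidate mixes from listener preference ratings — can be sketched as an interactive genetic algorithm over per-object fader gains. The encoding, operators, and parameters below are illustrative assumptions, not the paper's actual renderer; in the real interactive setting the `rate` function would be a human listener's judgment rather than code.

```python
import random

# Illustrative IGA sketch: a candidate solution is a vector of linear
# gains, one per audio object. All names and parameters are assumptions.
N_OBJECTS = 4        # number of audio objects in the mix (assumed)
POP_SIZE = 8
GENERATIONS = 10
MUTATION_STD = 0.1

def random_mix():
    """A candidate mix: one linear gain in [0, 1] per audio object."""
    return [random.random() for _ in range(N_OBJECTS)]

def crossover(a, b):
    """Uniform crossover: each gain is inherited from either parent."""
    return [random.choice(pair) for pair in zip(a, b)]

def mutate(mix):
    """Gaussian perturbation of each gain, clipped to [0, 1]."""
    return [min(1.0, max(0.0, g + random.gauss(0.0, MUTATION_STD)))
            for g in mix]

def evolve(rate, pop_size=POP_SIZE, generations=GENERATIONS):
    """Run the IGA loop. `rate` stands in for the listener: it maps a
    candidate mix to a preference score (higher is better)."""
    population = [random_mix() for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(population, key=rate, reverse=True)
        parents = scored[: pop_size // 2]          # truncation selection
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=rate)

# Stand-in "listener" that prefers a fixed target mix (for testing only;
# a human rating replaces this in the interactive system).
target = [0.8, 0.5, 0.3, 0.6]
def simulated_listener(mix):
    return -sum((g - t) ** 2 for g, t in zip(mix, target))

best = evolve(simulated_listener)
```

Because fitness comes from a person, interactive GAs typically keep the population and generation counts small, as above, to limit listener fatigue — a design pressure that purely automatic optimisers do not face.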
