Open Access
Speaker-independent auditory attention decoding without access to clean speech sources
Author(s) - Cong Han, James O’Sullivan, Yi Luo, Jose L. Herrero, Ashesh D. Mehta, Nima Mesgarani
Publication year - 2019
Publication title - Science Advances
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 5.928
H-Index - 146
ISSN - 2375-2548
DOI - 10.1126/sciadv.aav6134
Subject(s) - speech recognition, computer science, decoding methods, active listening, perception, speech perception, cued speech, audiology, psychology, communication, cognitive psychology, telecommunications, medicine, neuroscience
Speech perception in crowded environments is challenging for hearing-impaired listeners. Assistive hearing devices cannot attenuate interfering speakers without knowing which speaker the listener is focusing on. One possible solution is auditory attention decoding, in which the brainwaves of listeners are compared with sound sources to determine the attended source, which can then be amplified to facilitate hearing. In realistic situations, however, only the mixed audio is available. We utilize a novel speech separation algorithm to automatically separate the speakers in mixed audio, with no need for prior training on the specific speakers. Our results show that auditory attention decoding with automatically separated speakers is as accurate and fast as decoding with clean speech sounds. The proposed method significantly improves both the subjective and objective quality of the attended speaker. Our study addresses a major obstacle to the actualization of auditory attention decoding that can assist hearing-impaired listeners and reduce listening effort for normal-hearing subjects.
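As a rough illustration of the decoding scheme the abstract describes (comparing a stimulus representation reconstructed from neural recordings against each automatically separated source), the sketch below shows a simple correlation-based attention decoder. It is not the authors' implementation: the linear stimulus-reconstruction decoder, ridge regularization, lag range, and synthetic signals are all assumptions introduced for illustration; in practice the candidate envelopes would come from the speech separation network applied to the recorded mixture.

```python
# Minimal sketch of correlation-based auditory attention decoding (AAD)
# over separated sources. Illustrative only; decoder design, lags, and
# data below are assumptions, not the paper's method or parameters.
import numpy as np

def lagged(x, lags):
    """Stack time-lagged copies of neural channels (T x C) into (T x C*len(lags))."""
    T, C = x.shape
    out = np.zeros((T, C * len(lags)))
    for i, lag in enumerate(lags):
        shifted = np.roll(x, lag, axis=0)
        shifted[:max(lag, 0)] = 0.0
        out[:, i * C:(i + 1) * C] = shifted
    return out

def train_decoder(neural, attended_env, lags, ridge=1e2):
    """Fit a ridge-regularized linear map from lagged neural data to the attended envelope."""
    X = lagged(neural, lags)
    return np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ attended_env)

def decode_attention(neural, decoder, separated_envs, lags):
    """Reconstruct the attended envelope and pick the separated source it correlates with most."""
    recon = lagged(neural, lags) @ decoder
    corrs = [np.corrcoef(recon, env)[0, 1] for env in separated_envs]
    return int(np.argmax(corrs)), corrs

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    T, C = 5000, 16                          # samples, neural channels (synthetic)
    lags = range(0, 16)                      # assumed lag window, e.g. ~0-250 ms at 64 Hz
    env_a = np.abs(rng.standard_normal(T))   # envelope of speaker A (attended)
    env_b = np.abs(rng.standard_normal(T))   # envelope of speaker B (ignored)
    # Synthetic neural data driven by the attended envelope plus noise.
    neural = np.outer(env_a, rng.standard_normal(C)) + rng.standard_normal((T, C))
    decoder = train_decoder(neural, env_a, lags)
    # env_a / env_b stand in for the outputs of the speech separation stage.
    winner, corrs = decode_attention(neural, decoder, [env_a, env_b], lags)
    print(f"decoded attended source: {winner}, correlations: {corrs}")
```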
