Open Access
Temporally pre-presented lipreading cues release speech from informational masking
Author(s) -
Chao Wu,
Shuyang Cao,
Xihong Wu,
Liang Li
Publication year - 2013
Publication title -
The Journal of the Acoustical Society of America
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.619
H-Index - 187
eISSN - 1520-8524
pISSN - 0001-4966
DOI - 10.1121/1.4794933
Subject(s) - masking , priming , speech recognition , speech perception , sentence , psychology , perception , neuroscience
Listeners can use temporally pre-presented content cues and concurrently presented lipreading cues to improve speech recognition under masking conditions. This study investigated whether temporally pre-presented lipreading cues also unmask speech. In each test trial, before the target sentence was co-presented with the masker, either a target-matched lipreading video (priming condition) or a static-face video (priming-control condition) was presented in quiet. Participants' target-recognition performance improved from the priming-control condition to the priming condition when the masker was speech, but not when it was noise. This release from informational masking suggests a combined effect of working memory and cross-modal integration on selective attention to target speech.
