Electrocorticography reveals continuous auditory and visual speech tracking in temporal and occipital cortex
Author(s) - Micheli Cristiano, Schepers Inga M., Ozker Müge, Yoshor Daniel, Beauchamp Michael S., Rieger Jochem W.
Publication year - 2020
Publication title - European Journal of Neuroscience
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.346
H-Index - 206
eISSN - 1460-9568
pISSN - 0953-816X
DOI - 10.1111/ejn.13992
Subject(s) - neurocomputational speech processing, auditory cortex, temporal cortex, psychology, electrocorticography, speech recognition, perception, speech perception, occipital lobe, superior temporal gyrus, neuroscience, functional magnetic resonance imaging, computer science, electroencephalography
During natural speech perception, humans must parse temporally continuous auditory and visual speech signals into sequences of words. However, most studies of speech perception present only single words or syllables. We used electrocorticography (subdural electrodes implanted on the brains of epileptic patients) to investigate the neural mechanisms for processing continuous audiovisual speech signals consisting of individual sentences. Using partial correlation analysis, we found that posterior superior temporal gyrus (pSTG) and medial occipital cortex tracked both the auditory and the visual speech envelopes. These same regions, as well as inferior temporal cortex, responded more strongly to a dynamic video of a talking face than to auditory speech paired with a static face. Occipital cortex and pSTG thus carry temporal information about both auditory and visual speech dynamics. Visual speech tracking in pSTG may be a mechanism for enhancing perception of degraded auditory speech.
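To illustrate the kind of partial correlation analysis the abstract refers to, here is a minimal Python sketch: it correlates a neural envelope with the auditory speech envelope while regressing out the visual speech envelope. This is not the authors' pipeline; the signal names, the simulated data, and the envelope-extraction choice (Hilbert transform) are assumptions for illustration only.

```python
import numpy as np
from scipy.signal import hilbert
from scipy.stats import pearsonr

def envelope(signal):
    """Amplitude envelope via the Hilbert transform (one common choice)."""
    return np.abs(hilbert(signal))

def partial_corr(x, y, z):
    """Partial correlation of x and y controlling for z:
    correlate the residuals after regressing each signal on z."""
    design = np.column_stack([np.ones_like(z), z])  # intercept + confound
    rx = x - design @ np.linalg.lstsq(design, x, rcond=None)[0]
    ry = y - design @ np.linalg.lstsq(design, y, rcond=None)[0]
    return pearsonr(rx, ry)

# Hypothetical, pre-aligned signals sampled at a common rate:
rng = np.random.default_rng(0)
neural = rng.standard_normal(5000)   # e.g., high-gamma power at one electrode
audio = rng.standard_normal(5000)    # auditory speech waveform
visual = rng.standard_normal(5000)   # e.g., a mouth-opening time course

# Auditory tracking in the neural signal, controlling for visual speech:
r, p = partial_corr(neural, envelope(audio), visual)
print(f"partial r = {r:.3f}, p = {p:.3g}")
```

Swapping the roles of the auditory and visual regressors would give the complementary measure, visual tracking with the auditory envelope partialled out.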