Task‐free auditory EEG paradigm for probing multiple levels of speech processing in the brain
Author(s) - Gansonre Christelle, Højlund Andreas, Leminen Alina, Bailey Christopher, Shtyrov Yury
Publication year - 2018
Publication title - Psychophysiology
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.661
H-Index - 156
eISSN - 1469-8986
pISSN - 0048-5772
DOI - 10.1111/psyp.13216
Subject(s) - psychology , electroencephalography , speech processing , comprehension , cognitive psychology , speech perception , speech recognition , perception , neuroscience
While previous studies on language processing have highlighted several ERP components in relation to specific stages of sound and speech processing, no study has yet combined them to obtain a comprehensive picture of language abilities in a single session. Here, we propose a novel task‐free paradigm aimed at assessing multiple levels of speech processing by combining various speech and nonspeech sounds in an adaptation of a multifeature passive oddball design. We recorded EEG in healthy adult participants, who were presented with these sounds in the absence of sound‐directed attention while engaged in a primary visual task. This produced a range of responses indexing various levels of sound processing and language comprehension: (a) the P1‐N1 complex, indexing obligatory auditory processing; (b) P3‐like dynamics associated with involuntary attention allocation to unusual sounds; (c) enhanced responses to native speech (as opposed to nonnative phonemes) from ∼50 ms after phoneme onset, indicating phonological processing; (d) an amplitude advantage for familiar real words over meaningless pseudowords, indexing automatic lexical access; (e) differences in the topographic distribution of cortical activation for action verbs versus concrete nouns, likely linked to the processing of lexical semantics. These multiple indices of speech‐sound processing were acquired in a single attention‐free setup that does not require any task or subject cooperation; pending future research, the present protocol may be developed into a useful tool for assessing the status of auditory and linguistic functions in uncooperative or unresponsive participants, including a range of clinical and developmental populations.
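The abstract describes recording EEG during passive, multifeature oddball presentation and contrasting ERPs across stimulus types (e.g., real words vs. pseudowords). As an illustration only, below is a minimal sketch of how such condition-wise ERPs might be extracted with MNE-Python; the file name, trigger channel, event codes, and filter/epoch settings are assumptions for the example, not details taken from the paper.

```python
import mne

# Hypothetical recording and trigger scheme (not the authors' actual files/codes)
raw = mne.io.read_raw_fif("passive_oddball_raw.fif", preload=True)
raw.filter(l_freq=0.1, h_freq=30.0)  # typical ERP band-pass

events = mne.find_events(raw, stim_channel="STI 014")
event_id = {"word": 1, "pseudoword": 2,
            "native_phoneme": 3, "nonnative_phoneme": 4}

# Epoch around sound onset and reject high-amplitude artifacts
epochs = mne.Epochs(raw, events, event_id=event_id,
                    tmin=-0.1, tmax=0.6, baseline=(None, 0),
                    reject=dict(eeg=100e-6), preload=True)

# Average within each condition to obtain ERPs
evokeds = {cond: epochs[cond].average() for cond in event_id}

# Example contrast: lexical access (word minus pseudoword)
lexical_contrast = mne.combine_evoked(
    [evokeds["word"], evokeds["pseudoword"]], weights=[1, -1])
lexical_contrast.plot_joint()
```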