Infants' Use of Synchronized Visual Information to Separate Streams of Speech
Author(s) -
Hollich, George,
Newman, Rochelle S.,
Jusczyk, Peter W.
Publication year - 2005
Publication title -
Child Development
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 3.103
H-Index - 257
eISSN - 1467-8624
pISSN - 0009-3920
DOI - 10.1111/j.1467-8624.2005.00866.x
Subject(s) - psychology, task (project management), communication, loudness, speech recognition, computer vision, computer science, management, economics
In 4 studies, 7.5-month-olds used synchronized visual–auditory correlations to separate a target speech stream when a distractor passage was presented at equal loudness. Infants succeeded in a segmentation task (using the head-turn preference procedure with video familiarization) when a video of the talker's face was synchronized with the target passage (Experiment 1, N = 30). Infants did not succeed in this task when an unsynchronized (Experiment 2, N = 30) or static (Experiment 3, N = 30) face was presented during familiarization. Infants also succeeded when viewing a synchronized oscilloscope pattern (Experiment 4, N = 26), suggesting that their ability to use visual information is related to domain-general sensitivities to any synchronized auditory–visual correspondence.