Open Access
Children flexibly seek visual information to support signed and spoken language comprehension.
Author(s) - Kyle MacDonald, Virginia A. Marchman, Anne Fernald, Michael C. Frank
Publication year - 2020
Publication title - Journal of Experimental Psychology: General
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 2.521
H-Index - 161
eISSN - 1939-2222
pISSN - 0096-3445
DOI - 10.1037/xge0000702
Subject(s) - comprehension, gaze, american sign language, psycinfo, psychology, eye tracking, sign language, eye movement, spoken language, cognitive psychology, visual language, first language, joint attention, linguistics, computer science, developmental psychology, natural language processing, autism, artificial intelligence, philosophy, medline, neuroscience, political science, psychoanalysis, law
During grounded language comprehension, listeners must link the incoming linguistic signal to the visual world despite uncertainty in the input. Information gathered through visual fixations can facilitate understanding. But do listeners flexibly seek supportive visual information? Here, we propose that even young children can adapt their gaze and actively gather information for the goal of language comprehension. We present 2 studies of eye movements during real-time language processing, where the value of fixating on a social partner varies across different contexts. First, compared with children learning spoken English (n = 80), young American Sign Language (ASL) learners (n = 30) delayed gaze shifts away from a language source and produced a higher proportion of language-consistent eye movements. This result provides evidence that ASL learners adapt their gaze to effectively divide attention between language and referents, which both compete for processing via the visual channel. Second, English-speaking preschoolers (n = 39) and adults (n = 31) fixated longer on a speaker's face while processing language in a noisy auditory environment. Critically, like the ASL learners in Experiment 1, this delay resulted in gathering more visual information and a higher proportion of language-consistent gaze shifts. Taken together, these studies suggest that young listeners can adapt their gaze to seek visual information from social partners to support real-time language comprehension. (PsycInfo Database Record © 2020 APA, all rights reserved).
