Effects of prior knowledge and joint attention on learning from eye movement modelling examples
Author(s) -
Chisari Lucia B.,
Mockevičiūtė Akvilė,
Ruitenburg Sterre K.,
van Vemde Lian,
Kok Ellen M.,
van Gog Tamara
Publication year - 2020
Publication title -
Journal of Computer Assisted Learning
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.583
H-Index - 93
eISSN - 1365-2729
pISSN - 0266-4909
DOI - 10.1111/jcal.12428
Subject(s) - eye movement, eye tracking, task (project management), synchronization (alternating current), affect (linguistics), computer science, cognitive psychology, knowledge of results, psychology, movement (music), dynamics (music), mechanism (biology), artificial intelligence, communication, computer network, pedagogy, channel (broadcasting), philosophy, management, epistemology, economics, aesthetics
Eye movement modelling examples (EMMEs) are instructional videos of a model's demonstration and explanation of a task that also show where the model is looking. EMMEs are expected to synchronize students' visual attention with the model's, leading to better learning than regular video modelling examples (MEs). However, synchronization is seldom directly tested. Moreover, recent research suggests that EMMEs might be more effective than MEs for low prior knowledge learners. We therefore used a 2 × 2 between‐subjects design to investigate whether the effectiveness of EMMEs (EMMEs/MEs) is moderated by prior knowledge (high/low, manipulated by pretraining), applying eye tracking to assess synchronization. Contrary to expectations, EMMEs did not lead to higher learning outcomes than MEs, and no interaction with prior knowledge was found. Structural equation modelling shows the mechanism through which EMMEs affect learning: seeing the model's eye movements helped learners to look faster at referenced information, which was associated with higher learning outcomes.
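The design and mediation chain described in the abstract (example type → how quickly learners fixate referenced information → learning outcome) can be illustrated with a rough analysis sketch. The snippet below is not the authors' analysis script: it uses simulated data, hypothetical column names (example_type, prior_knowledge, time_to_fixation, posttest_score), and substitutes a simple regression-based mediation check for the structural equation model reported in the paper.

```python
# Minimal sketch (assumed, not the authors' analysis): a 2x2 between-subjects
# ANOVA plus a regression-based check of the path
# "example type -> time to fixate referenced information -> learning outcome".
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 120  # hypothetical sample size

df = pd.DataFrame({
    "example_type": rng.choice(["EMME", "ME"], size=n),      # video condition
    "prior_knowledge": rng.choice(["high", "low"], size=n),  # manipulated via pretraining
})
# Simulated mediator and outcome: EMME viewers fixate referenced information
# sooner, and faster fixation goes with better post-test scores.
df["time_to_fixation"] = (
    2.0 - 0.5 * (df["example_type"] == "EMME") + rng.normal(0, 0.5, n)
)
df["posttest_score"] = 10 - 1.5 * df["time_to_fixation"] + rng.normal(0, 1.0, n)

# 2x2 ANOVA: example type x prior knowledge on learning outcomes.
anova_model = smf.ols(
    "posttest_score ~ C(example_type) * C(prior_knowledge)", data=df
).fit()
print(sm.stats.anova_lm(anova_model, typ=2))

# Two-step mediation check (illustrative stand-in for the paper's SEM):
# path a: condition predicts the mediator; paths b/c': mediator and condition
# jointly predict the outcome.
path_a = smf.ols("time_to_fixation ~ C(example_type)", data=df).fit()
path_bc = smf.ols(
    "posttest_score ~ time_to_fixation + C(example_type)", data=df
).fit()
print(path_a.params, path_bc.params, sep="\n")
```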