Temporal analysis of multimodal data to predict collaborative learning outcomes
Author(s) -
Olsen Jennifer K.,
Sharma Kshitij,
Rummel Nikol,
Aleven Vincent
Publication year - 2020
Publication title -
British Journal of Educational Technology
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.79
H-Index - 95
eISSN - 1467-8535
pISSN - 0007-1013
DOI - 10.1111/bjet.12982
Subject(s) - computer science , modalities , dialog box , modality (human–computer interaction) , multimodal learning , gaze , tutor , artificial intelligence , machine learning , world wide web , social science , sociology , programming language
The analysis of multiple data streams is a long‐standing practice within educational research. Both multimodal data analysis and temporal analysis have been applied successfully, but in the area of collaborative learning, very few studies have investigated the specific advantages of multiple modalities versus a single modality, especially in combination with temporal analysis. In this paper, we investigate how both the use of multimodal data and moving from averages and counts to temporal aspects in a collaborative setting provides a better prediction of learning gains. To address these questions, we analyze multimodal data collected from 25 9–11‐year‐old dyads using a fractions intelligent tutoring system. Assessing the relation of dual gaze, tutor log, audio and dialog data to students' learning gains, we find that a combination of modalities, especially those at a smaller time scale, such as gaze and audio, provides a more accurate prediction of learning gains than models with a single modality. Our work contributes to the understanding of how analyzing multimodal data in a temporal manner provides additional information about the collaborative learning process.
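The abstract's central claim, that combining modalities yields a more accurate prediction of learning gains than any single modality, can be illustrated with a minimal sketch. The code below is a hypothetical toy example (not the authors' actual analysis or data): it generates synthetic per-dyad features for two assumed modalities, `gaze` and `audio`, fits ordinary least-squares models by hand, and compares the R² of a single-modality model against a multimodal one.

```python
import numpy as np
from numpy.linalg import lstsq

# Hypothetical setup: 25 dyads, as in the study, with one summary
# feature per modality. The data and coefficients are invented.
rng = np.random.default_rng(0)
n = 25
gaze = rng.normal(size=n)    # assumed gaze-based feature (e.g., joint attention)
audio = rng.normal(size=n)   # assumed audio-based feature (e.g., speech overlap)
# Toy learning gains depending on both modalities, plus noise.
gains = 0.6 * gaze + 0.4 * audio + rng.normal(scale=0.3, size=n)

def r_squared(X, y):
    """Fit OLS with an intercept and return the coefficient of determination."""
    X1 = np.column_stack([np.ones(len(y)), X])
    coef, *_ = lstsq(X1, y, rcond=None)
    resid = y - X1 @ coef
    return 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

r2_single = r_squared(gaze[:, None], gains)                     # gaze only
r2_multi = r_squared(np.column_stack([gaze, audio]), gains)     # gaze + audio
print(f"single-modality R² = {r2_single:.3f}, multimodal R² = {r2_multi:.3f}")
```

Because the single-modality model is nested inside the multimodal one, the in-sample R² of the combined model can never be lower; the study's contribution is showing that the improvement also holds predictively, and grows when temporal (rather than aggregate) features are used.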