The modality effect in a mobile learning environment: Learning from spoken text and real objects
Author(s) -
Liu, Tzu-Chien,
Lin, Yi-Chun,
Gao, Yuan,
Paas, Fred
Publication year - 2019
Publication title -
British Journal of Educational Technology
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.79
H-Index - 95
eISSN - 1467-8535
pISSN - 0007-1013
DOI - 10.1111/bjet.12605
Subject(s) - modality (human–computer interaction) , comprehension , multimedia , mathematics education , mobile device , cognitive psychology , psychology , human–computer interaction
The finding that under split‐attention conditions students learn more from a picture and spoken text than from a picture and written text (i.e., the modality effect) has consistently been found in many types of computer‐assisted multimedia learning environments. Using 58 fifth‐grade and sixth‐grade elementary school children as participants, we investigated whether the modality effect can also be found in a mobile learning environment (MLE) on plants' leaf morphology, in which students had to learn by integrating information from text and real plants in the physical environment. A single‐factor experimental design was used to examine the hypothesis that students in a mixed‐mode condition with real plants and spoken text (STP condition) would pay more attention to the real plants, and achieve higher performance on retention, comprehension, and transfer tests than students in a single‐mode condition with real plants and written text (WTP condition). Although participants in the STP condition paid more attention to observing the plants and achieved a higher score on the transfer test than participants in the WTP condition, no differences were found between the conditions on retention and comprehension test performance.