A Laboratory Study of Student Usage of Worked-example Videos to Support Problem Solving
Author(s) -
Edward Berger,
Michael W. Wilson
Publication year - 2016
Language(s) - English
Resource type - Conference proceedings
DOI - 10.18260/p.26342
Subject(s) - computer science, task (project management), process (computing), cognitive load, multimedia, perception, mathematical problem, mathematics education, cognition, psychology, management, neuroscience, economics, operating system
Despite the commonplace use of video resources in engineering instruction, our understanding of precisely how students use such videos to support their problem solving and learning is incomplete. Researchers generally find that both students and faculty like using instructional videos (if 'well constructed'), especially in the format of so-called 'worked examples,' in which an expert records a problem solution for learner consumption. Cognitive load theory (CLT) has affirmed instructional worked-example interventions as more effective and efficient than problem solving during novice-phase skill acquisition. However, most worked-example studies examine pre/post performance on problem solving, with the worked example as the intervention, rather than studying student use of the worked example itself in detail. This study begins to address that gap in our understanding of how students use worked-example videos to support their problem solving. In this laboratory-based research, we studied the problem-solving processes of 24 students enrolled in a required sophomore-level mechanical engineering course. In the experiment, students were presented with a dynamics problem to solve and were provisioned with an equation sheet, an online calculator, and a video described as 'potentially useful.' Real-time data about each student's problem-solving process and use of the video were captured via a Livescribe smartpen and a Mirametrix eye-gaze capture system (which recorded their interactions with the video). Pre- and post-surveys about students' attitudes toward technology and their perceptions of task difficulty, along with academic transcript information, are also included in the data set. Experimental videos and transcripts were coded for themes, and data about both task efficiency and task performance were extracted from the experimental evidence. Taken together, the results suggest that student usage of video resources can be broadly described by several archetypes, although in this study successful problem solution was possible regardless of archetype. These results will continue to inform academic coaching of students in our classes about optimal use of video resources.

Introduction

Assessments in sophomore-level mechanical engineering courses such as statics, dynamics, and thermodynamics often emphasize problem solving, and instruction is usually oriented around problem-solving approaches and examples. In the last 10 years, instructional supports in the form of worked-example videos have become quite common, for two reasons. First, authoring tools for video creation continue to increase in power and ease of use while simultaneously dropping in price. Second, research on the worked-example effect continues to support the notion that video-based worked examples can be effective instructional supports for novice learners. The coalescence of these two factors has led to the ubiquity of instructional videos available online across a huge range of topics.

The research on the effectiveness of worked examples is persuasive. The worked-example effect is a learning effect predicted by cognitive load theory (CLT). Worked examples are among the strongly guided instructional strategies that reduce cognitive load in novices, who learn by observing experts solving problems. When used as part of instruction, worked examples improve learning during skill acquisition more than many other techniques.
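As a concrete (and entirely hypothetical) illustration of how the eye-gaze recordings mentioned in the abstract above might be reduced to simple measures of video usage, consider the following minimal Python sketch. The sample format, the on-screen video rectangle, and all function names are assumptions made for illustration; they are not taken from the study's actual pipeline.

from typing import List, Tuple

Sample = Tuple[float, float, float]  # (timestamp in seconds, x px, y px)

# Assumed on-screen rectangle occupied by the video player, in pixels.
LEFT, TOP, RIGHT, BOTTOM = 100.0, 80.0, 740.0, 440.0

def fraction_on_video(samples: List[Sample]) -> float:
    """Fraction of gaze samples that fall inside the video rectangle."""
    if not samples:
        return 0.0
    hits = sum(1 for _, x, y in samples
               if LEFT <= x <= RIGHT and TOP <= y <= BOTTOM)
    return hits / len(samples)

def video_visits(samples: List[Sample], gap_s: float = 0.5) -> int:
    """Count separate visits to the video: runs of in-rectangle samples
    separated by more than gap_s seconds of looking elsewhere."""
    visits, last_hit = 0, None
    for t, x, y in samples:
        if LEFT <= x <= RIGHT and TOP <= y <= BOTTOM:
            if last_hit is None or t - last_hit > gap_s:
                visits += 1
            last_hit = t
    return visits

# e.g. fraction_on_video([(0.0, 300, 200), (0.1, 900, 500)]) -> 0.5

Per-trial metrics of this kind could then be set alongside the task-efficiency and task-performance codes, which is the spirit of the archetype analysis the abstract describes.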
The cognitive loads within a learner's working memory are induced by tasks, performance, and the mental effort invested. Using well-structured, multimedia-oriented instructional designs can reduce learners' extraneous cognitive load. Furthermore, learners use separate processing systems to process visual (pictorial) and auditory (verbal) representations of information. Mayer developed a cognitive theory of multimedia learning that emphasizes clean design, complementary aural and visual information, and careful attention to the learner's cognitive load. The model also demonstrates the influence that learning motivation and cognitive load have on performance during the learning process.

There is currently no widely agreed-upon approach to measuring cognitive load in an experimental environment; the four common methods presented in the literature all have affordances and drawbacks. There are a variety of indirect measures related to, for instance, task performance, although they often suffer from confounding factors such as the experience of the learner and are therefore challenging to interpret. There are also secondary-task designs, in which response time to an external stimulus (say, a request to click an on-screen button during completion of a load-inducing task) serves as a proxy for load, but their experimental designs can be complicated. There is a huge range of physiological indicators as well, including heart rate and neural activity/EEG, but these approaches suffer from complicated experimental designs and, in some cases, prohibitive cost. One other approach, the one we use in this research, is a post-test subjective rating scale. In particular, we use a modified version of the NASA Task Load Index (NASA-TLX) that is easy to administer, easy for test subjects to complete, and requires very little time. Balanced against the already complex experimental design used here (described later), this brief and convenient measurement of workload was the best choice for our work. The NASA-TLX was initially developed for a broad range of tasks, including physically intensive ones. We have removed the TLX items related to physical exertion but have otherwise used the instrument as originally developed (a scoring sketch appears after the research questions below).

While the effectiveness of worked examples has been established, we do not yet fully understand exactly how students integrate worked examples into their study practices. Prior studies have largely viewed the worked example as an intervention to support learning, with study metrics related to pre-/post-gains in understanding or ability. We are interested in the details of how students use worked examples to solve problems, and there is a gap in our current understanding of this facet of worked-example instruction. This gap in the literature inspires the broader research we are conducting, as well as the specific research questions considered in this paper:

• RQ1: What are the necessary components of a laboratory experiment designed to probe student usage of worked examples in support of problem solving? Working hypothesis: we expect that real-time, video-based data, supplemented with pre- and post-surveys, will yield the most persuasive evidence about worked-example use.

• RQ2: To what extent do key metrics derived from the experiment predict academic performance on the example problem, or in the corresponding class?
Working hypothesis: student usage of worked examples falls into several archetypes (i.e., usage patterns), but success in the experiment or in the corresponding class is possible regardless of worked-example usage archetype.

This paper describes findings from our first set of experiments designed to answer these two research questions.
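As promised above, here is a minimal sketch of how the modified task load index could be scored. It assumes the common 'raw TLX' convention (an unweighted mean of 0-100 subscale ratings) with the physical-demand item removed, as described earlier; the subscale keys, function name, and example values are illustrative assumptions, not materials from the study.

# Retained NASA-TLX subscales after dropping physical demand.
SUBSCALES = (
    "mental_demand",
    "temporal_demand",
    "performance",    # conventionally reverse-anchored (0 = perfect)
    "effort",
    "frustration",
)

def raw_tlx(ratings: dict) -> float:
    """Overall workload as the unweighted mean of the retained subscales."""
    missing = [s for s in SUBSCALES if s not in ratings]
    if missing:
        raise ValueError(f"missing subscale ratings: {missing}")
    return sum(float(ratings[s]) for s in SUBSCALES) / len(SUBSCALES)

# Example with hypothetical post-task ratings on a 0-100 scale:
print(raw_tlx({
    "mental_demand": 70,
    "temporal_demand": 40,
    "performance": 25,
    "effort": 65,
    "frustration": 30,
}))  # -> 46.0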
