Investigating Variations in Instructor‐generated Feedback as a Mediating Factor for Student Learning
Author(s) - Erika Offerdahl, Jeff Boyer, Melody McConnell, Jennifer Momsen, Rachel Salter, Kurt Williams, Lisa Wiltbank
Publication year - 2017
Publication title - The FASEB Journal
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.709
H-Index - 277
eISSN - 1530-6860
pISSN - 0892-6638
DOI - 10.1096/fasebj.31.1_supplement.587.14
Subject(s) - formative assessment, active learning, psychology, mathematics education, experiential learning
Over 200 studies in undergraduate STEM have demonstrated the efficacy of active learning environments over lecture environments. Though generally efficacious, there are documented examples of active learning environments that did not produce significant differences in student learning. In these cases, variability in instructor pedagogical expertise and in the enactment of active learning pedagogies was a likely mediating factor. Though definitions of active learning vary, it is largely agreed that its two key features are (1) engagement of students in the learning process and (2) student construction of knowledge, as opposed to transmission of information to students. Active learning pedagogies are also inherently rich in formative assessment and feedback; engaging students in the learning process involves creating opportunities for students to test their understanding and receive feedback about their progress. Our overarching hypothesis is that variation in the efficacy of active learning interventions can be explained, in part, by variability in the implementation of formative assessment and feedback.

A number of instruments allow for measuring variability in instructor (and student) behaviors in undergraduate science courses. One of these, the Classroom Observation Protocol for Undergraduate STEM (COPUS), assigns instructors to one of ten profiles along an active learning continuum. Movement along the continuum reflects a decrease in the amount of time spent lecturing and a concomitant increase in the amount of time the instructor spends following up on in-class questions or activities (coded as FUp on the COPUS instrument). The COPUS and its associated profiles are useful for characterizing the degree to which instructors implement components of active learning, but they do not capture finer-grained nuances in enactment. In particular, the COPUS neither identifies when feedback (as opposed to other types of follow-up) is being provided nor allows that feedback to be characterized.

We present a new classroom observation instrument to characterize variation in instructors' feedback practices. We identified video-recorded teaching episodes with identical COPUS profiles and documented what instructors actually do after initiating a formative assessment cycle, that is, while they are coded as FUp. Our data reveal that the COPUS does not detect important differences in instructor-generated feedback: teaching episodes with similar amounts of FUp demonstrated statistically significant differences in the amount and quality of feedback provided. These results suggest that inferences about instruction based on COPUS profiles should be approached with caution; not all FUp is created equal. This is important in light of previous research showing that some feedback negatively impacts student learning (e.g., praise can decrease performance) and that differences in feedback preferentially impact different learners (e.g., delayed feedback is more effective for high-achieving students).

Support or Funding Information - NSF DUE 1431891
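As a rough illustration of how COPUS-style interval coding yields the profile-level summaries discussed above, the sketch below tallies the fraction of two-minute observation intervals carrying each instructor code (COPUS does record behaviors in two-minute intervals, and Lec/FUp are real COPUS codes, but the interval data, episode labels, and analysis workflow here are hypothetical, not the authors' instrument).

```python
from collections import Counter

# Hypothetical COPUS-style records: one set of instructor codes per
# 2-minute observation interval. "Lec" = lecturing; "FUp" = following
# up on an in-class question or activity.
episode_a = [{"Lec"}, {"Lec"}, {"Lec", "FUp"}, {"FUp"}, {"FUp"}, {"Lec"}]
episode_b = [{"Lec"}, {"FUp"}, {"FUp"}, {"Lec", "FUp"}, {"Lec"}, {"Lec"}]

def code_fractions(intervals):
    """Fraction of intervals in which each code appears at least once."""
    counts = Counter(code for interval in intervals for code in interval)
    return {code: n / len(intervals) for code, n in counts.items()}

for name, episode in [("A", episode_a), ("B", episode_b)]:
    fractions = code_fractions(episode)
    print(f"Episode {name}: "
          + ", ".join(f"{c}={f:.0%}" for c, f in sorted(fractions.items())))

# Both episodes have identical Lec/FUp fractions, so an interval-count
# summary would map them to the same COPUS profile -- even though the
# feedback given during the FUp intervals could differ in amount and
# quality, which is the kind of variation the abstract argues COPUS
# cannot detect.
```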