‘What would my classmates say?’ An international study of the prediction‐based method of course evaluation
Author(s) -
Schönrock-Adema Johanna,
Lubarsky Stuart,
Chalk Colin,
Steinert Yvonne,
Cohen-Schotanus Janke
Publication year - 2013
Publication title -
Medical Education
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.776
H-Index - 138
eISSN - 1365-2923
pISSN - 0308-0110
DOI - 10.1111/medu.12126
Subject(s) - medical education, psychology, medicine, mathematics education
Objectives: Traditional student feedback questionnaires are imperfect course evaluation tools, largely because they generate low response rates and are susceptible to response bias. Preliminary research suggests that prediction-based methods of course evaluation, in which students estimate their peers' opinions rather than provide their own personal opinions, require significantly fewer respondents to achieve comparable results and are less subject to biasing influences. This international study seeks further support for the validity of these findings by investigating: (i) the performance of the prediction-based method, and (ii) its potential for bias.

Methods: Participants (210 Year 1 undergraduate medical students at McGill University, Montreal, Quebec, Canada, and 371 Year 1 and 385 Year 3 undergraduate medical students at the University Medical Center Groningen [UMCG], University of Groningen, Groningen, the Netherlands) were randomly assigned to complete course evaluations using either the prediction-based or the traditional opinion-based method. The numbers of respondents required to achieve stable outcomes were determined using an iterative process. Differences between the methods in the number of respondents required were analysed using t-tests. Differences in evaluation outcomes between the methods, and between groups of students stratified by four potentially biasing variables (gender, estimated general level of achievement, expected test result, satisfaction after examination completion), were analysed using multivariate analysis of variance (MANOVA).

Results: Overall response rates in the three student cohorts ranged from 70% to 94%. The prediction-based method required significantly fewer respondents than the opinion-based method (averages of 26–28 and 67–79 respondents, respectively) across all samples (p < 0.001), whereas the outcomes achieved were fairly similar. Bias was found in four of the 12 opinion-based comparisons (three sites, four variables), and in only one comparison in the prediction-based condition.

Conclusions: Our study supports previous findings that prediction-based methods require significantly fewer respondents to achieve results comparable with those obtained through traditional course evaluation methods. Moreover, our findings support the hypothesis that prediction-based responses are less subject to bias than traditional opinion-based responses. These findings lend credence to the prediction-based approach as an accurate and efficient method of course evaluation.
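Note: the abstract states only that the numbers of respondents required for stable outcomes were "determined using an iterative process", without further detail. The Python sketch below illustrates one plausible such procedure, assuming a running-mean stability criterion; the tolerance, window and stopping rule are illustrative assumptions, not the authors' actual method.

import random
import statistics


def respondents_needed(scores, tolerance=0.1, window=3, seed=0):
    """Add randomly ordered respondents one at a time and return the sample
    size at which the running mean has changed by less than `tolerance`
    for `window` consecutive additions (hypothetical stability criterion)."""
    rng = random.Random(seed)
    shuffled = scores[:]
    rng.shuffle(shuffled)

    running = []
    stable_run = 0
    for n, score in enumerate(shuffled, start=1):
        running.append(score)
        if n < 2:
            continue
        prev_mean = statistics.mean(running[:-1])
        curr_mean = statistics.mean(running)
        if abs(curr_mean - prev_mean) < tolerance:
            stable_run += 1
            if stable_run >= window:
                return n
        else:
            stable_run = 0
    return len(shuffled)  # mean never stabilised within this sample


if __name__ == "__main__":
    rng = random.Random(1)
    # Simulated 5-point evaluation scores for 80 respondents (made-up data).
    simulated = rng.choices(range(1, 6), weights=[1, 2, 4, 6, 3], k=80)
    print("Respondents needed for a stable mean:", respondents_needed(simulated))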