
Peer Grading in a MOOC: Reliability, Validity, and Perceived Effects
Author(s) - Hui Luo, Anthony C. Robinson, Jae Young Park
Publication year - 2014
Publication title - Online Learning
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.182
H-Index - 49
eISSN - 2472-5749
pISSN - 2472-5730
DOI - 10.24059/olj.v18i2.429
Subject(s) - grading (engineering), empirical research, massive open online course, computer science, psychology, mathematics education, world wide web, engineering, statistics, civil engineering, mathematics
Peer grading offers a scalable and sustainable way to provide assessment and feedback to a massive student population, and it has been used in massive open online courses (MOOCs) on the Coursera platform. However, there is currently little empirical evidence supporting the credentials of peer grading as a learning assessment method in the MOOC context. To address this research need, this study examined 1825 peer grading assignments collected from a Coursera MOOC in order to investigate the reliability and validity of peer grading as well as its perceived effects on students' MOOC learning experience. The empirical findings showed that aggregating the ratings of student graders produced peer grading scores that were fairly consistent and highly similar to the instructor's grading scores. Student responses to a survey also showed that the peer grading activity was well received: the majority of MOOC students believed it was fair, useful, and beneficial, and would recommend including it in future MOOC offerings. Based on these empirical results, the study concludes with a set of principles for designing and implementing peer grading activities in the MOOC context.
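The abstract's central claim is that aggregating several peer ratings per assignment yields scores that closely track the instructor's scores. The sketch below illustrates that idea only in outline: the assignment IDs, ratings, aggregation rule (median), and validity check (Pearson correlation) are illustrative assumptions, not the study's actual data or procedure.

```python
from statistics import median, mean

# Hypothetical data: each assignment is rated by several peers, and a subset
# is also graded by the instructor (all values invented for illustration).
peer_ratings = {
    "a1": [8, 9, 7, 8],
    "a2": [5, 6, 6, 4],
    "a3": [9, 10, 9, 9],
}
instructor_scores = {"a1": 8, "a2": 5, "a3": 9}

# Aggregate each assignment's peer ratings into a single peer grading score.
# The median is one robust choice; the study's exact aggregation rule may differ.
aggregate = {aid: median(ratings) for aid, ratings in peer_ratings.items()}

def pearson(xs, ys):
    """Pearson correlation, used here as a simple peer-vs-instructor validity check."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

ids = sorted(instructor_scores)
r = pearson([aggregate[i] for i in ids], [instructor_scores[i] for i in ids])
print(f"peer-instructor correlation: {r:.2f}")
```

A correlation near 1 would indicate, on this toy data, that aggregated peer scores move in step with instructor scores; the study itself reports its reliability and validity evidence in full.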