Challenges to Informed Peer Review Matching Algorithms
Author(s) -
Verleger Matthew,
Diefes-Dux Heidi,
Ohland Matthew W.,
Besterfield-Sacre Mary,
Brophy Sean
Publication year - 2010
Publication title - Journal of Engineering Education
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 3.896
H-Index - 108
eISSN - 2168-9830
pISSN - 1069-4730
DOI - 10.1002/j.2168-9830.2010.tb01070.x
Subject(s) - random assignment, matching (statistics), algorithm, computer science, peer feedback, sample (material), psychology, mathematics education, mathematics, statistics, chemistry, chromatography
Background - Peer review is a beneficial pedagogical tool. Despite the abundance of data instructors often have about their students, most peer review matching is done by simple random assignment. In fall 2008, a study was conducted to investigate the impact of an informed algorithmic assignment method, called Un-weighted Overall Need (UON), in a course involving Model-Eliciting Activities (MEAs). The algorithm showed no statistically significant impact on MEA Final Response scores. A follow-up study was then conducted to examine the assumptions underlying the algorithm.

Purpose (Hypothesis) - This research addressed the question: To what extent do the assumptions used in making informed peer review matches (using the Un-weighted Overall Need algorithm) for the peer review of solutions to Model-Eliciting Activities decay?

Design/Method - An expert rater evaluated 147 teams' responses to a particular implementation of MEAs in a first-year engineering course at a large Midwestern research university. These evaluations were then used to test the UON algorithm's assumptions against a randomly assigned control group.

Results - Only weak support was found for the five assumptions underlying the UON algorithm:
1. Students complete assigned work.
2. Teaching assistants can grade MEAs accurately.
3. Accurate feedback in peer review is perceived by the reviewed team as more helpful than inaccurate feedback.
4. Teaching assistant scores on the first draft of an MEA can be used to accurately predict where teams will need assistance on their second draft.
5. The error a peer reviewer makes in evaluating a sample MEA solution is an accurate indicator of the error they will make when subsequently evaluating a real team's MEA solution.

Conclusions - Conducting informed peer review matching requires significant alignment between evaluators and experts to minimize deviations from the algorithm's designed purpose.
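The abstract names the UON matching method but does not spell out its mechanics, so the following Python sketch is purely illustrative. It assumes, in line with the assumptions listed above, that a team's overall need is an un-weighted sum of gaps in its TA-scored first draft and that a reviewer's expected accuracy is estimated from calibration error on a sample solution; the function names, score scale, and greedy neediest-team-to-most-accurate-reviewer pairing rule are all hypothetical, shown here alongside the simple random assignment such a method would replace.

    import random

    # Illustrative sketch only: the UON algorithm's actual mechanics are not
    # given in the abstract. Field names, the 0-4 score scale, and the greedy
    # pairing rule below are assumptions for demonstration purposes.

    def overall_need(ta_scores, max_score=4):
        """Un-weighted sum of per-dimension gaps between the TA's first-draft
        scores and the maximum score (one plausible reading of 'un-weighted
        overall need')."""
        return sum(max_score - s for s in ta_scores)

    def informed_match(teams, reviewers):
        """Pair the neediest teams with the most accurate reviewers.

        teams:     list of (team_id, ta_scores) tuples
        reviewers: list of (reviewer_id, calibration_error) tuples, where
                   calibration_error is the reviewer's absolute error on a
                   sample MEA solution (lower = presumed more accurate).
        """
        by_need = sorted(teams, key=lambda t: overall_need(t[1]), reverse=True)
        by_accuracy = sorted(reviewers, key=lambda r: r[1])
        return [(team[0], rev[0]) for team, rev in zip(by_need, by_accuracy)]

    def random_match(teams, reviewers, seed=None):
        """Baseline: the simple random assignment most peer review uses."""
        rng = random.Random(seed)
        shuffled = rng.sample(reviewers, len(reviewers))
        return [(team[0], rev[0]) for team, rev in zip(teams, shuffled)]

    if __name__ == "__main__":
        teams = [("T1", [2, 3, 1, 2]), ("T2", [4, 4, 3, 4]), ("T3", [1, 1, 2, 1])]
        reviewers = [("R1", 0.8), ("R2", 0.2), ("R3", 0.5)]
        print(informed_match(teams, reviewers))   # T3 (neediest) -> R2 (most accurate)
        print(random_match(teams, reviewers, seed=0))

Note that the study's results bear directly on a sketch like this: if TA first-draft scores do not predict second-draft need and calibration error does not predict real-review error, both sorting keys rest on weak signals, which is the paper's central caution about informed matching.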
