Board # 147 : Go With Your Gut! – Using Low-Time-Investment Evaluations of Student Work for Identifying High versus Low Quality Responses
Author(s) - Matthew Verleger
Publication year - 2018
Language(s) - English
Resource type - Conference proceedings
DOI - 10.18260/1-2--27766
Subject(s) - computer science , categorization , quality , matching (statistics) , reliability , data science , knowledge management , artificial intelligence , engineering , statistics
Background - Peer review can be a beneficial pedagogical tool, providing students with both feedback and varied perspectives on their work. Even so, the most common mechanism for assigning reviewers to reviewees remains blind random assignment. Better mechanisms surely exist, but they necessarily rely on some prior knowledge about the work being reviewed.
Purpose (Hypothesis) - This paper presents findings from an effort to classify student team performance on Model-Eliciting Activities (MEAs) using a trained reviewer's gut instinct about the quality of the work.
Design/Method - MEAs are realistic, open-ended, client-driven engineering problems in which teams of students produce a written document describing the steps for solving the problem. Using an archival data set, nearly 450 MEA solutions were evaluated by two trained student researchers, spending approximately two minutes per solution. Their evaluations are compared against other, more detailed analyses of the same solutions to determine whether they are sufficiently accurate to serve as baseline data for peer review matching decisions, at a comparatively minuscule investment of time.
Results - Both researchers performed less accurately than computer-based classification but were largely consistent with the more detailed evaluations conducted by teaching assistants.
Conclusion - Gut-reaction-based classification was not wholly sufficient to meet the needs of informed peer review matching. The results may still be useful as an additional data source for computer-based classification, reducing the amount of training required or increasing accuracy.
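The comparison the abstract describes, checking quick "gut" quality labels against more detailed reference evaluations, is typically quantified with agreement statistics. The sketch below is purely illustrative (the labels are invented, not the paper's data) and computes percent agreement and Cohen's kappa for two lists of high/low quality labels.

```python
# Illustrative sketch with hypothetical data: measuring how well a
# reviewer's quick "gut" labels agree with detailed reference evaluations.
from collections import Counter

def percent_agreement(a, b):
    """Fraction of items on which the two label lists agree."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    n = len(a)
    po = percent_agreement(a, b)                 # observed agreement
    ca, cb = Counter(a), Counter(b)
    pe = sum(ca[k] * cb[k] for k in ca) / (n * n)  # chance agreement
    return (po - pe) / (1 - pe)

# Hypothetical labels for eight MEA solutions.
gut      = ["high", "low", "high", "high", "low", "low", "high", "low"]
detailed = ["high", "low", "low",  "high", "low", "low", "high", "high"]

print(percent_agreement(gut, detailed))  # 0.75
print(cohens_kappa(gut, detailed))       # 0.5
```

Kappa is the more informative of the two here, since with only two quality classes a quick rater can reach high percent agreement by chance alone.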