Automated and Scalable Assessment: Present and Future
Author(s) - Edward F. Gehringer
Publication year - 2015
Language(s) - English
Resource type - Conference proceedings
DOI - 10.18260/p.23609
Subject(s) - scalability , computer science , grading (engineering) , revenue , peer assessment , analytics , multimedia , the internet , world wide web , data science , mathematics education , psychology , engineering , database , civil engineering , accounting , business
A perennial problem in teaching is securing enough resources to adequately assess student work. In recent years, tight budgets have constrained the dollars available to hire teaching assistants. Concurrent with this trend, the rise of MOOCs has raised assessment challenges to a new scale. In MOOCs, it is necessary to give feedback to, and assign grades to, thousands of students who bring in no revenue. As MOOCs begin to credential students, accurate assessment will become even more important. These two developments have created an acute need for automated and scalable assessment mechanisms that can evaluate large numbers of students without a proportionate increase in costs. There are four main approaches to this kind of assessment: autograding, constructed-response analysis, automated essay scoring, and peer review. This paper examines the current status of these approaches and surveys new research on combinations of them to produce more reliable grading.
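Of the four approaches the abstract lists, autograding is the most mechanical: a student submission is executed against instructor-written test cases and scored by the fraction it passes. The sketch below is a minimal illustration of that idea under assumed names (`student_add`, `TEST_CASES`, `autograde` are hypothetical, not drawn from the paper).

```python
def student_add(a, b):
    # Stand-in for a student-submitted function.
    return a + b

# Each case pairs the arguments to call with the expected result.
TEST_CASES = [((1, 2), 3), ((0, 0), 0), ((-1, 1), 0)]

def autograde(func, cases):
    """Return the fraction of test cases the submission passes."""
    passed = 0
    for args, expected in cases:
        try:
            if func(*args) == expected:
                passed += 1
        except Exception:
            pass  # a crashing submission simply fails that case
    return passed / len(cases)

score = autograde(student_add, TEST_CASES)  # 1.0 for a correct submission
```

Real autograders add sandboxing, time limits, and partial-credit rubrics on top of this core loop; the scalability the paper discusses comes from the fact that the marginal cost per additional student is near zero.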
