Open Access
Adaptive Comparative Judgment in Graphics Applications and Education
Author(s) - Scott Bartholomew, Patrick Connolly
Publication year - 2018
Language(s) - English
Resource type - Conference proceedings
DOI - 10.18260/1-2--27537
One of the fundamental advantages of Adaptive Comparative Judgment (ACJ) is that it is easier and more accurate to judge a series of products comparatively, and to develop a rank order of achievement from those judgments, than it is to score products with a rubric or other more subjective method. Research in the field of comparative judgment has shown very high levels of reliability and close correlations between this assessment methodology and traditional grading approaches. The approach appears to be effective across varying levels of rigor and academic achievement. Studies have examined adaptive comparative judgment techniques in academic areas such as writing/composition, science education, and geography instruction. The areas of design and technology have proven to be especially effective topics for ACJ assessment and are of special interest to the authors. This introductory paper examines the fundamental principles of comparative judgment and adaptive comparative judgment, and discusses some of the most recent and relevant research on the topic. Key web-based ACJ tools and products are briefly reviewed, especially as they relate to academic settings. Applications in portfolio evaluation, graphics assessment, and peer critiquing are also explored. Adaptive comparative judgment has proven to be an assessment method that faculty can learn relatively quickly and apply to a wide variety of topics and assignment types. ACJ appears to have a promising future in design and graphics applications.

The Problem with Open-ended Problems

Open-ended problems, a hallmark of many academic areas, are commonly employed in classrooms as a means of eliciting creativity (Kimbell, 2007), challenging students (Katehi, Pearson, & Feder, 2009), and fostering interest and engagement (Neal, 2011).
The ability to work in and with ill-structured scenarios is a highly sought-after skill among today's employers (Partnership, 2011; Resnick, Monroy-Hernandez, Rusk, Eastmond, Brennan, ... & Kafai, 2009). However, open-ended problems burden teachers with a very difficult assessment task: the inherently large number of possible solutions, the myriad steps to completion, and the space for creativity in each assignment have proven very difficult to assess with validity and reliability (Kimbell, 2007; Pollitt, 2004). More potential for creativity by students can lead to a "messier" assessment scenario for teachers as the range of possible solutions widens (Pollitt & Crisp, 2004). Movements toward rubrics, criterion-based approaches, and technology-enhanced methodologies have all been lauded as potential ways to alleviate some of the difficulties inherent in assessing open-ended problem solving (Kimbell, 2007, 2012a; Denson, Buelin, Lammi, & D'Amico, 2015; Schilling & Applegate, 2012). Despite these approaches, there is still no consensus on how best to assess open-ended problems with validity, reliability, and efficiency (Pollitt, 2012a). Teacher/grader bias, subjectivity, and the many possible solutions inherent in open-ended problems make a purely rubric- or criterion-based approach difficult to implement with fidelity, especially when more than one teacher/grader is involved in the assessment process (Bartholomew, 2017; Pollitt, 2004).

Adaptive Comparative Judgment

In 1927 Louis L. Thurstone published a paper on the law of comparative judgment. Thurstone (1927) argued that while humans have great difficulty making absolute judgments of quality with validity and reliability, we are much more adept at making comparative judgments: judgments of quality between two items.
In the early 2000s two researchers in the United Kingdom, Alastair Pollitt and Richard Kimbell, began to leverage Thurstone's ideas of comparative judgment in an effort to alleviate some of the difficulty of assessing open-ended design problems (Kimbell, 2012a). Rather than using a rubric or criterion guide to tally points for students' assignments, a teacher acts as a judge and simply makes a comparative judgment between two pieces of student work: viewing two items and choosing the better of the two based on the judge's own expertise and a predetermined rubric or criteria. As a teacher repeats this process, making comparative judgments between different pairs of items, each item of student work rises or falls in the overall rank order based on its performance. Over time each piece of student work accumulates a "win-loss" record: each time an item is chosen over another it counts as a "win," while not being chosen when paired with another item counts as a "loss" (Pollitt, 2004, 2012a, 2012b). Recent advances in technology have facilitated the creation of multiple adaptive comparative judgment (ACJ) software applications and platforms. These systems streamline the ACJ process: teachers simply view two items on a computer screen and choose the better of the two. Technology advancements have also opened the door for multiple types of student work (e.g., images, documents, audio/video recordings) to be compared using ACJ. Starting with Kimbell and Pollitt's work and moving forward, ACJ has been piloted, tested, and refined over time, and the algorithm that facilitates the judgments has been improved (Pollitt, 2004, 2012b).
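The "win-loss" bookkeeping described above can be sketched in a few lines of Python. This is a toy illustration only, not the production ACJ algorithm (the systems discussed here fit a statistical model to the judgments rather than counting raw wins); the function and item names are invented for the example.

```python
from collections import defaultdict

def record_judgments(judgments):
    """Tally a "win-loss" record from pairwise judgments.

    judgments: list of (winner, loser) tuples, one per comparison
               a judge makes between two pieces of student work.
    Returns the items sorted best-first by wins, with losses
    breaking ties, i.e. a simple rank order of achievement.
    """
    wins = defaultdict(int)
    losses = defaultdict(int)
    for winner, loser in judgments:
        wins[winner] += 1
        losses[loser] += 1
    items = set(wins) | set(losses)
    return sorted(items, key=lambda item: (-wins[item], losses[item]))

# Example: the judge prefers A over B, A over C, and B over C.
ranking = record_judgments([("A", "B"), ("A", "C"), ("B", "C")])
# ranking[0] is "A": it was chosen in every pairing it appeared in.
```

A raw win count like this only becomes trustworthy after many pairings per item; that is why repeated rounds of judging are central to the method.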
Comparative judgment becomes "adaptive" as the algorithm and technology tools progressively refine the pairings: rather than being paired at random, items with similar "win-loss" records are compared with one another, and the overall rank order becomes increasingly refined in terms of validity and reliability (Pollitt, 2004).
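The adaptive pairing step can likewise be sketched: instead of drawing random pairs, the next round matches items whose current win rates are closest. This is a simplified stand-in for the actual adaptive algorithm, which chooses pairings from model-based quality estimates; the names and data here are illustrative.

```python
def adaptive_pairs(records):
    """Propose the next round of pairings from current records.

    records: dict mapping item -> (wins, losses).
    Items are ordered by current win rate, and adjacent items in
    that ordering are paired, so pieces with similar records meet
    next; this refines the rank order faster than random pairing.
    """
    def win_rate(item):
        wins, losses = records[item]
        total = wins + losses
        return wins / total if total else 0.5  # unjudged items sit mid-table

    ordered = sorted(records, key=win_rate, reverse=True)
    # With an odd number of items, the last one sits out this round.
    return [(ordered[i], ordered[i + 1]) for i in range(0, len(ordered) - 1, 2)]

pairs = adaptive_pairs({"A": (2, 0), "B": (1, 1), "C": (1, 1), "D": (0, 2)})
# The strongest item is paired with the next strongest, and so on down.
```

Pairing near-equals is what makes each judgment maximally informative: a comparison whose outcome is already obvious from the records adds little to the rank order.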
