Evaluating Students’ Performance in Responding to Art: The Development and Validation of an Art Criticism Assessment Rubric
Author(s) - Tam Cheung On
Publication year - 2018
Publication title - International Journal of Art and Design Education
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.312
H-Index - 25
eISSN - 1476-8070
pISSN - 1476-8062
DOI - 10.1111/jade.12154
Subject(s) - rubric , criticism , art criticism , curriculum , psychology , visual arts education , mathematics education , creativity , pedagogy , art , visual arts , the arts , art history
This article reports on the development and validation of a rubric for assessing students' written responses to artworks. Since the implementation of the Hong Kong New Senior Secondary Curriculum in 2009, art educators have regarded responding to artworks as increasingly important. In this context, the Art Criticism Assessment Rubric (ACAR) was developed. On the basis of Feldman's and Geahigan's theories of art criticism, eight evaluation criteria were identified. The inter‐rater reliability (IRR) of the ACAR was then examined. A preliminary IRR test yielded an excellent intra‐class correlation coefficient (ICC) of .91. For the main study, six independent raters, divided into three pairs, were trained and invited to rate 87 art criticism essays written by students from eight secondary schools. Most dimensions of the ACAR achieved good ICC values, indicating that the ACAR is an acceptable rubric for providing a reliable assessment of students' written responses to artworks. However, two dimensions, 'Originality and Balanced Views' and 'Application of Aesthetic and Contextual Knowledge', obtained poor ICC values. This may be owing to the lack of consensus on the definition of originality and the raters' unfamiliarity with the concept of aesthetic knowledge. The researchers suggest that dimension‐specific sample responses, rated from high to low scores, be provided in rater training.
