Open Access
Reliability of the Analytic Rubric and Checklist for the Assessment of Story Writing Skills: G and Decision Study in Generalizability Theory
Author(s) -
Nezaket Bilge Uzun,
Devrim Alıcı,
Mehtap Aktaş
Publication year - 2018
Publication title -
european journal of educational research
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.319
H-Index - 9
ISSN - 2165-8714
DOI - 10.12973/eu-jer.8.1.169
Subject(s) - generalizability theory , rubric , checklist , psychology , reliability (semiconductor) , peer assessment , variance (accounting) , task (project management) , inter rater reliability , variance components , mathematics education , facet (psychology) , statistics , applied psychology , social psychology , mathematics , cognitive psychology , developmental psychology , rating scale , power (physics) , physics , accounting , quantum mechanics , business , management , personality , economics , big five personality traits
The purpose of this study is to examine, by means of generalizability theory, the reliability of an analytic rubric and a checklist developed for the assessment of story writing skills. The study group consisted of 52 fifth-grade primary school students and 20 raters from Mersin University. The G study was carried out with the fully crossed h x p x g (story x rater x performance task) design, with the scoring key treated as a fixed facet. A decision (D) study was then carried out by varying the number of conditions of the task facet. For both scoring keys, the story source of variance accounted for a high percentage of variance among the main effects, while "hp" (the story-by-rater interaction) accounted for a high percentage among the two-way interaction effects. The largest variance component in the design was the "hpg" (story-by-rater-by-task) interaction, which may indicate sources of variability and error not included in the design. Examination of the G and phi coefficients calculated for both scoring keys showed that scoring with the analytic rubric is more reliable and generalizable. According to the decision studies, the number of tasks used in this study was found to be the most appropriate.
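The G and phi coefficients the abstract refers to can be computed directly from the estimated variance components of the crossed h x p x g design: the G (relative) coefficient divides the story variance by story variance plus relative error, while phi (absolute) additionally counts the facet main effects as error. The sketch below is illustrative only — the variance components are hypothetical placeholders, not values from the study — but the formulas follow standard generalizability theory for a two-facet crossed design with the story (h) as the object of measurement.

```python
# Illustrative sketch (not the authors' code): G and phi coefficients for a
# fully crossed story x rater x task (h x p x g) design, where the story (h)
# is the object of measurement.

def g_and_phi(vc, n_p, n_g):
    """Compute the relative (G) and absolute (phi) coefficients.

    vc  : dict of variance components for effects h, p, g, hp, hg, pg, hpg
    n_p : number of raters
    n_g : number of performance tasks
    """
    # Relative error: interactions involving the object of measurement (h)
    rel_err = vc["hp"] / n_p + vc["hg"] / n_g + vc["hpg"] / (n_p * n_g)
    # Absolute error additionally includes facet main effects and pg
    abs_err = rel_err + vc["p"] / n_p + vc["g"] / n_g + vc["pg"] / (n_p * n_g)
    g = vc["h"] / (vc["h"] + rel_err)
    phi = vc["h"] / (vc["h"] + abs_err)
    return g, phi

# Hypothetical variance components (for illustration only)
vc = {"h": 0.40, "p": 0.05, "g": 0.02,
      "hp": 0.15, "hg": 0.05, "pg": 0.03, "hpg": 0.30}

# D study: vary the number of tasks while keeping 20 raters, as in the study
for n_g in (1, 2, 3):
    g, phi = g_and_phi(vc, n_p=20, n_g=n_g)
    print(f"tasks={n_g}: G={g:.3f}, phi={phi:.3f}")
```

Increasing the number of tasks shrinks the error terms divided by n_g, so both coefficients rise with more tasks; a D study weighs that gain against the practical cost of administering additional tasks.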
