Interrater reliability of the original and a revised scoring system for the Developmental Test of Visual‐Motor Integration
Author(s) -
Lepkin Sheila Ratsch,
Pryzwansky Walter B.
Publication year - 1983
Publication title -
Psychology in the Schools
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.738
H-Index - 75
eISSN - 1520-6807
pISSN - 0033-3085
DOI - 10.1002/1520-6807(198307)20:3<284::aid-pits2310200305>3.0.co;2-h
Subject(s) - interrater reliability , psychology , scoring system , reliability , test , ambiguity , developmental psychology , clinical psychology , applied psychology , rating scale
This study investigated the interrater reliability of teachers' and school psychology externs' scoring of protocols for the Developmental Test of Visual‐Motor Integration (VMI). Previous studies suggest that the VMI's scoring criteria are ambiguous and that this ambiguity, coupled with raters' lack of scoring experience and limited knowledge of testing issues, contributes to low interrater reliability. The original manual scoring system was used by four trained teachers with no VMI experience and by four experienced raters. A revised VMI scoring system, designed to eliminate ambiguous scoring criteria, was used by an additional four teachers inexperienced with the VMI and by four experienced raters. High reliability coefficients (>.90) were found for all raters, regardless of the scoring system employed. The influence on interrater reliability of factors such as training, the nature of the training setting, characteristics of the raters, and ambiguity of scoring criteria is discussed.