Inter‐Rater Agreement on the Memory‐for‐Designs Test
Author(s) -
McIver David,
McLaren Susan A.,
Philip Alistair E.
Publication year - 1973
Publication title -
British Journal of Social and Clinical Psychology
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.479
H-Index - 92
eISSN - 2044-8260
pISSN - 0007-1293
DOI - 10.1111/j.2044-8260.1973.tb00866.x
Subject(s) - inter-rater reliability , statistics , psychology , variance , agreement , standard error , rating scale , audiology , mathematics , medicine
Inter‐rater agreement on the Memory‐for‐Designs Test was examined by comparing the scores given by each of three independent raters on the test protocols of 50 psychiatric patients. The test reproductions ranged from those seen to be error‐free to those obtaining maximum error scores. Analysis of variance showed disagreement between raters in the scoring of six designs and yielded intra‐class correlations for individual designs varying from 0.67 to 1.00. Reliability estimates for total test score were above 0.90, being similar to the findings of previous studies. In discussing the results it is suggested that these high correlations are misleading because of the very large number of error‐free reproductions obtained from a varied group of subjects such as those tested here. Disagreement between raters occurred on those designs with fewest error‐free reproductions. It is argued that agreement on degree of error on reproductions is more important in this test than simple agreement of an error/no error kind. To achieve this end the scoring criteria must be improved, or alternatively, a simpler system of scoring should be introduced.
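The abstract does not give the exact ANOVA formulation the authors used, but the design it describes (every rater scores every protocol) is the classic setting for a two-way random-effects intra-class correlation. A minimal sketch of how such an ICC (single-rater, absolute agreement, often labelled ICC(2,1)) can be computed from the ANOVA mean squares is shown below; the `demo` scores are hypothetical, not data from the study.

```python
import numpy as np

def icc_2_1(scores):
    """Two-way random-effects, single-rater, absolute-agreement ICC.

    scores: (n_subjects, k_raters) array, e.g. error scores given by
    each rater to each test protocol.
    """
    scores = np.asarray(scores, dtype=float)
    n, k = scores.shape
    grand = scores.mean()
    row_means = scores.mean(axis=1)   # one mean per subject
    col_means = scores.mean(axis=0)   # one mean per rater
    # Sums of squares for subjects (rows), raters (columns), residual.
    ss_rows = k * ((row_means - grand) ** 2).sum()
    ss_cols = n * ((col_means - grand) ** 2).sum()
    ss_total = ((scores - grand) ** 2).sum()
    ss_err = ss_total - ss_rows - ss_cols
    # Corresponding mean squares.
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    # Shrout–Fleiss ICC(2,1): between-subject variance relative to
    # total variance, including rater variance as a disagreement term.
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n
    )

# Hypothetical error scores: 5 protocols, 3 raters (rows = protocols).
demo = [[0, 0, 0], [1, 1, 2], [3, 3, 3], [0, 1, 0], [2, 2, 2]]
print(round(icc_2_1(demo), 2))  # prints 0.92
```

Note how the rows dominated by identical scores (such as the error-free protocol) inflate the coefficient: this is the mechanism behind the authors' point that many error-free reproductions in a varied sample can make agreement look stronger than it is on the harder designs.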
