The reliability of assessment criteria for undergraduate medical students' communication skills portfolios: the Nottingham experience
Author(s) - Rees, Charlotte E.; Sheard, Charlotte E.
Publication year - 2004
Publication title - Medical Education
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.776
H-Index - 138
eISSN - 1365-2923
pISSN - 0308-0110
DOI - 10.1111/j.1365-2923.2004.01744.x
Subject(s) - summative assessment, intraclass correlation, inter-rater reliability, reliability (semiconductor), psychology, Cohen's kappa, kappa, portfolio, medical education, applied psychology, statistics, psychometrics, formative assessment, mathematics education, medicine, rating scale, clinical psychology, mathematics, developmental psychology, power (physics), physics, geometry, quantum mechanics, financial economics, economics
Introduction: Some educators have argued that portfolios should not be assessed summatively because there is little evidence supporting the reliability of their assessment. This study aims to determine the reliability of the assessment criteria used for a portfolio at the University of Nottingham.

Methods: Two independent analysts assessed a random sample of portfolios (n = 100, 49.5%) using criterion-referenced assessment. Students' performances were examined against subjective items in five areas: (1) portfolio structure; (2) level of critical reflection; (3) level of skills development; (4) use of documentary evidence; and (5) use of relevant literature. These subjective judgements were later converted into quantitative scales ranging from 0 to 3 so that inter-rater reliability could be established. The level of agreement between the two analysts was measured with an intraclass correlation coefficient for the total percentage score and with weighted kappa coefficients for the individual items.

Results: The intraclass correlation coefficient for the total percentage score was 0.771 (95% CI 0.678 to 0.840). Agreement between the two raters on the individual items of the assessment criteria ranged from κ = 0.359 (item 3) to κ = 0.693 (item 4).

Discussion: This study provides some support for the summative assessment of portfolios. The findings suggest that discussion and negotiation between independent assessors can enhance the reliability of assessment criteria; medical educators are therefore encouraged to use such procedures in the summative assessment of portfolios.
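For readers unfamiliar with the item-level statistic, the weighted kappa is conventionally defined as below. This is the standard Cohen formulation stated for context; the abstract does not reproduce the formula or name the weighting scheme, so the linear weights shown here are an assumption.

$$
\kappa_w = 1 - \frac{\sum_{i=1}^{c}\sum_{j=1}^{c} w_{ij}\,o_{ij}}{\sum_{i=1}^{c}\sum_{j=1}^{c} w_{ij}\,e_{ij}},
\qquad w_{ij} = \frac{\lvert i - j\rvert}{c - 1},
$$

where c = 4 for the 0 to 3 scales described in the Methods, o_{ij} is the observed proportion of portfolios scored i by one rater and j by the other, and e_{ij} is the proportion expected by chance from the raters' marginal score distributions. The weights penalise a 0-versus-3 disagreement more heavily than a near-miss.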
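As a concrete illustration of the two reliability analyses described in the Methods, here is a minimal Python sketch. It is not the authors' code: the data are simulated, the ICC variant (Shrout and Fleiss ICC(2,1), two-way random effects, absolute agreement, single rater) is an assumption because the abstract does not state which formulation was used, and icc2_1 is a helper written for this example.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

def icc2_1(scores):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.
    `scores` has shape (n_subjects, n_raters)."""
    n, k = scores.shape
    grand = scores.mean()
    ms_rows = k * ((scores.mean(axis=1) - grand) ** 2).sum() / (n - 1)
    ms_cols = n * ((scores.mean(axis=0) - grand) ** 2).sum() / (k - 1)
    ss_err = (((scores - grand) ** 2).sum()
              - (n - 1) * ms_rows - (k - 1) * ms_cols)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err
                                 + k * (ms_cols - ms_err) / n)

rng = np.random.default_rng(0)

# Hypothetical total percentage scores from two independent analysts
# for 100 portfolios (mirroring the sample size in the abstract).
true_quality = rng.uniform(40, 90, size=100)
rater_a = true_quality + rng.normal(0, 5, size=100)
rater_b = true_quality + rng.normal(0, 5, size=100)
print("ICC(2,1):", round(icc2_1(np.column_stack([rater_a, rater_b])), 3))

# Hypothetical ratings for one item on the 0-3 ordinal scale; linear
# weights make a 0-vs-3 disagreement count more than a 2-vs-3 one.
item_a = rng.integers(0, 4, size=100)
item_b = np.clip(item_a + rng.integers(-1, 2, size=100), 0, 3)
print("weighted kappa:",
      round(cohen_kappa_score(item_a, item_b, weights="linear"), 3))
```

In practice the two raters' item scores would come from the criterion-referenced judgements described above rather than simulation; the sketch only shows how the two agreement statistics are computed once those 0 to 3 scores exist.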
