Objective structured assessment of technical skills and checklist scales reliability compared for high stakes assessments
Author(s) -
Gallagher Anthony G.,
O'Sullivan Gerald C.,
Leonard Gerald,
Bunting Brendan P.,
McGlade Kieran J.
Publication year - 2012
Publication title -
ANZ Journal of Surgery
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.426
H-Index - 70
eISSN - 1445-2197
pISSN - 1445-1433
DOI - 10.1111/j.1445-2197.2012.06236.x
Subject(s) - checklist , medicine , inter-rater reliability , competence (human resources) , reliability , self-assessment , medical education , rating scale , statistics , psychology
Background: The establishment of assessment reliability at the level of the individual trainee is an important attribute of assessment methodologies, particularly for doctors who fail the assessment. This issue is of particular importance for the process of competence assessment in the USA, UK, Australia and New Zealand.
Methods: We used data from 19 applicants for higher surgical training in 2008 at the Royal College of Surgeons in Ireland to compare: (i) the objective structured assessment of technical skills (OSATS) method; and (ii) a procedure-specific checklist, for the assessment of surgical technical skills on a sebaceous cyst excision task scored by two experienced senior surgeons.
Results: The overall inter-rater reliability (IRR) of the OSATS assessment was 0.507 (P < 0.03) as determined by a correlation coefficient, and 0.67 by coefficient alpha, considerably below the accepted 0.8 level of IRR. The checklist's overall IRR was 0.89. Individually, only five (26%) of the OSATS assessments reached the 0.8 level of IRR, in contrast to 18 (95%) of the checklist assessments.
Discussion: We propose binary procedure-based assessment checklists as more reliable assessment instruments with more robust reproducibility.