Evaluation of a Structured Application Assessment Instrument for Assessing Applications to Canadian Postgraduate Training Programs in Emergency Medicine
Author(s) - Glen Bandiera, Glenn Regehr
Publication year - 2003
Publication title - Academic Emergency Medicine
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.221
H-Index - 124
eISSN - 1553-2712
pISSN - 1069-6563
DOI - 10.1111/j.1553-2712.2003.tb00041.x
Subject(s) - inter-rater reliability, medicine, Cronbach's alpha, reliability (semiconductor), medical physics, curriculum, cohort, medical education, psychometrics, statistics, psychology, rating scale, pathology, clinical psychology, pedagogy, power (physics), physics, mathematics, quantum mechanics
Objective: To determine the interrater reliability and predictive validity of a structured instrument for assessing applications submitted to a Fellow of the Royal College of Physicians of Canada (FRCP) emergency medicine residency program.

Methods: An application assessment instrument was derived from faculty and resident input, institutional and national documents, and previous protocols. The instrument provided a score based on objective anchors for each of four application components. Three assessors were introduced to the instrument in a detailed tutorial session, then given five applications to score; results were compared to confirm a shared understanding of the scoring principles. The instrument was used in a developmental pilot to assess the 2001 cohort of applications and was then revised. Applications for the 2002 study cohort were submitted through a central application service, and assessors used the instrument to score each application independently. Interrater reliability was determined by calculating a two-way mixed-effects Cronbach's alpha.

Results: Forty applications were received for 2002. Thirty-eight application packages were complete, and data collection was complete for all 38. The single-rater reliabilities for the curriculum vitae, personal letter, transcript, reference letters, and overall package were 0.73, 0.52, 0.64, 0.61, and 0.72, respectively. The three-rater reliabilities for the four components were 0.89, 0.77, 0.84, and 0.82, respectively; the three-rater reliability of the overall application score was 0.89.

Conclusions: Three-rater reliabilities for each component and for the entire application package were high. Multiple assessors are required to generate acceptable reliabilities, and strict design and implementation principles can yield a reliable instrument for assessing complex application packages.
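As a quick check (not part of the published analysis), the step from single-rater to multi-rater reliability follows the standard Spearman-Brown prophecy formula, r_k = k*r / (1 + (k-1)*r). A minimal Python sketch, using only the values reported above, reproduces the three-rater figures to within rounding:

# Spearman-Brown prophecy: reliability of the mean of k raters,
# given a single-rater reliability r.
def spearman_brown(r: float, k: int) -> float:
    return k * r / (1 + (k - 1) * r)

# Single-rater reliabilities reported in the Results.
single_rater = {
    "curriculum vitae": 0.73,
    "personal letter": 0.52,
    "transcript": 0.64,
    "reference letters": 0.61,
    "overall package": 0.72,
}

for component, r1 in single_rater.items():
    r3 = spearman_brown(r1, k=3)
    print(f"{component:18s} single-rater = {r1:.2f}  three-rater = {r3:.2f}")

# Agrees with the published three-rater values
# (0.89, 0.77, 0.84, 0.82, 0.89) to within rounding.

This also makes the conclusion concrete: with the personal letter's single-rater reliability of 0.52, two raters average only about 0.68, and roughly three raters are needed before that component reaches the commonly cited 0.7-0.8 range.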
