The quality of a simulation examination using a high‐fidelity child manikin
Author(s) - Tsai TC, Harasym PH, Nijssen-Jordan C, Jennett P, Powell G
Publication year - 2003
Publication title - Medical Education
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.776
H-Index - 138
eISSN - 1365-2923
pISSN - 0308-0110
DOI - 10.1046/j.1365-2923.37.s1.3.x
Subject(s) - construct validity , test (biology) , face validity , competence (human resources) , fidelity , concordance , reliability (semiconductor) , psychology , inter rater reliability , rating scale , medical physics , computer science , applied psychology , medicine , psychometrics , clinical psychology , social psychology , developmental psychology , paleontology , telecommunications , power (physics) , physics , quantum mechanics , biology
Purpose
Developing quality examinations that measure physicians' clinical performance in simulations is difficult. The goal of this study was to develop a quality simulation examination using a high‐fidelity child manikin for evaluating paediatric residents' competence in managing critical cases in a simulated emergency room. Quality was determined by evidence of the reliability, validity and feasibility of the examination. In addition, the participants' responses regarding its realism, effectiveness and value are presented.

Method
Scenario scripts and rating instruments were carefully developed for this study. Experts validated the case scenarios and provided evidence of construct validity. Eighteen paediatric residents, working in pairs, participated in a manikin‐based simulation pre‐test, a training session and a post‐test. Three independent raters rated the participants' performance on task‐specific technical skills, medications used and behaviours displayed. At the end of the simulation, the participants completed an evaluation questionnaire.

Results
The manikin‐based simulation examination was found to be a realistic, valid and reliable tool. Validity (i.e. face, content and construct) of the test instrument was evident. Inter‐rater concordance on participants' clinical performance was good to excellent. Item analysis showed good to excellent internal consistency on all performance scores except the post‐test technical score.

Conclusions
With a carefully designed rating instrument and simulation operation, the manikin‐based simulation examination was shown to be reliable and valid. However, further refinement of the test instrument will be required for higher‐stakes examinations.