Development of knowledge tests for multi‐disciplinary emergency training: a review and an example
Author(s) -
SØRENSEN J. L.,
THELLESEN L.,
STRANDBYGAARD J.,
SVENDSEN K. D.,
CHRISTENSEN K. B.,
JOHANSEN M.,
LANGHOFF-ROOS P.,
EKELUND K.,
OTTESEN B.,
VAN DER VLEUTEN C.
Publication year - 2015
Publication title -
Acta Anaesthesiologica Scandinavica
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.738
H-Index - 107
eISSN - 1399-6576
pISSN - 0001-5172
DOI - 10.1111/aas.12428
Subject(s) - content validity, face validity, construct validity, reliability, psychometrics, multiple choice, competence (human resources), medical education, medicine, psychology
Background: The literature on written test development in post-graduate multi-disciplinary settings is sparse, and developing and evaluating knowledge tests for use in such training is challenging. The objective of this study was to describe the process of developing and evaluating a multiple-choice question (MCQ) test for use in a multi-disciplinary training program in obstetric-anesthesia emergencies.

Methods: A multi-disciplinary working committee with 12 members representing six professional healthcare groups, together with another 28 participants, was involved. The MCQ items were developed stepwise, including decisions on aims and content, followed by testing for face and content validity, construct validity, item-total correlation, and reliability. Recurrent revisions of the MCQ items were undertaken, followed by statistical analysis.

Results: To obtain acceptable content validity, 40 of the original 50 items were included in the final MCQ test. The test was able to distinguish between levels of competence: good construct validity was indicated by a significant difference in mean score between consultants and first-year trainees, as well as between first-year trainees and medical and midwifery students. Item-total correlation analysis of the 40-item set revealed that 11 items needed re-evaluation, four of which addressed content issues in local clinical guidelines. Cronbach's alpha for reliability was 0.83, which is acceptable.

Conclusion: Content validity, construct validity, and reliability were acceptable. The presented template for the development of this MCQ test could be useful to others when developing knowledge tests and may enhance the overall quality of test development.
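The abstract reports two standard psychometric statistics, Cronbach's alpha and item-total correlation, without showing how they are computed. The following is a minimal sketch of both calculations, assuming dichotomously scored (0/1) MCQ responses; the response matrix, its 38 x 40 dimensions, and the 0.2 screening threshold are invented for illustration and are not taken from the paper.

import numpy as np

# Hypothetical data: rows are respondents, columns are MCQ items (1 = correct).
rng = np.random.default_rng(0)
scores = (rng.random((38, 40)) < 0.7).astype(int)

def cronbach_alpha(x: np.ndarray) -> float:
    # alpha = (k / (k - 1)) * (1 - sum of item variances / variance of total score)
    k = x.shape[1]
    item_var = x.var(axis=0, ddof=1).sum()
    total_var = x.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var / total_var)

def corrected_item_total(x: np.ndarray) -> np.ndarray:
    # Correlate each item with the total score of the *remaining* items,
    # so an item is not correlated with itself.
    totals = x.sum(axis=1)
    return np.array([
        np.corrcoef(x[:, j], totals - x[:, j])[0, 1] for j in range(x.shape[1])
    ])

alpha = cronbach_alpha(scores)
r_it = corrected_item_total(scores)
flagged = np.where(r_it < 0.2)[0]  # 0.2 is a common screening cut-off, not the paper's
print(f"alpha = {alpha:.2f}; items flagged for review: {flagged}")

Items whose corrected item-total correlation falls below the chosen cut-off are candidates for re-evaluation, which mirrors the paper's step of flagging 11 of the 40 items for review.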