Performance‐based assessment in continuing medical education for general practitioners: construct validity
Author(s) - Jansen J J M, Scherpbier A J J A, Metz J C M, Grol R P T M, Vleuten C P M, Rethans J J
Publication year - 1996
Publication title - Medical Education
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.776
H-Index - 138
eISSN - 1365-2923
pISSN - 0308-0110
DOI - 10.1111/j.1365-2923.1996.tb00844.x
Subject(s) - competence (human resources) , test (biology) , construct validity , medical education , educational measurement , psychology , construct (python library) , achievement test , continuing medical education , predictive validity , medicine , psychometrics , standardized test , clinical psychology , continuing education , computer science , mathematics education , curriculum , social psychology , pedagogy , paleontology , biology , programming language
SUMMARY The use of performance‐based assessment has been extended to postgraduate education and practising doctors, despite criticism of its validity. While differences in expertise at this level are readily reflected in scores on a written test, these differences are relatively small on performance‐based tests. Nevertheless, scores on written tests and performance‐based tests of clinical competence generally show moderate correlations. A study was designed to evaluate the construct validity of a performance‐based test of technical clinical skills in continuing medical education for general practitioners, and to explore the correlation between performance on and knowledge of specific skills. A 1‐day skills training course covering four different technical clinical skills was given to 71 general practitioners. The effect of the training on performance was measured with a performance‐based test using a randomized controlled trial design, while the effect on knowledge was measured with a written test administered 1 month before and directly after the training. A training effect was demonstrated by the performance‐based test for all four clinical skills. The written test also demonstrated a training effect for all but one skill. However, correlations between scores on the written test and on the performance‐based test were low for all skills. It is concluded that the construct validity of a performance‐based test of technical clinical skills of general practitioners was demonstrated, while the knowledge test score was shown to be a poor predictor of competence in specific technical skills.