Assessment in general practice: the predictive value of written‐knowledge tests and a multiple‐station examination for actual medical performance in daily practice
Author(s) -
Paul Ram,
Cees van der Vleuten,
Jan-Joost Rethans,
Berna Schouten,
Sjoerd Hobma,
Richard Grol
Publication year - 1999
Publication title -
Medical Education
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.776
H-Index - 138
eISSN - 1365-2923
pISSN - 0308-0110
DOI - 10.1046/j.1365-2923.1999.00280.x
Subject(s) - predictive value , predictive validity , objective structured clinical examination , medicine , psychology , statistics , medical education , clinical psychology
This study compares the predictive values of written-knowledge tests and a standardized multiple-station examination for the actual medical performance of general practitioners (GPs), in order to select effective assessment methods for use in quality-improvement activities. A comprehensive assessment was performed in four phases. First, 100 GPs from the southern part of the Netherlands were assessed by a general medical knowledge test and by a knowledge test on technical skills. Second, to check for time-order effects, participants were randomly divided into two groups of 50 each, comparable on scores of both knowledge tests and on professional characteristics. Finally, both groups went through a multiple-station examination using standardized patients and a practice video assessment of real surgery, but in opposite orders. Consultations were videotaped and assessed by well-trained peer observers. The drop-out rate was 10%. In both groups, the predictive value of the medical knowledge tests, ranging from 0.43 to 0.56 (disattenuated Pearson correlations), proved comparable with the predictive value of the multiple-station examination for actual performance (0.33–0.59). The overall explained variance in practice video assessment scores, measured by multiple regression analysis with performance scores as dependent variables and scores on the knowledge tests and the multiple-station examination as independent variables, was moderate (19%). A time-order effect appeared in only one direction: from practice video assessment to the multiple-station examination. The GPs' professional characteristics did not contribute to explaining variation in performance. Medical knowledge tests can predict actual clinical performance to the same extent as a multiple-station examination. Compared with a station examination, a knowledge test may be a good alternative for assessing the performance of large numbers of practising GPs.
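The correlations reported above are "disattenuated", i.e. corrected for the measurement unreliability of both instruments using Spearman's classic correction for attenuation: the observed correlation is divided by the square root of the product of the two reliabilities. A minimal sketch of that calculation is shown below; the numeric inputs are hypothetical illustrations, not values taken from the study.

```python
import math

def disattenuated_correlation(r_xy: float, r_xx: float, r_yy: float) -> float:
    """Spearman's correction for attenuation.

    r_xy -- observed correlation between the two measures
    r_xx -- reliability of measure x (e.g. a knowledge test)
    r_yy -- reliability of measure y (e.g. videotaped performance ratings)
    Returns the estimated correlation between the true scores.
    """
    return r_xy / math.sqrt(r_xx * r_yy)

# Hypothetical example: an observed correlation of 0.35 with
# instrument reliabilities of 0.80 and 0.75.
corrected = disattenuated_correlation(0.35, 0.80, 0.75)
print(round(corrected, 2))  # prints 0.45
```

Because reliabilities are below 1, the corrected coefficient is always at least as large as the observed one, which is why disattenuated values such as the 0.43–0.56 range can exceed the raw correlations.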