Key‐feature questions for assessment of clinical reasoning: a literature review
Author(s) -
Hrynchak Patricia,
Glover Takahashi Susan,
Nayer Marla
Publication year - 2014
Publication title -
Medical Education
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.776
H-Index - 138
eISSN - 1365-2923
pISSN - 0308-0110
DOI - 10.1111/medu.12509
Subject(s) - medical education , medicine , psychology , medline
Objectives Key‐feature questions (KFQs) have been developed to assess clinical reasoning skills. The purpose of this paper is to review the published evidence on the reliability and validity of KFQs for assessing clinical reasoning.

Methods A literature review was conducted by searching MEDLINE (1946–2012) and EMBASE (1980–2012) via OVID, and ERIC. The following search terms were used: key feature; question or test or tests or testing or tested or exam; assess or evaluation; and case‐based or case‐specific. Articles not in English were excluded.

Results The literature search returned 560 articles. After duplicates and irrelevant articles were eliminated, nine articles containing reliability or validity data remained. A review of the references and citations of these articles yielded an additional 12 articles, giving a total of 21 articles for this review. The format, language and scoring of KFQ examinations have been studied and modified to maximise reliability. Internal consistency reliability has been reported as ranging from 0.49 to 0.95. Face and content validity have been shown to be moderate to high. Construct validity has been shown to be good using thinking‐process and novice versus expert paradigms, and KFQ examinations have been shown to discriminate between teaching methods. The very modest correlations between KFQ examinations and more general knowledge‐based examinations point to differing roles for each. Importantly, the results of KFQ examinations have been shown to predict future physician performance, including patient outcomes.

Conclusions Although it is inaccurate to conclude that any testing format is universally reliable or valid, the published research supports the use of examinations using KFQs to assess clinical reasoning. The review identifies areas for further study, spanning all categories of validity evidence. Investigation into how examinations using KFQs integrate with other methods in a system of assessment is needed.