
An Instrument for Measuring Critical Appraisal Self‐Efficacy in Rheumatology Trainees
Author(s) -
Aizer Juliet,
Abramson Erika L.,
Berman Jessica R.,
Paget Stephen A.,
Frey Marianna B.,
Cooley Victoria,
Li Ying,
Hoffman Katherine L.,
Schell Julie A.,
Tiongson Michael D.,
Lin Myriam A.,
Mandl Lisa A.
Publication year - 2023
Publication title -
acr open rheumatology
Language(s) - English
Resource type - Journals
ISSN - 2578-5745
DOI - 10.1002/acr2.11505
Subject(s) - critical appraisal, Cronbach's alpha, confidence interval, psychology, discriminant validity, odds ratio, clinical psychology, medicine, competence (human resources), internal consistency, physical therapy, psychometrics, alternative medicine, social psychology, pathology
Abstract -
Objective: Self-efficacy, the internal belief that one can perform a specific task successfully, influences behavior. To promote critical appraisal of the medical literature, rheumatology training programs should foster both competence and self-efficacy for critical appraisal. This study investigated whether select items from the Clinical Research Appraisal Inventory (CRAI), an instrument measuring clinical research self-efficacy, could be used to measure critical appraisal self-efficacy (CASE).
Methods: One hundred twenty-five trainees from 33 rheumatology programs were sent a questionnaire that included two sections of the CRAI. Six CRAI items relevant to CASE were identified a priori; responses generated a CASE score (total score range 0-10; higher = greater confidence in one's ability to perform a specific task successfully). The internal structure of CASE scores and their relation to domain-concordant variables were analyzed.
Results: Questionnaires were completed by 112 of 125 (89.6%) trainees. CASE scores ranged from 0.5 to 8.2. The six CRAI items contributing to the CASE score demonstrated high internal consistency (Cronbach's α = 0.95) and unidimensionality. Criterion validity was supported by the findings that participants with higher CASE scores rated their understanding of epidemiology and biostatistics higher than that of their peers (P < 0.0001) and were more likely to report referring to studies to answer clinical questions (odds ratio 2.47, 95% confidence interval 1.41-4.33; P = 0.002). The correlation of CASE scores with the percentage of questions answered correctly was only moderate, supporting discriminant validity.
Conclusion: The six-item CASE instrument demonstrated content validity, internal consistency, discriminative capability, and criterion validity, including correlation with self-reported behavior, supporting its potential as a useful measure of critical appraisal self-efficacy.
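The internal-consistency statistic reported in the Results (Cronbach's α) can be sketched in a few lines. This is an illustrative computation only: the six items and 0-10 confidence scale mirror the CASE design described above, but the respondent matrix below is entirely hypothetical, not the study's data.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of per-respondent sums
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses: 5 trainees x 6 items on a 0-10 confidence scale
scores = np.array([
    [8, 7, 8, 7, 8, 7],
    [3, 2, 3, 2, 3, 3],
    [5, 5, 6, 5, 5, 6],
    [9, 9, 8, 9, 9, 8],
    [1, 2, 1, 2, 1, 1],
])
print(round(cronbach_alpha(scores), 2))
```

Because the made-up items track each other closely across respondents, this toy matrix yields a high α, in the same spirit as the α = 0.95 reported for the six CRAI items.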