Career caseload predicts interobserver agreement on the final classification of a mammogram
Author(s) - Abdelrahman Mostafa A, Rawashdeh Mohammad A, McEntee Mark, Abu Tahoun Laila, Brennan Patrick
Publication year - 2019
Publication title - Journal of Medical Imaging and Radiation Oncology
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.31
H-Index - 43
eISSN - 1754-9485
pISSN - 1754-9477
DOI - 10.1111/1754-9485.12859
Subject(s) - medicine , kappa , subspecialty , concordance , cohen's kappa , mammography , radiology , nuclear medicine , breast cancer , family medicine , cancer , statistics , philosophy , linguistics , mathematics
Differences in radiologists' experience can introduce interobserver variability in the reading of mammograms. This work investigated the effect of radiologists' experience on agreement on the final classification of a mammogram.

Methods: This was a cross-sectional study. Seventeen radiologists were asked to provide their final impression on 60 mammogram cases. Experience parameters included breast subspecialty, years reading mammograms, cases read per year and career caseload. Career caseload was calculated by multiplying years reading mammograms by the average number of cases read per year. Interobserver agreement was calculated using Cohen's kappa (κ). Differences in κ between radiologist groups were compared using the independent-samples t-test and analysis of variance.

Results: The average interobserver agreement was 0.25 (fair). A small but significant difference was found between breast radiologists and general radiologists (κ = 0.21 and 0.29, respectively; P = 0.019). Years reading mammograms and cases read per year did not significantly affect interobserver agreement (P = 0.056 and P = 0.273, respectively). Radiologists with a career caseload of at least 2500 cases showed significantly higher consistency than those who had read fewer cases: κ was 0.33 for a career caseload of 2500-4000 cases and 0.28 for >4000 cases, versus 0.17 for <2500 cases (P = 0.001).

Conclusion: A fair level of interobserver agreement on the final classification of a mammogram was demonstrated. Career caseload was the experience parameter most strongly associated with interobserver agreement. Training strategies aiming to increase radiologists' career caseload may be beneficial.
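As an illustration of the analysis the Methods describe, the minimal Python sketch below derives career caseload as years multiplied by average cases per year, computes pairwise Cohen's kappa between readers, and compares mean κ between caseload groups with an independent-samples t-test. All reader names, ratings and caseload figures are synthetic placeholders, not the study's data.

from itertools import combinations

import numpy as np
from scipy import stats
from sklearn.metrics import cohen_kappa_score

# Hypothetical readers: final classification on a 5-point scale for 60 cases,
# plus the two components of career caseload (values are illustrative only).
rng = np.random.default_rng(0)
readers = {
    # name: (years reading mammograms, average cases read per year, ratings)
    "reader_a": (10, 400, rng.integers(1, 6, size=60)),
    "reader_b": (3, 500, rng.integers(1, 6, size=60)),
    "reader_c": (15, 300, rng.integers(1, 6, size=60)),
    "reader_d": (2, 600, rng.integers(1, 6, size=60)),
}

# Career caseload = years reading mammograms x average cases read per year.
caseload = {name: years * per_year
            for name, (years, per_year, _) in readers.items()}

# Pairwise interobserver agreement via Cohen's kappa; each reader's mean
# kappa against all other readers summarises that reader's consistency.
pair_kappa = {
    (a, b): cohen_kappa_score(readers[a][2], readers[b][2])
    for a, b in combinations(readers, 2)
}
mean_kappa = {
    name: np.mean([k for pair, k in pair_kappa.items() if name in pair])
    for name in readers
}

# Group readers by career caseload (<2500 vs >=2500) and compare mean kappa
# with an independent-samples t-test, mirroring the paper's group comparison.
low = [mean_kappa[n] for n in readers if caseload[n] < 2500]
high = [mean_kappa[n] for n in readers if caseload[n] >= 2500]
t, p = stats.ttest_ind(low, high)
print(f"mean kappa: low-caseload {np.mean(low):.2f}, "
      f"high-caseload {np.mean(high):.2f}, P = {p:.3f}")

With 17 readers, the same pairwise κ matrix would feed the paper's three-way caseload comparison (<2500, 2500-4000, >4000) via analysis of variance, e.g. stats.f_oneway over the three groups.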