A new approach to reliability assessment of dental caries examinations
Author(s) -
Altarakemah Yacoub,
AlSane Mona,
Lim Sungwoo,
Kingman Albert,
Ismail Amid I.
Publication year - 2013
Publication title -
Community Dentistry and Oral Epidemiology
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.061
H-Index - 101
eISSN - 1600-0528
pISSN - 0301-5661
DOI - 10.1111/cdoe.12020
Subject(s) - medicine, dentistry, orthodontics
Objectives: To evaluate the reliability of the International Caries Detection and Assessment System (ICDAS) and to identify sources of disagreement among eight Kuwaiti dentists with no prior knowledge of the system.

Methods: On the first day, a 90-min introductory e-course was delivered, followed by an examination of extracted teeth using the ICDAS coding system. Three sessions of clinical examinations were then performed. This study used only the data from the last session, in which 705 tooth surfaces of 10 patients were examined, to assess bias in caries examination and to identify the codes on which the examiners disagreed most. Bias of the ICDAS coding relative to the gold standard was evaluated using three approaches (Bland–Altman plot, maximum kappa statistic, and Bhapkar's chi-square test). Linear weighted kappa statistics were computed to assess interexaminer reliability.

Results: The marginal ICDAS distributions for most examiners differed significantly from that of the gold standard (bias present). The primary source of these marginal differences was misclassification of sound surfaces as noncavitated lesions. Interexaminer reliability of the 3-level ICDAS classification (codes 0, 1–2, and 3–6) ranged from 0.43 to 0.73, indicating substantial inconsistency between examiners. The primary source of examiner differences was disagreement in diagnosing noncavitated lesions.

Conclusion: This study highlights the importance of assessing both systematic and random sources of examiner disagreement to correctly interpret kappa measures of reliability.
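For readers unfamiliar with the two core statistics named in the abstract, the sketch below shows how a linear weighted kappa and Bhapkar's chi-square test of marginal homogeneity can be computed with standard Python libraries. This is a minimal illustration, not the authors' analysis: the synthetic data, the 80% agreement rate, and all variable names are assumptions made for the example; only the 705-surface count and the 3-level collapse of ICDAS codes (0, 1–2, 3–6) come from the abstract.

```python
# Minimal sketch of the abstract's two key statistics on hypothetical data.
import numpy as np
import pandas as pd
from sklearn.metrics import cohen_kappa_score
from statsmodels.stats.contingency_tables import SquareTable

# Synthetic paired ratings for 705 tooth surfaces, coded on the collapsed
# 3-level ICDAS scale: 0 = sound, 1 = noncavitated (ICDAS 1-2),
# 2 = cavitated (ICDAS 3-6). The 80% agreement rate is an assumption.
rng = np.random.default_rng(0)
gold = rng.integers(0, 3, size=705)
examiner = np.where(rng.random(705) < 0.8, gold, rng.integers(0, 3, size=705))

# Linear weighted kappa: chance-corrected agreement that gives partial
# credit for near-miss disagreements (e.g. sound vs. noncavitated).
kappa = cohen_kappa_score(gold, examiner, weights="linear")

# Bhapkar's chi-square test of marginal homogeneity: a significant result
# indicates systematic bias, i.e. the examiner's marginal distribution of
# codes differs from the gold standard's.
table = pd.crosstab(gold, examiner)
bhapkar = SquareTable(table).homogeneity(method="bhapkar")

print(f"linear weighted kappa = {kappa:.3f}")
print(bhapkar)  # test statistic, degrees of freedom, p-value
```

The two statistics answer different questions, which is the abstract's point: kappa summarizes random (pairwise) disagreement, while Bhapkar's test detects systematic bias that kappa alone can mask.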
