Intra‐ and inter‐examiner agreement when assessing radiographic implant bone levels: Differences related to brightness, accuracy, participant demographics and implant characteristics
Author(s) -
Walton Terry R.,
Layton Danielle M.
Publication year - 2018
Publication title -
Clinical Oral Implants Research
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 2.407
H-Index - 161
eISSN - 1600-0501
pISSN - 0905-7161
DOI - 10.1111/clr.13290
Subject(s) - kappa, radiography, demographics, medicine, thread (computing), Cohen's kappa, brightness, implant, nuclear medicine, orthodontics, dentistry, surgery, mathematics, statistics, computer science, optics, demography, physics, geometry, sociology, operating system
Objectives: To evaluate intra- and inter-examiner agreement of radiographic marginal bone level (MBL) assessment around Brånemark single implants, and whether agreement was related to radiograph brightness, discrimination level (accuracy), participant demographics or implant characteristics.

Materials and Methods: Seventy-four participants assessed MBLs of 100 digital radiographs twice with normal brightness and twice with increased brightness. Cohen's Kappa was used to calculate intra-examiner agreement (with and without increased brightness, both to the same thread and within one thread) and inter-examiner agreement relative to the group mode for the first assessments (with and without increased brightness, to the same thread and within one thread). Relationships between agreement, thread discrimination level (accuracy), brightness, and participant and implant characteristics were explored.

Results: When assessing 100 "Normal" radiographs twice, a participant on average assessed 24% differently from themselves (poor intra-examiner agreement, median Kappa 0.58, range 0.21–0.82) and 28% differently from other participants (poor inter-examiner agreement, median Kappa 0.53, range 0.05–0.80). Agreement within examiners improved when radiographs were "Bright" (median Kappa 0.58 vs. 0.62, p < 0.001, accuracy to same thread; median Kappa 0.94 vs. 0.96, p < 0.001, accuracy within one thread). Agreement between examiners was neither better nor worse when radiographs were "Bright" (median Kappa 0.53 vs. 0.55, p = 0.64, accuracy to same thread; median Kappa 0.93 vs. 0.93, p = 0.23, accuracy within one thread). Intra- and inter-examiner agreement were both lower when accuracy to the same thread was required (p < 0.001, p < 0.001). Neither intra- nor inter-examiner agreement was related to age, time since graduation, specialty, viewing device, implant experience, external hex familiarity, peri-implantitis treatment experience, or implant location or width (p-values 0.05–0.999). Intra-examiner agreement increased across dental assistants (n = 11), general dentists (n = 16) and specialists (n = 47) ("Bright" assessments, p = 0.045; median Kappas 0.55, 0.60 and 0.65, respectively), and was higher for females (n = 8; males, n = 58) ("Normal" assessments, p = 0.019; median 0.68 vs. 0.55), although female numbers were low.

Conclusions: Agreement within and between examiners when assessing MBLs was poor. Disagreement occurred around 25% of the time, potentially affecting the consistency of disease assessments. No participant or implant characteristic clearly affected agreement. Brighter radiographs improved intra-examiner agreement. Overall, perceived MBL changes below 1 mm are likely due to human, not biological, variation.
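The agreement statistic used throughout the abstract is Cohen's Kappa, which corrects observed agreement for the agreement expected by chance. As a minimal illustrative sketch (the data below are hypothetical, not from the study), kappa for two sets of thread-level assessments can be computed as follows:

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa: chance-corrected agreement between two raters
    (or two assessments by the same rater) over the same items."""
    assert len(ratings_a) == len(ratings_b)
    n = len(ratings_a)
    # Observed agreement: fraction of items rated identically.
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Chance agreement: from each rater's marginal category frequencies.
    count_a, count_b = Counter(ratings_a), Counter(ratings_b)
    p_e = sum(count_a[c] * count_b[c] for c in count_a) / n**2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical MBL thread scores from two assessments of 10 radiographs.
first  = [2, 3, 3, 1, 2, 4, 2, 3, 1, 2]
second = [2, 3, 2, 1, 2, 4, 3, 3, 1, 2]
print(round(cohens_kappa(first, second), 3))  # 0.714
```

Here agreement "to the same thread" corresponds to exact-match kappa as above; agreement "within one thread" would additionally count ratings differing by a single thread as concordant (a weighted-kappa variant), which is why those Kappa values in the Results are much higher.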
