Improved Reliability of Automated ASPECTS Evaluation Using Iterative Model Reconstruction from Head CT Scans
Author(s) - Löffler Maximilian T., Sollmann Nico, Mönch Sebastian, Friedrich Benjamin, Zimmer Claus, Baum Thomas, Maegerlein Christian, Kirschke Jan S.
Publication year - 2021
Publication title - Journal of Neuroimaging
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.822
H-Index - 64
eISSN - 1552-6569
pISSN - 1051-2284
DOI - 10.1111/jon.12810
Subject(s) - medicine , radiology , computed tomography , iterative reconstruction , image quality , inter-rater reliability , kappa , software , artificial intelligence , medical physics , nuclear medicine , statistics , computer science
BACKGROUND AND PURPOSE
Iterative model reconstruction (IMR) has been shown to improve computed tomography (CT) image quality compared to hybrid iterative reconstruction (HIR). Alberta Stroke Program Early CT Score (ASPECTS) assessment in early stroke is particularly dependent on high image quality. The purpose of this study was to investigate the reliability of ASPECTS assessed by human raters and by software based on HIR and IMR, respectively.

METHODS
Forty-seven consecutive patients with acute anterior circulation large vessel occlusions (LVOs) and successful endovascular thrombectomy were included. ASPECTS was assessed by three neuroradiologists (one attending, two residents) and by automated software on noncontrast axial CT with HIR (iDose4; 5 mm) and IMR (5 and 0.9 mm). Two expert neuroradiologists determined the consensus ASPECTS reading using all available image data, including MRI. Agreement between the four raters (three humans, one software) and consensus was compared using square-weighted kappa (κ).

RESULTS
Human raters achieved moderate to almost perfect agreement (κ = .557-.845) with the consensus reading. The attending showed almost perfect agreement for 5 mm HIR (κ_HIR = .845), while the residents had mostly substantial agreement without clear trends across reconstructions. The software had substantial to almost perfect agreement with consensus, increasing from HIR to IMR at 5 mm and 0.9 mm slice thickness (κ_HIR = .751, κ_IMR = .777, and κ_IMR0.9 = .814). For the attending, agreement declined in the opposite direction across these reconstructions (κ_HIR = .845, κ_IMR = .763, and κ_IMR0.9 = .681).

CONCLUSIONS
Human and software ratings showed good reliability of ASPECTS across the different CT reconstructions. Human raters performed best with the reconstruction algorithm they had the most experience with (HIR for the attending). The automated software benefits from the higher resolution and better contrast of IMR with 0.9 mm slice thickness.
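The square-weighted (quadratic) kappa used here penalizes disagreements by the squared distance between ordinal scores, which suits the 0-10 ASPECTS scale. As a minimal sketch, assuming Python with scikit-learn (whose cohen_kappa_score supports quadratic weighting), agreement between one rater and the consensus could be computed as follows; the score arrays are illustrative placeholders, not the study's data.

    # Sketch: square-weighted (quadratic) Cohen's kappa between one rater's
    # ASPECTS (0-10 per patient) and the consensus reading.
    # The arrays below are hypothetical examples, NOT the study's data.
    from sklearn.metrics import cohen_kappa_score

    consensus = [10, 8, 9, 7, 10, 6, 9, 8, 5, 10]  # hypothetical consensus ASPECTS
    rater     = [10, 7, 9, 8, 10, 6, 8, 8, 6, 10]  # hypothetical single-rater ASPECTS

    # weights="quadratic" weights each disagreement by the squared score
    # difference, i.e., the "square-weighted kappa" named in the abstract.
    kappa = cohen_kappa_score(consensus, rater, weights="quadratic")
    print(f"square-weighted kappa = {kappa:.3f}")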
