Improved Quantification of Myocardium Scar in Late Gadolinium Enhancement Images: Deep Learning Based Image Fusion Approach
Author(s) -
Fahmy Ahmed S.,
Rowin Ethan J.,
Chan Raymond H.,
Manning Warren J.,
Maron Martin S.,
Nezafat Reza
Publication year - 2021
Publication title -
Journal of Magnetic Resonance Imaging
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.563
H-Index - 160
eISSN - 1522-2586
pISSN - 1053-1807
DOI - 10.1002/jmri.27555
Subject(s) - segmentation, convolutional neural network, magnetic resonance imaging, medicine, artificial intelligence, deep learning, computer science, steady-state free precession imaging, cardiac magnetic resonance, nuclear medicine, image fusion, pattern recognition (psychology), radiology, image (mathematics)
Background - Quantification of myocardial scarring in late gadolinium enhancement (LGE) cardiac magnetic resonance imaging can be challenging due to low scar-to-background contrast and low image quality. To resolve ambiguous LGE regions, experienced readers often use conventional cine sequences to accurately identify the myocardial borders.

Purpose - To develop a deep learning model that combines LGE and cine images to improve the robustness and accuracy of LGE scar quantification.

Study Type - Retrospective.

Population - A total of 191 hypertrophic cardiomyopathy patients: 1) 162 patients from two sites randomly split into training (50%; 81 patients), validation (25%; 40 patients), and testing (25%; 41 patients); and 2) an external testing dataset (29 patients) from a third site.

Field Strength/Sequence - 1.5T; inversion-recovery segmented gradient-echo LGE and balanced steady-state free-precession cine sequences.

Assessment - Two convolutional neural networks (CNNs) were trained for myocardium and scar segmentation, one with and one without LGE-cine fusion. For the CNN with fusion, the input was two aligned LGE and cine images at a matched cardiac phase and anatomical location. For the CNN without fusion, only LGE images were used as input. Manual segmentation of the datasets served as the reference standard.

Statistical Tests - Manual and CNN-based quantifications of LGE scar burden and of myocardial volume were assessed using Pearson linear correlation coefficients (r) and Bland-Altman analysis.

Results - Both CNN models showed strong agreement with manual quantification of LGE scar burden and myocardial volume. The CNN with LGE-cine fusion was more robust than the CNN without fusion, allowing successful segmentation of significantly more slices (603 [95%] vs. 562 [89%] of 635 slices; P < 0.001). The CNN with LGE-cine fusion also showed better agreement with manual quantification of LGE scar burden than the CNN without fusion (%Scar_LGE-cine = 0.82 × %Scar_manual, r = 0.84 vs. %Scar_LGE = 0.47 × %Scar_manual, r = 0.81) and of myocardial volume (Volume_LGE-cine = 1.03 × Volume_manual, r = 0.96 vs. Volume_LGE = 0.91 × Volume_manual, r = 0.91).

Data Conclusion - CNN-based LGE-cine fusion can improve the robustness and accuracy of automated scar quantification.

Level of Evidence - 3

Technical Efficacy - 1
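The Assessment section describes feeding the fusion CNN two aligned images (LGE plus cine at a matched cardiac phase and location). A common way to realize such a two-image input is channel-wise stacking; the sketch below is only an illustration of that input construction under assumed 2D slice arrays, not a reproduction of the study's actual architecture or preprocessing:

```python
import numpy as np

def make_fusion_input(lge_slice: np.ndarray, cine_slice: np.ndarray) -> np.ndarray:
    """Stack an aligned LGE/cine slice pair into a 2-channel network input.

    Hypothetical helper: the paper does not specify this exact function;
    it only illustrates combining the two modalities as input channels.
    """
    if lge_slice.shape != cine_slice.shape:
        raise ValueError("LGE and cine slices must be aligned on the same grid")

    def norm(img: np.ndarray) -> np.ndarray:
        # Min-max normalize each modality so neither dominates the input scale.
        img = img.astype(np.float32)
        rng = img.max() - img.min()
        return (img - img.min()) / rng if rng > 0 else img * 0.0

    # Channel-first layout (2, H, W), as typically expected by CNN frameworks.
    return np.stack([norm(lge_slice), norm(cine_slice)], axis=0)

rng = np.random.default_rng(0)
lge = rng.random((128, 128))
cine = rng.random((128, 128))
x = make_fusion_input(lge, cine)
print(x.shape)  # (2, 128, 128)
```

The CNN without fusion would instead receive only the (normalized) LGE slice as a single-channel input; everything else about the segmentation pipeline can stay identical, which is what makes the two variants directly comparable.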
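The Statistical Tests section names Pearson linear correlation and Bland-Altman analysis for comparing CNN-derived and manual measurements. The following self-contained sketch shows both computations on made-up scar-burden values (the numbers are illustrative only, not the study's data):

```python
import numpy as np

def pearson_r(x, y) -> float:
    """Pearson linear correlation coefficient between paired measurements."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc)))

def bland_altman(x, y):
    """Return (bias, lower LoA, upper LoA) for paired measurements x - y.

    Limits of agreement (LoA) are bias +/- 1.96 * SD of the differences.
    """
    diff = np.asarray(x, float) - np.asarray(y, float)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical paired scar-burden values (% of myocardium), one pair per case.
manual = np.array([5.0, 12.0, 20.0, 8.0, 15.0])  # manual reference
cnn = np.array([4.5, 11.0, 19.0, 9.0, 14.0])     # hypothetical CNN output

r = pearson_r(manual, cnn)
bias, lo, hi = bland_altman(cnn, manual)
print(round(r, 3), round(bias, 3))  # 0.991 -0.5
```

A strong r with a bias near zero and narrow limits of agreement is the pattern the abstract reports for the fusion model (e.g., Volume_LGE-cine = 1.03 × Volume_manual, r = 0.96).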
