Generalizable, sequence‐invariant deep learning image reconstruction for subspace‐constrained quantitative MRI
Author(s) - Zheyuan Hu, Zihao Chen, Tianle Cao, HsuLei Lee, Yibin Xie, Debiao Li, Anthony G. Christodoulou
Publication year - 2025
Publication title - Magnetic Resonance in Medicine
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.696
H-Index - 225
eISSN - 1522-2594
pISSN - 0740-3194
DOI - 10.1002/mrm.30433
Subject(s) - sequence (biology) , subspace topology , mathematics , mean squared error , algorithm , artificial intelligence , pattern recognition (psychology) , combinatorics , computer science , statistics , biology , genetics
Abstract

Purpose: To develop a deep subspace learning network that can function across different pulse sequences.

Methods: A contrast-invariant component-by-component (CBC) network structure was developed and compared against a previously reported spatiotemporal multicomponent (MC) structure for reconstructing MR Multitasking images. A total of 130, 167, and 16 subjects were imaged using $T_1$, $T_1$-$T_2$, and $T_1$-$T_2$-$T_2^*$-fat fraction (FF) mapping sequences, respectively. We compared CBC and MC networks in matched-sequence experiments (same sequence for training and testing), then examined their cross-sequence performance and generalizability in unmatched-sequence experiments (different sequences for training and testing). A "universal" CBC network was also evaluated using mixed-sequence training (combining data from all three sequences). Evaluation metrics included image normalized root mean squared error (NRMSE) and Bland-Altman analyses of end-diastolic maps, both versus iteratively reconstructed references.

Results: The proposed CBC showed significantly better NRMSE than MC in both matched-sequence and unmatched-sequence experiments (p < 0.001), fewer structural details in quantitative error maps, and tighter limits of agreement. CBC was more generalizable than MC (smaller performance loss from matched-sequence to unmatched-sequence testing; p = 0.006 in $T_1$ and p < 0.001 in $T_1$-$T_2$) and additionally allowed training of a single universal network to reconstruct images from any of the three pulse sequences. The mixed-sequence CBC network performed similarly to matched-sequence CBC in $T_1$ (p = 0.178) and $T_1$-$T_2$ (p = 0.121), where training data were plentiful, and performed better in $T_1$-$T_2$-$T_2^*$-FF (p < 0.001), where training data were scarce.

Conclusion: Contrast-invariant learning of spatial features rather than spatiotemporal features improves performance and generalizability, addresses data scarcity, and offers a pathway to universal supervised deep subspace learning.
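To make the contrast between the two architectures concrete, below is a minimal PyTorch sketch (not the authors' implementation) under an assumed subspace model $X = U\Phi$, where $U$ holds $L$ spatial coefficient maps and $\Phi$ is a temporal basis. The class names (`MCNet`, `CBCNet`), layer sizes, component count, and the NRMSE definition (RMSE normalized by the root-mean-square of the reference) are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of the two network structures described in the abstract,
# under an assumed subspace model X = U @ Phi. All names and sizes are
# hypothetical illustrations, not the authors' code.
import torch
import torch.nn as nn

L = 5        # number of subspace components (assumed)
H = W = 64   # spatial grid size (assumed)

class MCNet(nn.Module):
    """Spatiotemporal multicomponent (MC) structure: all L component maps
    enter jointly as channels, so the learned weights are tied to one
    sequence's particular temporal subspace."""
    def __init__(self, n_components):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(n_components, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, n_components, 3, padding=1),
        )
    def forward(self, u):  # u: (batch, L, H, W)
        return self.net(u)

class CBCNet(nn.Module):
    """Component-by-component (CBC) structure: one shared single-channel
    network is applied to each spatial coefficient map independently, so
    it learns contrast-invariant spatial features and is agnostic to the
    number or identity of temporal components."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )
    def forward(self, u):  # u: (batch, L, H, W)
        b, l, h, w = u.shape
        out = self.net(u.reshape(b * l, 1, h, w))  # fold components into batch
        return out.reshape(b, l, h, w)

def nrmse(x, ref):
    """Assumed NRMSE definition: ||x - ref|| / ||ref||."""
    return (torch.linalg.vector_norm(x - ref) /
            torch.linalg.vector_norm(ref)).item()

u = torch.randn(1, L, H, W)  # stand-in for undersampled coefficient maps
print(nrmse(CBCNet()(u), u), nrmse(MCNet(L)(u), u))
```

Because `CBCNet`'s weights act on one component at a time, the same trained network can in principle be applied to sequences with different numbers or identities of temporal components (e.g., $T_1$ vs. $T_1$-$T_2$-$T_2^*$-FF), which is what makes the mixed-sequence "universal" training described above possible; `MCNet`'s input layer is fixed to a single sequence's component count.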
