Unsupervised learning for deformable registration of thoracic CT and cone‐beam CT based on multiscale features matching with spatially adaptive weighting
Author(s) -
Duan Luwen,
Ni Xinye,
Liu Qi,
Gong Lun,
Yuan Gang,
Li Ming,
Yang Xiaodong,
Fu Tianxiao,
Zheng Jian
Publication year - 2020
Publication title -
Medical Physics
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.473
H-Index - 180
eISSN - 2473-4209
pISSN - 0094-2405
DOI - 10.1002/mp.14464
Subject(s) - image registration , artificial intelligence , cone beam computed tomography , computer science , metric (unit) , computer vision , affine transformation , weighting , similarity (geometry) , upsampling , deep learning , voxel , pattern recognition (psychology) , computed tomography , image (mathematics) , mathematics , medicine , radiology , operations management , pure mathematics , economics
Purpose
Cone‐beam computed tomography (CBCT) is a common on‐treatment imaging modality widely used in image‐guided radiotherapy. Fast and accurate registration between the on‐treatment CBCT and the planning CT is essential for precise adaptive radiotherapy (ART). However, existing CT–CBCT registration methods, which are mostly affine or time‐consuming intensity‐based deformable registration, still need further study owing to the considerable CT–CBCT intensity discrepancy and the artifacts in low‐quality CBCT images. In this paper, we propose a deep learning‐based CT–CBCT registration model to enable rapid and accurate CT–CBCT registration for radiotherapy.

Methods
The proposed CT–CBCT registration model consists of a registration network and an innovative deep similarity metric network. The registration network is a novel fully convolutional network adapted specifically for patch‐wise CT–CBCT registration. The metric network, going beyond intensity, automatically evaluates the high‐dimensional attribute‐based dissimilarity between the registered CT and CBCT images. In addition, considering the artifacts in low‐quality CBCT images, we add a spatial weighting (SW) block that adaptively attaches more importance to informative voxels while inhibiting the interference of artifact regions. This SW‐based metric network is expected to extract the most meaningful and discriminative deep features, and thereby form a more reliable CT–CBCT similarity measure for training the registration network.

Results
We evaluate the proposed method on a clinical thoracic CBCT and CT dataset and compare the registration results with those of several common image similarity metrics and state‐of‐the‐art registration algorithms. The proposed method achieves the highest structural similarity index (86.17 ± 5.09), the lowest target registration error of landmarks (2.37 ± 0.32 mm), and the best Dice similarity coefficient (78.71 ± 10.95) of tumor volumes. Moreover, our model also obtains a comparable distance error of lung surfaces (1.75 ± 0.35 mm).

Conclusion
The proposed model is both efficient and effective for reliable thoracic CT–CBCT registration, and can generate matched CT and CBCT images within a few seconds, which is of great significance for clinical radiotherapy.
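The core idea of the SW‐based metric — a per‐voxel feature dissimilarity reweighted so that informative voxels dominate the loss while artifact regions are suppressed — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the feature maps would come from the metric network, and the weighting rule here (a softmax over CBCT feature energy, with temperature `tau`) is an assumption standing in for the learned SW block.

```python
import numpy as np

def spatially_weighted_dissimilarity(feat_ct, feat_cbct, tau=1.0):
    """Spatially weighted dissimilarity between deep feature maps.

    feat_ct, feat_cbct: channel-first arrays of shape (C, D, H, W),
    e.g. feature maps of the warped CT patch and the CBCT patch.
    The weighting scheme (softmax over CBCT feature energy) is an
    illustrative assumption, not the paper's exact SW block.
    """
    # Per-voxel squared feature distance, summed over channels.
    dist = ((feat_ct - feat_cbct) ** 2).sum(axis=0)   # shape (D, H, W)
    # Spatial weights from CBCT feature energy: voxels with strong,
    # informative responses get larger weight; flat or artifact-like
    # regions are down-weighted.
    energy = (feat_cbct ** 2).sum(axis=0)
    w = np.exp((energy - energy.max()) / tau)         # numerically stable softmax
    w /= w.sum()
    # Weighted average of per-voxel distances -> scalar training loss.
    return float((w * dist).sum())
```

With identical feature maps the loss is exactly zero, and it grows with feature disagreement, concentrated where the CBCT features are strong — the behavior the SW block is designed to encourage.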