Multiview convolutional neural networks for lung nodule classification
Author(s) - Liu Kui, Kang Guixia
Publication year - 2017
Publication title - International Journal of Imaging Systems and Technology
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.359
H-Index - 47
eISSN - 1098-1098
pISSN - 0899-9457
DOI - 10.1002/ima.22206
Subject(s) - computer science , convolutional neural network , artificial intelligence , binary classification , binary number , embedding , nodule (geology) , receiver operating characteristic , representation (politics) , pattern recognition (psychology) , image (mathematics) , contextual image classification , ternary operation , deep learning , lung cancer , machine learning , mathematics , support vector machine , medicine , pathology , paleontology , arithmetic , politics , political science , law , biology , programming language
Abstract - To find a better way to screen for early lung cancer, and motivated by the great success of deep learning, we empirically investigate the challenge of classifying lung nodules in computed tomography (CT) images in an end-to-end manner. This article proposes multi-view convolutional neural networks (MV-CNN) for lung nodule classification. Unlike a traditional CNN, an MV-CNN takes multiple views of each input nodule. We carry out a binary classification (benign vs. malignant) and a ternary classification (benign, primary malignant, and metastatic malignant) using the Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI) database. The results show that, for both binary and ternary classification, the multi-view strategy produces higher accuracy than the single-view method, even for cases that are over-fitted. Our model achieves error rates of 5.41% and 13.91% for binary and ternary classification, respectively. Finally, the receiver operating characteristic (ROC) curve and the t-distributed stochastic neighbor embedding (t-SNE) algorithm are used to analyze the models. The results reveal that the deep features learned by the proposed model are more separable than features from the raw image space, so the multi-view strategy yields a better representation. © 2017 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 27, 12–22, 2017
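The abstract's central idea, feeding a CNN multiple views of each nodule rather than a single slice, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the patch size (32 voxels) and the choice of three orthogonal central slices (axial, coronal, sagittal) are assumptions, since the abstract does not specify how the views are constructed.

```python
import numpy as np

# Hypothetical 3D CT patch centered on a nodule; the paper's actual
# patch size and view-selection scheme are not given in the abstract.
patch = np.random.rand(32, 32, 32)

def orthogonal_views(volume):
    """Extract three orthogonal central slices (axial, coronal, sagittal),
    one common way to build multi-view inputs from a 3D volume."""
    ci, cj, ck = (s // 2 for s in volume.shape)
    axial = volume[ci, :, :]
    coronal = volume[:, cj, :]
    sagittal = volume[:, :, ck]
    # Stack into a (views, height, width) array: each 2D view would be
    # fed to one input branch of the multi-view network.
    return np.stack([axial, coronal, sagittal])

views = orthogonal_views(patch)
print(views.shape)  # (3, 32, 32)
```

In a multi-view network, each of these 2D views is typically processed by a (often weight-shared) convolutional branch, and the per-view features are fused before the final classification layer.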