Open Access
Application of deep learning techniques for characterization of 3D radiological datasets: a pilot study for detection of intravenous contrast in breast MRI
Author(s) -
Krishna Keshava Murthy,
Pierre Elnajjar,
Amin El-Rowmeim,
Hao-Hsin Shih,
Ian Pan,
Richard Kinh Gian Do,
Krishna Juluru
Publication year - 2019
Publication title -
Proceedings of SPIE
Language(s) - English
Resource type - Conference proceedings
SCImago Journal Rank - 0.192
H-Index - 176
pISSN - 0277-786X
DOI - 10.1117/12.2513809
Subject(s) - DICOM, computer science, scanner, convolutional neural network, artificial intelligence, deep learning, medical imaging, contrast (vision), computer vision, image quality, image (mathematics)
Categorization of radiological images according to characteristics such as modality, scanner parameters, and body part is important for quality control, clinical efficiency, and research. The metadata associated with images stored in the DICOM format reliably captures scanner settings such as tube current in CT or echo time (TE) in MRI. Other parameters, such as image orientation, body part examined, and presence of intravenous contrast, are not inherent to the scanner settings and therefore require user input, which is prone to human error. There is a general need for automated approaches that appropriately categorize images, even by parameters that are not inherent to the scanner settings. These approaches should be able to process both planar 2D images and full 3D scans. In this work, we present a deep learning based approach for automatically detecting one such parameter: the presence or absence of intravenous contrast in 3D MRI scans. Contrast is manually injected by radiology staff during the imaging examination, and its presence cannot be automatically recorded in the DICOM header by the scanner. Our classifier is a convolutional neural network (CNN) based on the ResNet architecture. Our data consisted of 1000 breast MRI scans (500 with and 500 without intravenous contrast), split 80%/20% for training and testing the CNN, respectively. The labels for the scans were obtained from the series descriptions created by certified radiological technologists. Preliminary results are very promising, with an area under the ROC curve (AUC) of 0.98 and sensitivity and specificity of 1.0 and 0.9, respectively (at the optimal ROC cut-off point), demonstrating potential usefulness in both clinical and research settings.
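
To make the reported pipeline concrete, the sketch below shows one way such a classifier could be set up in PyTorch: a ResNet backbone with a single-logit head for the binary contrast/no-contrast decision, an 80%/20% split of the labeled scans, and an evaluation routine that computes the ROC AUC on the held-out test portion. The paper does not publish its implementation; the choice of ResNet-18, the 3-channel per-image input convention, and all hyperparameters here are illustrative assumptions, not the authors' code.

    # Minimal sketch, assuming a ResNet-18 backbone and scikit-learn for the AUC;
    # the paper's actual architecture details and preprocessing are not specified here.
    import torch
    import torch.nn as nn
    from torch.utils.data import random_split
    from torchvision.models import resnet18
    from sklearn.metrics import roc_auc_score

    class ContrastClassifier(nn.Module):
        """Binary classifier: contrast-enhanced vs. non-contrast MRI."""
        def __init__(self):
            super().__init__()
            self.backbone = resnet18()  # ResNet backbone, randomly initialized
            # Replace the 1000-class ImageNet head with a single logit.
            self.backbone.fc = nn.Linear(self.backbone.fc.in_features, 1)

        def forward(self, x):           # x: (N, 3, H, W)
            return self.backbone(x).squeeze(1)  # raw logits, shape (N,)

    def split_80_20(dataset):
        """Split a labeled dataset 80%/20% for training and testing."""
        n_train = int(0.8 * len(dataset))
        return random_split(dataset, [n_train, len(dataset) - n_train])

    def evaluate(model, test_loader, device="cpu"):
        """Compute ROC AUC of predicted contrast probabilities on the test split."""
        model.eval()
        probs, labels = [], []
        with torch.no_grad():
            for images, targets in test_loader:
                logits = model(images.to(device))
                probs.extend(torch.sigmoid(logits).cpu().tolist())
                labels.extend(targets.tolist())
        return roc_auc_score(labels, probs)

In practice, per-image predictions for a 3D scan would still need to be aggregated (for example, averaged) into a single scan-level probability before comparison with the series-description labels; the aggregation strategy used in the paper is not detailed in the abstract.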
