Open Access
Robust and Interpretable Convolutional Neural Networks to Detect Glaucoma in Optical Coherence Tomography Images
Author(s) - Kaveri A. Thakoor, Sharath C. Koorathota, Donald C. Hood, Paul Sajda
Publication year - 2020
Publication title - IEEE Transactions on Biomedical Engineering
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.148
H-Index - 200
eISSN - 1558-2531
pISSN - 0018-9294
DOI - 10.1109/TBME.2020.3043215
Subject(s) - bioengineering , computing and processing , components, circuits, devices and systems , communication, networking and broadcast technologies
Recent studies suggest that deep learning systems can now achieve performance on par with medical experts in disease diagnosis. A prime example is ophthalmology, where convolutional neural networks (CNNs) have been used to detect retinal and ocular diseases. However, this type of artificial intelligence (AI) has yet to be adopted clinically, owing to questions about the robustness of the algorithms on datasets collected at new clinical sites and to a lack of explainability of AI-based predictions, especially relative to those of human experts. In this work, we develop CNN architectures that demonstrate robust detection of glaucoma in optical coherence tomography (OCT) images, and we use testing with concept activation vectors (TCAV) to infer which image concepts the CNNs rely on to generate predictions. Furthermore, we compare TCAV results with the eye fixations of clinicians to identify decision-making features shared by the AI and the human experts. We find that fine-tuned transfer learning combined with CNN ensemble learning creates end-to-end deep learning models with superior robustness to previously reported hybrid deep-learning/machine-learning models, and the TCAV/eye-fixation comparison highlights three OCT report sub-images that coincide with the areas of interest fixated upon by OCT experts when detecting glaucoma. The pipeline described here, for evaluating CNN robustness and validating the interpretable image concepts used by CNNs against the eye movements of experts, has the potential to help standardize the acceptance of new AI tools for clinical use.
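The TCAV technique mentioned in the abstract works by learning a concept activation vector (CAV) that separates a layer's activations for concept examples from those for random examples, then measuring how often a class's gradient at that layer points in the CAV direction. The following is a minimal numpy sketch of that idea on synthetic activations; the dimensionality, the logistic-regression training loop, and all data are illustrative assumptions, not the paper's code.

```python
# Illustrative TCAV sketch on synthetic data (not the authors' implementation).
import numpy as np

rng = np.random.default_rng(0)

def train_cav(concept_acts, random_acts, lr=0.1, steps=200):
    """Fit a simple logistic-regression separator between concept and
    random activations; its unit-normalized weight vector is the CAV."""
    X = np.vstack([concept_acts, random_acts])
    y = np.r_[np.ones(len(concept_acts)), np.zeros(len(random_acts))]
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))       # sigmoid predictions
        w += lr * X.T @ (y - p) / len(y)       # gradient ascent step
    return w / np.linalg.norm(w)

def tcav_score(gradients, cav):
    """Fraction of examples whose class gradient has a positive
    projection onto the CAV (the TCAV score)."""
    return float(np.mean(gradients @ cav > 0))

# Synthetic "layer activations": concept examples are shifted along a
# hidden direction that stands in for an interpretable image concept.
d = 16
direction = rng.normal(size=d)
direction /= np.linalg.norm(direction)
concept_acts = rng.normal(size=(100, d)) + 2.0 * direction
random_acts = rng.normal(size=(100, d))
cav = train_cav(concept_acts, random_acts)

# Synthetic gradients of the class logit w.r.t. the layer activations,
# biased toward the concept direction, so the score should be high.
grads = rng.normal(size=(200, d)) + 1.5 * direction
print(f"TCAV score: {tcav_score(grads, cav):.2f}")
```

In the paper's setting, the activations and gradients would come from a layer of the glaucoma-detection CNN, and the concept sets from sub-images of the OCT report; a score well above 0.5 indicates that the concept is influential for the class prediction.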
