Open Access
Automatic classification of dual-modality, smartphone-based oral dysplasia and malignancy images using deep learning
Author(s) -
Bofan Song,
Sumsum P. Sunny,
Ross D. Uthoff,
Sanjana Patrick,
Amritha Suresh,
Trupti Kolur,
Keerthi Gurushanth,
Afarin Anbarani,
Petra Wilder-Smith,
Moni Abraham Kuriakose,
Praveen Birur,
Jeffrey J. Rodríguez,
Rongguang Liang
Publication year - 2018
Publication title - Biomedical Optics Express
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.362
H-Index - 86
ISSN - 2156-7085
DOI - 10.1364/boe.9.005318
Subject(s) - deep learning, artificial intelligence, computer science, convolutional neural network, transfer of learning, autofluorescence, contextual image classification, pattern recognition (psychology), computer vision, image (mathematics), optics, physics, fluorescence
With the goal of screening high-risk populations for oral cancer in low- and middle-income countries (LMICs), we have developed a low-cost, portable, easy-to-use smartphone-based intraoral dual-modality imaging platform. In this paper, we present an image classification approach based on autofluorescence and white-light images using deep learning methods. The information from the autofluorescence and white-light image pair is extracted, calculated, and fused to feed the deep learning neural networks. We have investigated and compared the performance of different convolutional neural networks, transfer learning, and several regularization techniques for oral cancer classification. Our experimental results demonstrate the effectiveness of deep learning methods in classifying dual-modal images for oral cancer detection.
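The abstract describes extracting and fusing information from each autofluorescence/white-light image pair before feeding it to a network. The paper does not specify the fusion scheme here, so the following is only a minimal sketch of one plausible approach: stacking the white-light RGB channels with the autofluorescence green channel and a red-to-green intensity ratio map (a feature commonly used in autofluorescence analysis; its use here is an assumption) into a single multi-channel array a CNN could consume. The function name and channel layout are hypothetical.

```python
import numpy as np

def fuse_dual_modality(white_light, autofluorescence):
    """Fuse a white-light RGB image and an autofluorescence RGB image
    into one multi-channel array suitable as CNN input.

    white_light:      (H, W, 3) uint8 RGB image
    autofluorescence: (H, W, 3) uint8 RGB image (typically green-dominant)

    Returns a (H, W, 5) float32 array: the 3 white-light channels,
    the autofluorescence green channel, and a red/green ratio map.
    """
    wl = white_light.astype(np.float32) / 255.0
    af = autofluorescence.astype(np.float32) / 255.0
    green = af[..., 1]
    # Red-to-green intensity ratio; the epsilon avoids division by zero
    # in dark regions of the autofluorescence image.
    ratio = af[..., 0] / (green + 1e-6)
    return np.concatenate([wl, green[..., None], ratio[..., None]], axis=-1)

# Example with synthetic images at a typical CNN input size
wl = np.random.randint(0, 256, (224, 224, 3), dtype=np.uint8)
af = np.random.randint(0, 256, (224, 224, 3), dtype=np.uint8)
fused = fuse_dual_modality(wl, af)
```

The fused array could then be passed to any convolutional network whose first layer accepts five input channels; with transfer learning, as the abstract mentions, the pretrained first convolution would need to be adapted to the extra channels.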
