Learning Rate Optimization in CNN for Accurate Ophthalmic Classification
Author(s) - Mahmoud Smaida, Serhii Yaroshchak, Ahmed Y. Ben Sasi
Publication year - 2021
Publication title - International Journal of Innovative Technology and Exploring Engineering
Language(s) - English
Resource type - Journals
ISSN - 2278-3075
DOI - 10.35940/ijitee.b8259.0210421
Subject(s) - artificial intelligence, convolutional neural network, computer science, deep learning, machine learning, adaptive optimization, generalization, python (programming language), artificial neural network, mathematics, mathematical analysis, operating system
The learning rate is one of the most important hyper-parameters for model training and generalization. Many recent studies have shown that optimizing the learning rate schedule is very useful for training deep neural networks to obtain accurate and efficient results. In this paper, different learning rate schedules and several widely used optimization techniques are compared in order to measure the accuracy of a convolutional neural network (CNN) model that classifies four ophthalmic conditions. A deep CNN based on Keras and TensorFlow was implemented in Python and applied to a database of 1,692 images covering four types of ophthalmic cases: glaucoma, myopia, diabetic retinopathy, and normal eyes. The CNN model was trained on a Google Colab GPU with different learning rate schedules and adaptive learning algorithms. Constant learning rate, time-based decay, step-based decay, exponential decay, and adaptive learning rate optimization techniques for deep learning are addressed. The Adam adaptive learning rate method outperformed the other optimization techniques and achieved the best model accuracy: 92.58% on the training set and 80.49% on the validation set.
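The decay schedules named above follow their standard formulations. The sketch below shows one way such schedules and the Adam optimizer can be set up with Keras callbacks; the initial rate, decay constants, drop factor, and the model/data names in the commented usage lines are illustrative assumptions, not values or code reported by the authors.

import math
import tensorflow as tf

INITIAL_LR = 0.01  # assumed starting learning rate (not reported in the abstract)

def time_based_decay(epoch, lr):
    # lr_t = lr_0 / (1 + decay * t)
    decay = 0.01
    return INITIAL_LR / (1.0 + decay * epoch)

def step_decay(epoch, lr):
    # lr_t = lr_0 * drop^floor(t / epochs_drop)
    drop, epochs_drop = 0.5, 10
    return INITIAL_LR * math.pow(drop, math.floor(epoch / epochs_drop))

def exponential_decay(epoch, lr):
    # lr_t = lr_0 * exp(-k * t)
    k = 0.1
    return INITIAL_LR * math.exp(-k * epoch)

# Comparison protocol: train the same CNN once per schedule and compare
# validation accuracy (model, x_train, y_train, x_val, y_val are assumed to exist):
# model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=INITIAL_LR),
#               loss="categorical_crossentropy", metrics=["accuracy"])
# model.fit(x_train, y_train, validation_data=(x_val, y_val), epochs=50,
#           callbacks=[tf.keras.callbacks.LearningRateScheduler(step_decay)])

# Adaptive baseline: Adam maintains per-parameter step sizes and needs no
# explicit schedule; the paper reports it as the best-performing method.
# model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
#               loss="categorical_crossentropy", metrics=["accuracy"])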
