
Development of a hybrid framework to characterize red lesions for early detection of diabetic retinopathy
Author(s) -
Deepashree Devaraj,
Sachin Kumar
Publication year - 2019
Publication title -
Indonesian Journal of Electrical Engineering and Computer Science
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.241
H-Index - 17
eISSN - 2502-4760
pISSN - 2502-4752
DOI - 10.11591/ijeecs.v13.i3.pp962-973
Subject(s) - diabetic retinopathy , fundus (eye) , retinal , computer science , workload , artificial intelligence , grading (engineering) , software , optic disc , ophthalmology , computer vision , optometry , medicine , diabetes mellitus , operating system , civil engineering , engineering , endocrinology
Diabetic retinopathy (DR) is one of the leading causes of blindness, affecting people globally. Currently, ophthalmologists must inspect an enormous number of images in order to perform mass screening for diabetic retinopathy. In this paper, an efficient computer-aided system based on a hybrid framework is proposed for the early diagnosis of DR by extracting early DR lesions such as microaneurysms and hemorrhages. The development of such a screening system would decrease the workload of ophthalmologists, since they would only need to examine those retinal images that the system flags as abnormal. The retinal images, obtained from standard retinal databases and hospitals, are pre-processed, followed by the detection and elimination of the blood vessels, optic disc, and exudates. A Quick Propagation neural network is used for training and testing on the retinal fundus images, since it has the fastest execution time. Linear classification and multi-class classification of the retinal fundus images are performed for the classification and grading of the images into normal and abnormal using the Alyuda NeuroIntelligence software. A patient database is created using MySQL to store the required patient details, and a graphical user interface is developed for efficient use of the system. The execution time of the system is found to be 7-9 seconds, and it is tested on 270 retinal fundus images. The precision and accuracy of the algorithm are 92.5% and 93.9%, respectively.
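The abstract does not specify the exact image-processing operations used for pre-processing and red-lesion candidate extraction. The following Python/OpenCV sketch is only a rough illustration of that kind of pipeline, under assumed choices (green-channel extraction, CLAHE contrast enhancement, morphological closing, and a fixed threshold); the function structure and all parameter values are illustrative and are not taken from the paper.

```python
# Hypothetical sketch of fundus pre-processing and dark-lesion candidate
# extraction; parameter values are illustrative, not from the paper.
import cv2

def dark_lesion_candidates(fundus_bgr):
    # The green channel typically gives the best contrast for red lesions
    # (microaneurysms and hemorrhages) against the retinal background.
    green = fundus_bgr[:, :, 1]

    # Contrast-limited adaptive histogram equalization to normalize illumination.
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(green)

    # Morphological closing with a large elliptical kernel fills in dark
    # structures; subtracting the original leaves those dark structures
    # (vessels plus red lesions) as a residue image.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (25, 25))
    closed = cv2.morphologyEx(enhanced, cv2.MORPH_CLOSE, kernel)
    dark_residue = cv2.subtract(closed, enhanced)

    # Threshold the residue to obtain a binary candidate mask. Blood vessels
    # and the optic disc would still have to be removed from this mask, as
    # the paper describes, before lesion classification.
    _, mask = cv2.threshold(dark_residue, 15, 255, cv2.THRESH_BINARY)
    return mask

if __name__ == "__main__":
    img = cv2.imread("fundus.png")  # any retinal fundus image
    cv2.imwrite("candidates.png", dark_lesion_candidates(img))
```

In a pipeline like the one described, the resulting candidate mask would feed the vessel and optic-disc elimination stage, with the surviving regions passed to the neural-network classifier for grading.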