Open Access
Dropout, a basic and effective regularization method for a deep learning model: a case study
Author(s) -
Brahim Jabir,
Noureddine Falih
Publication year - 2021
Publication title -
Indonesian Journal of Electrical Engineering and Computer Science
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.241
H-Index - 17
eISSN - 2502-4760
pISSN - 2502-4752
DOI - 10.11591/ijeecs.v24.i2.pp1009-1016
Subject(s) - overfitting , artificial intelligence , deep learning , dropout (neural networks) , computer science , machine learning , convolutional neural network , regularization (linguistics) , artificial neural network , set (abstract data type) , stochastic gradient descent , programming language
Deep learning is based on a network of artificial neurons inspired by the human brain. This network is made up of tens or even hundreds of "layers" of neurons. Deep learning has many fields of application; agriculture is one of them, where it is applied to various problems such as disease detection, pest detection, and weed identification. A major challenge in deep learning is building a model that performs well not only on the training set but also on the validation set. Many techniques used in neural networks are explicitly designed to reduce overfitting, possibly at the expense of training accuracy. In this paper, a basic technique, dropout, is applied to minimize overfitting: we integrate it into a convolutional neural network model that classifies weed species and measure how it affects performance. A complementary solution, exponential linear units (ELU), is proposed to further optimize the obtained results. The results show that these proposed solutions are practical and highly accurate, making them suitable for adoption in deep learning models.
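As a rough illustration of the two techniques the abstract names, the following NumPy sketch implements inverted dropout and the ELU activation. This is not the authors' code; the function names, the dropout rate, and the ELU `alpha` value are illustrative assumptions, and a real model (such as the paper's CNN) would apply these inside a framework like Keras or PyTorch.

```python
import numpy as np

def dropout(x, rate=0.5, training=True, rng=None):
    """Inverted dropout (illustrative, not the paper's code): during
    training, zero each unit with probability `rate` and rescale the
    survivors by 1/(1-rate) so the expected activation is unchanged,
    which lets inference use the layer as an identity."""
    if not training or rate == 0.0:
        return x
    rng = rng or np.random.default_rng()
    mask = rng.random(x.shape) >= rate  # True = unit is kept
    return x * mask / (1.0 - rate)

def elu(x, alpha=1.0):
    """Exponential linear unit: identity for x > 0, smooth negative
    saturation alpha*(exp(x)-1) for x <= 0 (alpha=1.0 is a common
    default, assumed here)."""
    return np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

# Example: a hidden activation vector passed through ELU then dropout.
h = elu(np.array([-2.0, -0.5, 0.0, 0.5, 2.0]))
out = dropout(h, rate=0.5, training=True, rng=np.random.default_rng(0))
```

At inference time the dropout call is a no-op (`training=False`), which is the property that makes inverted dropout convenient: no extra rescaling is needed when the model is deployed.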
