
A Survey on Prevention of Overfitting in Convolution Neural Networks Using Machine Learning Techniques
Author(s) -
M. Koteswara Rao,
V. V. R. Prasad,
P. Sai Ravi Teja,
Zindavali,
O. Phanindra Reddy
Publication year - 2018
Publication title -
International Journal of Engineering and Technology
Language(s) - English
Resource type - Journals
ISSN - 2227-524X
DOI - 10.14419/ijet.v7i2.32.15399
Subject(s) - overfitting , dropout (neural networks) , computer science , artificial neural network , artificial intelligence , machine learning , benchmark (computing) , regularization (mathematics) , deep neural networks
Deep neural nets with a large number of parameters are very powerful machine learning systems. However, overfitting is a serious problem in such networks. Large networks are also slow to use, making it difficult to deal with overfitting by combining the predictions of many different large neural nets at test time. Dropout is a technique for addressing this problem. The key idea is to randomly drop units (along with their connections) from the neural network during training. This prevents units from co-adapting too much. During training, dropout samples from an exponential number of different "thinned" networks. At test time, it is easy to approximate the effect of averaging the predictions of all these thinned networks by simply using a single unthinned network with smaller weights. This significantly reduces overfitting and gives major improvements over other regularization methods. We show that dropout improves the performance of neural networks on supervised learning tasks in vision, speech recognition, document classification and computational biology, obtaining state-of-the-art results on many benchmark data sets.
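As a concrete illustration of the mechanism described above, the following NumPy sketch implements classic dropout: during training each unit is retained with probability p_keep, sampling one "thinned" network per forward pass; at test time the full network is used with activations scaled by p_keep, which is equivalent to the single unthinned network with smaller weights. This is a minimal sketch of the technique, not code from the surveyed paper; the function name dropout_forward and the default keep probability of 0.5 are illustrative assumptions.

import numpy as np

def dropout_forward(x, p_keep=0.5, train=True):
    """Classic dropout applied to a layer's activations x.

    Training: zero each unit independently with probability
    1 - p_keep, sampling one "thinned" network per pass.
    Test: keep all units and scale activations by p_keep to
    approximate averaging the predictions of all thinned networks.
    """
    if train:
        # Binary mask: 1 keeps a unit, 0 drops it (and its connections).
        mask = (np.random.rand(*x.shape) < p_keep).astype(x.dtype)
        return x * mask
    # Single unthinned network with effectively smaller weights.
    return x * p_keep

# Example usage on a hidden-layer activation:
hidden = np.random.randn(4, 8).astype(np.float32)
train_out = dropout_forward(hidden, p_keep=0.5, train=True)   # thinned network
test_out = dropout_forward(hidden, p_keep=0.5, train=False)   # averaged network

Note that modern deep learning libraries typically use the equivalent "inverted" formulation, dividing by p_keep during training instead, so that no scaling is needed at test time.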