
The most optimal performance of the Levenberg-Marquardt algorithm based on neurons in the hidden layer
Author(s) -
Hindayati Mustafidah,
Christina Priscilla Putri,
Hery Harjono,
S. Suwarsito
Publication year - 2019
Publication title -
Journal of Physics: Conference Series
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.21
H-Index - 85
eISSN - 1742-6596
pISSN - 1742-6588
DOI - 10.1088/1742-6596/1402/6/066099
Subject(s) - mean squared error , artificial neural network , algorithm , computer science , levenberg–marquardt algorithm , layer (electronics) , function (biology) , artificial intelligence , mathematics , statistics , chemistry , organic chemistry , evolutionary biology , biology
The training algorithm is the main driver of an artificial neural network. Its performance is influenced by several parameters, including the number of neurons in the input and hidden layers, the maximum number of epochs, the activation function, and the learning rate (lr). One benchmark for optimizing the performance of a training algorithm is the error, or MSE (mean squared error), it produces: the smaller the error, the more optimal the performance. Testing in a previous study found that the most optimal training algorithm, based on the smallest MSE produced, was Levenberg-Marquardt (LM), with an average MSE = 0.001 at the level of α = 5% using 10 neurons in the hidden layer. This study therefore aims to test the LM algorithm with several variations in the number of hidden-layer neurons. The LM algorithm was tested using 5 neurons in the input layer; 2, 4, 5, 7, or 9 neurons in the hidden layer; and the same parameters as the previous study. The study uses a mixed method: developing computer programs and quantitatively testing the program output data with statistical tests. The results show that the LM algorithm performed most optimally with 9 neurons in the hidden layer at the level of lr = 0.5, with the smallest error of 0.000137501 ± 0.000178355.
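The experiment described above can be sketched in code. The following is a minimal illustration, not the authors' program: it trains a single-hidden-layer network with 5 input neurons by Levenberg-Marquardt (via SciPy's `least_squares` with `method="lm"`, standing in for whatever LM implementation the study used) and compares the hidden-layer sizes tested in the study (2, 4, 5, 7, 9) by the MSE each attains. The data, network weights, and tanh activation are assumptions for the sake of a runnable example.

```python
# Hedged sketch (not the study's code): compare hidden-layer sizes for a
# 5-input, one-output network trained with Levenberg-Marquardt, ranked by MSE.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 5))   # 5 neurons in the input layer (synthetic data)
y = np.sin(X @ rng.normal(size=5))      # assumed synthetic target function

def unpack(w, n_in, n_hid):
    """Split the flat parameter vector into layer weights and biases."""
    i = 0
    W1 = w[i:i + n_in * n_hid].reshape(n_in, n_hid); i += n_in * n_hid
    b1 = w[i:i + n_hid]; i += n_hid
    W2 = w[i:i + n_hid]; i += n_hid
    b2 = w[i]
    return W1, b1, W2, b2

def residuals(w, n_hid):
    """Per-sample prediction errors; LM minimizes their sum of squares."""
    W1, b1, W2, b2 = unpack(w, 5, n_hid)
    hidden = np.tanh(X @ W1 + b1)       # assumed tanh activation in the hidden layer
    return hidden @ W2 + b2 - y

mses = {}
for n_hid in (2, 4, 5, 7, 9):           # hidden-layer sizes tested in the study
    n_params = 5 * n_hid + n_hid + n_hid + 1
    w0 = rng.normal(scale=0.5, size=n_params)
    sol = least_squares(residuals, w0, args=(n_hid,), method="lm")  # Levenberg-Marquardt
    mses[n_hid] = np.mean(sol.fun ** 2)
    print(f"hidden neurons = {n_hid}: MSE = {mses[n_hid]:.6f}")
```

In the study's setup each configuration would be run repeatedly and the resulting MSE samples compared with a statistical test at α = 5%, rather than judged from a single training run as here.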