Open Access
A New Hybrid Optimizer for Global Optimization Based on a Comparative Study Remarks of Classical Gradient Descent Variants
Author(s) -
Mouad Touarsi,
Driss Gretete,
Abdelmajid Elouadi
Publication year - 2021
Publication title -
Statistics, Optimization & Information Computing
Language(s) - English
Resource type - Journals
eISSN - 2311-004X
pISSN - 2310-5070
DOI - 10.19139/soic-2310-5070-1005
Subject(s) - mathematical optimization , gradient descent , stochastic gradient descent , descent direction , maxima and minima , convexity , convergence , benchmark , algorithm , mathematics , computer science , artificial neural network
In this paper, we present an empirical comparison of several Gradient Descent variants used to solve global optimization problems over large search domains. The aim is to identify which of them is most suitable for solving an optimization problem regardless of the features of the test function used. Five variants of Gradient Descent were implemented in the R language and tested on a benchmark of five test functions. Using the chi-squared test on a sample of 120 experiments, we established the dependence between the choice of variant and the performance obtained. The test functions vary in convexity and in the number of local minima, and are classified according to several criteria. We chose a range of values for each algorithm parameter, and results are compared in terms of accuracy and convergence speed. Based on the results obtained, we defined a priority of usage for those variants and contributed a new hybrid optimizer. The new optimizer is tested on a benchmark of well-known test functions, and two real applications are proposed. Except for the classical gradient descent algorithm, only stochastic versions of the variants are considered in this paper.
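The abstract compares Gradient Descent variants on benchmark test functions. The authors' implementation is in R and is not reproduced here; the following is only a minimal Python sketch of two such variants (plain gradient descent and a momentum variant) minimizing the sphere function, a standard convex benchmark with its global minimum at the origin. All function names, the learning rate, and the momentum coefficient below are illustrative assumptions, not values from the paper.

```python
# Illustrative sketch (not the authors' R code): plain gradient descent
# and a momentum variant on the sphere function f(x) = sum(x_i^2),
# a convex benchmark whose unique global minimum is the origin.

def sphere_grad(x):
    # Gradient of f(x) = sum(x_i^2) is 2*x (componentwise).
    return [2.0 * xi for xi in x]

def gradient_descent(grad, x0, lr=0.1, steps=200):
    # Classical update: x <- x - lr * grad(x).
    x = list(x0)
    for _ in range(steps):
        g = grad(x)
        x = [xi - lr * gi for xi, gi in zip(x, g)]
    return x

def momentum_descent(grad, x0, lr=0.1, beta=0.9, steps=200):
    # Momentum (heavy-ball) update: accumulate a velocity term
    # v <- beta * v + lr * grad(x), then x <- x - v.
    x = list(x0)
    v = [0.0] * len(x)
    for _ in range(steps):
        g = grad(x)
        v = [beta * vi + lr * gi for vi, gi in zip(v, g)]
        x = [xi - vi for xi, vi in zip(x, v)]
    return x

if __name__ == "__main__":
    start = [3.0, -4.0]
    print("plain GD:  ", gradient_descent(sphere_grad, start))
    print("momentum:  ", momentum_descent(sphere_grad, start))
```

On a convex function like this, both variants converge to the minimum; the paper's point is that on non-convex benchmarks with many local minima, the variants behave differently, which motivates the proposed hybrid.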
