Derivative Free Optimization in Higher Dimension
Author(s) - Ahmed Shamsuddin
Publication year - 2001
Publication title - International Transactions in Operational Research
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.032
H-Index - 52
eISSN - 1475-3995
pISSN - 0969-6016
DOI - 10.1111/1475-3995.00266
Subject(s) - simplex, simplex algorithm, dimension (graph theory), mathematical optimization, function (biology), mathematics, derivative (finance), algorithm, linear programming, computer science, combinatorics, evolutionary biology, financial economics, economics, biology
Non‐linear optimization methods that require neither explicit nor implicit derivative information about the objective function offer an alternative search strategy when derivatives are unavailable. In factorial design, the number of trials for the experimental identification method in E^m is about (m + 1). These (m + 1) equally spaced points form a geometry known as a regular simplex. The simplex method is attributed to Spendley, Hext and Himsworth. The method is improved by maintaining a set of (m + 1) points in m‐dimensional space that form a non‐regular simplex. This study suggests re‐scaling the simplex in higher dimensions during a restart phase; the direction of search is also changed when the simplex degenerates. The performance of this derivative‐free search method is measured by the number of function evaluations, the number of restart attempts, and the improvement in function value. An algorithm describing the improved method is presented and compared with the Nelder and Mead simplex method. The algorithm is also tested on an artificial neural network (ANN) training problem: training an ANN with 36 variables requires about 40 times fewer function evaluations with the improved method than with the Nelder and Mead (1965) method.
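The regular simplex the abstract refers to, (m + 1) equally spaced points in E^m, can be built with the classic Spendley, Hext and Himsworth construction. The sketch below is illustrative only, not the paper's algorithm; the function names and the edge-length parameter `c` are assumptions introduced here:

```python
import math

def regular_simplex(x0, c=1.0):
    """Return the m+1 vertices of a regular simplex in E^m with edge
    length c, anchored at base point x0 (Spendley-Hext-Himsworth style)."""
    m = len(x0)
    # Offsets p and q are chosen so that every pair of vertices is
    # exactly distance c apart (a standard regular-simplex construction).
    p = c / (m * math.sqrt(2.0)) * (math.sqrt(m + 1.0) + m - 1.0)
    q = c / (m * math.sqrt(2.0)) * (math.sqrt(m + 1.0) - 1.0)
    vertices = [list(x0)]
    for i in range(m):
        # Vertex i: add p along coordinate i and q along all others.
        vertices.append([x0[j] + (p if j == i else q) for j in range(m)])
    return vertices

def dist(a, b):
    """Euclidean distance between two points."""
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))
```

Under this sketch, a restart phase of the kind the abstract describes would amount to rebuilding the simplex around the current best vertex with a smaller edge length `c`, restoring regular geometry after the working simplex has degenerated.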