A Method for Minimizing a Sum of Squares of Non-Linear Functions Without Calculating Derivatives
Author(s) - M. J. D. Powell
Publication year - 1965
Publication title - The Computer Journal
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.319
H-Index - 64
eISSN - 1460-2067
pISSN - 0010-4620
DOI - 10.1093/comjnl/7.4.303
Subject(s) - overdetermined system , mathematics , generalization , least squares function approximation , explained sum of squares , linear least squares , non linear least squares , linear equation , set (abstract data type) , convergence (economics) , total least squares , residual sum of squares , generalized least squares , linear model , mathematical analysis , computer science , statistics , regression analysis , estimator , economics , programming language , economic growth
The minimum of a sum of squares can often be found very efficiently by applying a generalization of the least squares method for solving overdetermined linear simultaneous equations. This paper describes and discusses an original method that achieves comparable convergence but, unlike the classical procedure, does not require any derivatives. The number of times the individual terms of the sum of squares must be calculated is approximately proportional to the number of variables: solving a set of fifty non-linear equations in fifty unknowns required the left-hand sides of the equations to be evaluated fewer than two hundred times.
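To illustrate the classical procedure the abstract refers to, the sketch below performs Gauss–Newton iterations on a sum of squares, replacing analytic derivatives with a forward-difference Jacobian. This is not Powell's actual algorithm (which builds up derivative information from function values along its search directions), only a minimal stand-in showing the structure of the linearized least-squares step; the function names and tolerances are illustrative assumptions.

```python
import numpy as np

def gauss_newton_fd(f, x0, h=1e-7, tol=1e-10, max_iter=50):
    """Minimize sum(f(x)**2) via Gauss-Newton steps, estimating the
    Jacobian by forward differences (illustrative; not Powell's scheme)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = f(x)                       # residual vector at the current point
        J = np.empty((r.size, x.size))
        for j in range(x.size):        # one extra evaluation per variable
            xp = x.copy()
            xp[j] += h
            J[:, j] = (f(xp) - r) / h
        # Solve the linearized least-squares problem  J dx ≈ -r
        dx, *_ = np.linalg.lstsq(J, -r, rcond=None)
        x = x + dx
        if np.linalg.norm(dx) < tol:
            break
    return x

# Example: a 2x2 non-linear system r(x) = 0, solved as a least-squares problem
def residuals(x):
    return np.array([x[0]**2 + x[1] - 2.0,
                     x[0] + x[1]**2 - 2.0])

sol = gauss_newton_fd(residuals, [0.5, 0.5])
```

Each iteration costs n extra function evaluations for the Jacobian, which matches the abstract's point that the evaluation count grows roughly in proportion to the number of variables.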
