A Conjugate Gradient Method with Global Convergence for Large-Scale Unconstrained Optimization Problems
Author(s) -
Shengwei Yao,
Xiwen Lu,
Zengxin Wei
Publication year - 2013
Publication title -
Journal of Applied Mathematics
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.307
H-Index - 43
eISSN - 1687-0042
pISSN - 1110-757X
DOI - 10.1155/2013/730454
Subject(s) - conjugate gradient method , nonlinear conjugate gradient method , conjugate residual method , gradient descent , gradient method , line search , convergence , mathematical optimization , mathematics , computer science
The conjugate gradient (CG) method plays a special role in solving large-scale nonlinear optimization problems because of its simplicity and very low memory requirements. This paper proposes a conjugate gradient method that is similar to the Dai-Liao conjugate gradient method (Dai and Liao, 2001) but has stronger convergence properties. The proposed method satisfies the sufficient descent condition and is globally convergent for general functions under the strong Wolfe-Powell (SWP) line search. Our numerical results show that the proposed method is very efficient on the test problems.
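To illustrate the family of methods the abstract refers to, the following sketch implements a classical nonlinear CG iteration with the Dai-Liao update parameter and a simple strong Wolfe-Powell line search. This is not the paper's exact method: the bisection-style line search, the parameter choice t = 0.1, and the restart safeguard are illustrative assumptions, and the beta formula shown is the original Dai-Liao one that the proposed method builds on.

```python
import numpy as np

def strong_wolfe(f, grad, x, d, c1=1e-4, c2=0.1, alpha=1.0, max_iter=50):
    """Bisection-style search for a step satisfying the strong Wolfe-Powell conditions:
        f(x + a d) <= f(x) + c1 * a * g^T d    (sufficient decrease)
        |grad(x + a d)^T d| <= -c2 * g^T d     (strong curvature)
    Returns the last trial step if max_iter is exhausted."""
    phi0, dphi0 = f(x), grad(x) @ d          # dphi0 < 0 for a descent direction
    lo, hi = 0.0, np.inf
    for _ in range(max_iter):
        phi = f(x + alpha * d)
        dphi = grad(x + alpha * d) @ d
        if phi > phi0 + c1 * alpha * dphi0:  # sufficient decrease violated: shrink
            hi = alpha
        elif abs(dphi) > -c2 * dphi0:        # curvature condition violated
            if dphi < 0:                     # still descending: step too short
                lo = alpha
            else:                            # past the minimizer: step too long
                hi = alpha
        else:
            return alpha
        alpha = 2.0 * alpha if hi == np.inf else 0.5 * (lo + hi)
    return alpha

def dai_liao_cg(f, grad, x0, t=0.1, tol=1e-6, max_iter=500):
    """Nonlinear CG with the Dai-Liao parameter
        beta_k = g_{k+1}^T (y_k - t * s_k) / (d_k^T y_k),
    where s_k = x_{k+1} - x_k and y_k = g_{k+1} - g_k."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        alpha = strong_wolfe(f, grad, x, d)
        x_new = x + alpha * d
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        denom = d @ y                        # positive under the Wolfe curvature condition
        beta = (g_new @ (y - t * s)) / denom if abs(denom) > 1e-12 else 0.0
        d = -g_new + beta * d
        if g_new @ d >= 0:                   # safeguard: restart with steepest descent
            d = -g_new
        x, g = x_new, g_new
    return x

# Demo on a small strictly convex quadratic; the minimizer solves A x = b.
A = np.diag([1.0, 2.0, 3.0, 4.0])
b = np.ones(4)
x_star = dai_liao_cg(lambda x: 0.5 * x @ A @ x - b @ x,
                     lambda x: A @ x - b,
                     np.zeros(4))
```

On a strictly convex quadratic like the demo problem, any CG method of this form with a Wolfe line search converges to the unique minimizer; the sufficient descent and global convergence claims of the paper concern the harder case of general nonlinear functions.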