Reduction constraints for the global optimization of NLPs
Author(s) - Liberti, Leo
Publication year - 2004
Publication title - International Transactions in Operational Research
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.032
H-Index - 52
eISSN - 1475-3995
pISSN - 0969-6016
DOI - 10.1111/j.1475-3995.2004.00438.x
Subject(s) - reduction (mathematics) , relaxation (mathematics) , upper and lower bounds , mathematical optimization , convergence , linear programming relaxation , linear programming , algorithm , mathematics , computer science
Convergence of branch‐and‐bound algorithms for the solution of nonlinear programs (NLPs) is obtained by computing ever‐tighter lower and upper bounds on the objective function. The lower bound is calculated by constructing a convex relaxation of the NLP. Reduction constraints are new linear constraints that are (a) linearly independent of the existing constraints; (b) redundant with respect to the original NLP formulation; (c) not redundant with respect to its convex relaxation. They can therefore be used to shrink the feasible region of the convex relaxation without cutting off any feasible point of the original NLP.
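The idea in the abstract can be illustrated on a small bilinear example. The problem instance below is my own illustration, not one taken from the paper: bilinear terms x·z and y·z are replaced by linearization variables w1, w2 with McCormick envelopes, and an RLT-style reduction constraint is obtained by multiplying the linear constraint x + y = 1 by z and substituting the linearization variables, giving w1 + w2 = z. This equation is redundant for the original NLP but not for its LP relaxation, so adding it tightens the lower bound without excluding any original feasible point. A minimal sketch using `scipy.optimize.linprog`:

```python
# Illustrative sketch (assumed example, not the paper's code):
# original NLP:  min  x*z + y*z - z
#                s.t. x + y = 1,  0 <= x, y, z <= 1.
# Since x + y = 1 implies x*z + y*z = z, the true optimum is 0.
from scipy.optimize import linprog

# Variable order: [x, y, z, w1, w2], with w1 ~ x*z, w2 ~ y*z.
bounds = [(0, 1)] * 5
c = [0, 0, -1, 1, 1]          # objective of the relaxation: w1 + w2 - z

# McCormick facets: w1 >= x + z - 1, w2 >= y + z - 1 (the w >= 0 facets
# are implied by the variable bounds), plus w1 <= x, w1 <= z, w2 <= y, w2 <= z.
A_ub = [
    [ 1,  0,  1, -1,  0],     # x + z - w1 <= 1
    [ 0,  1,  1,  0, -1],     # y + z - w2 <= 1
    [-1,  0,  0,  1,  0],     # w1 <= x
    [ 0,  0, -1,  1,  0],     # w1 <= z
    [ 0, -1,  0,  0,  1],     # w2 <= y
    [ 0,  0, -1,  0,  1],     # w2 <= z
]
b_ub = [1, 1, 0, 0, 0, 0]

A_eq_plain = [[1, 1, 0, 0, 0]]                 # x + y = 1
A_eq_red   = A_eq_plain + [[0, 0, -1, 1, 1]]   # reduction constraint: w1 + w2 = z
b_eq_plain, b_eq_red = [1], [1, 0]

plain = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq_plain,
                b_eq=b_eq_plain, bounds=bounds)
tight = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq_red,
                b_eq=b_eq_red, bounds=bounds)

print(plain.fun)   # lower bound without the reduction constraint: -0.5
print(tight.fun)   # with it, the bound matches the true optimum: 0.0
```

Here the reduction constraint closes the entire relaxation gap; in general it only tightens the bound, but it never removes a feasible point of the original NLP, since it is implied by x + y = 1 together with the definitions w1 = x·z and w2 = y·z.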