Open Access
Subgradient methods for huge-scale optimization problems
Author(s) - Yu. Nesterov
Publication year - 2013
Publication title - Mathematical Programming
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 2.358
H-Index - 131
eISSN - 1436-4646
pISSN - 0025-5610
DOI - 10.1007/s10107-013-0686-4
Subject(s) - subgradient method, mathematical optimization, minimization, coordinate descent, optimization problem, dimension, scale
We consider a new class of huge-scale problems: problems with sparse subgradients. The most important functions of this type are piecewise linear. For optimization problems whose linear operators have uniform sparsity, we suggest a very efficient implementation of subgradient iterations whose total cost depends only logarithmically on the dimension. The technique is based on a recursive update of the results of matrix/vector products and of the values of symmetric functions. It works well, for example, for matrices with few nonzero diagonals and for max-type functions. We show that the updating technique can be efficiently coupled with the simplest subgradient methods: the unconstrained minimization method of B. Polyak and the constrained minimization scheme of N. Shor. Similar results can be obtained for a new nonsmooth random variant of a coordinate descent scheme. We also present promising results of preliminary computational experiments.
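To make the recursive-update idea concrete, here is a minimal sketch (our own illustration, not the paper's code) of Polyak's method applied to the max-type function f(x) = max_i (Ax - b)_i with known optimal value f*. The names MaxTree and polyak_sparse_subgradient and the SciPy-based interface are assumptions for the example. The point is that a step along a sparse subgradient a_i changes only a few coordinates of x, so the cached residual y = Ax - b and its running maximum, kept in a binary tree, can be refreshed at the cost of the touched nonzeros instead of a full matrix/vector product.

```python
import numpy as np
import scipy.sparse as sp

class MaxTree:
    """Binary (segment) tree over m values; each point update costs
    O(log m), so a sparse change to y = Ax - b propagates cheaply."""
    def __init__(self, values):
        self.m = len(values)
        self.tree = [-np.inf] * (2 * self.m)
        self.tree[self.m:2 * self.m] = list(values)
        for i in range(self.m - 1, 0, -1):
            self.tree[i] = max(self.tree[2 * i], self.tree[2 * i + 1])

    def update(self, i, value):
        i += self.m
        self.tree[i] = value
        while i > 1:
            i //= 2
            self.tree[i] = max(self.tree[2 * i], self.tree[2 * i + 1])

    def max(self):
        return self.tree[1]

def polyak_sparse_subgradient(A, b, x0, f_star, iters=1000, tol=1e-9):
    """Sketch of Polyak's step for f(x) = max_i (Ax - b)_i, assuming
    f_star is known. The residual y = Ax - b is computed once and then
    updated recursively through the few columns touched at each step."""
    A_csr = sp.csr_matrix(A)   # fast row access: extract sparse subgradients
    A_csc = sp.csc_matrix(A)   # fast column access: propagate sparse updates
    x = np.array(x0, dtype=float)
    y = A_csr @ x - b          # full product done only once
    tree = MaxTree(y)
    for _ in range(iters):
        f_val = tree.max()
        if f_val - f_star <= tol:
            break
        i = int(np.argmax(y))          # the active constraint; a tuned
        row = A_csr.getrow(i)          # version reads argmax off the tree
        denom = row.data @ row.data    # ||a_i||^2
        if denom == 0.0:
            break
        h = (f_val - f_star) / denom   # Polyak step size
        # x changes only on supp(a_i); push each coordinate change
        # through the corresponding column of A into y and the tree
        for j, aij in zip(row.indices, row.data):
            dx = -h * aij
            x[j] += dx
            col = A_csc.getcol(j)
            for r, arj in zip(col.indices, col.data):
                y[r] += arj * dx
                tree.update(r, y[r])
    return x
```

Storing A in both CSR and CSC form is a deliberate trade of memory for speed: the CSR copy yields the sparse active row (the subgradient), while the CSC copy yields the columns through which a coordinate change propagates into y, so one iteration touches only the nonzeros involved rather than the whole matrix.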
