
Open Access
Efficiency of stochastic coordinate proximal gradient methods on nonseparable composite optimization
Author(s): I. Necoara, F. Chorobura
Publication year: 2024
This paper deals with composite optimization problems whose objective function is the sum of two terms: the first has a Lipschitz continuous gradient along random subspaces and may be nonconvex, while the second is simple and differentiable, but possibly nonconvex and nonseparable. In this setting we design a stochastic coordinate proximal gradient method that takes into account the nonseparable composite form of the objective function. The algorithm achieves scalability by constructing, at each iteration, a local approximation model of the whole nonseparable objective function along a random subspace of user-determined dimension. We outline efficient techniques for selecting the random subspace, yielding an implementation with low per-iteration cost that also achieves fast convergence rates. We present a probabilistic worst-case complexity analysis for our stochastic coordinate proximal gradient method in convex and nonconvex settings; in particular, we prove high-probability bounds on the number of iterations needed to reach a given optimality tolerance. Extensive numerical results also confirm the efficiency of our algorithm.
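
The abstract describes an iteration that samples a random subspace, builds a local model of the objective along it, and takes a proximal step restricted to that subspace. Below is a minimal Python sketch of that iteration structure only, not the authors' algorithm: for simplicity it uses a random coordinate block as the subspace, a least-squares smooth term, and a separable l1 regularizer (whose block-wise prox is soft-thresholding), whereas the paper targets a differentiable, possibly nonseparable second term that requires a richer local model. All names and parameters here are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

# Illustrative problem: f(x) = 0.5 * ||A x - b||^2 (smooth term),
# g(x) = lam * ||x||_1 (separable stand-in for the paper's
# differentiable, possibly nonseparable second term).
n, m, lam = 50, 30, 0.1
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

def grad_f(x):
    return A.T @ (A @ x - b)

def soft_threshold(z, t):
    # Prox of t * ||.||_1, applied only to the sampled block.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def scpg(x0, block_size=5, iters=2000):
    """Sketch of a stochastic coordinate proximal gradient loop:
    sample a random coordinate block, form a quadratic model of f
    along that block, and take a proximal step on the block only."""
    x = x0.copy()
    for _ in range(iters):
        S = rng.choice(len(x), size=block_size, replace=False)
        # Lipschitz constant of grad f restricted to the block
        # (spectral norm of the submatrix; the paper works with
        # directional Lipschitz bounds along random subspaces).
        H = np.linalg.norm(A[:, S], 2) ** 2
        g_S = grad_f(x)[S]
        x[S] = soft_threshold(x[S] - g_S / H, lam / H)
    return x

x_est = scpg(np.zeros(n))
print(0.5 * np.linalg.norm(A @ x_est - b) ** 2 + lam * np.abs(x_est).sum())

Because each step touches only block_size coordinates, the per-iteration cost scales with the block dimension rather than the full problem size, which is the scalability mechanism the abstract highlights.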
Language(s): English
