Iteratively reweighted least squares minimization for sparse recovery
Author(s) -
Daubechies Ingrid,
DeVore Ronald,
Fornasier Massimo,
Güntürk C. Sinan
Publication year - 2010
Publication title -
Communications on Pure and Applied Mathematics
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 3.12
H-Index - 115
eISSN - 1097-0312
pISSN - 0010-3640
DOI - 10.1002/cpa.20303
Subject(s) - mathematics , restricted isometry property , hyperplane , iteratively reweighted least squares , limit point , norm (mathematics) , sequence (mathematics) , limit (mathematics) , matrix (mathematics) , weight , algorithm , compressed sensing , discrete mathematics , nonlinear least squares , mathematical analysis , pure mathematics , estimation theory , combinatorics
Under certain conditions (known as the restricted isometry property, or RIP) on the m × N matrix Φ (where m < N), vectors x ∈ ℝ^N that are sparse (i.e., have most of their entries equal to 0) can be recovered exactly from y := Φx even though Φ^(−1)(y) is typically an (N − m)-dimensional hyperplane; in addition, x is then equal to the element of Φ^(−1)(y) of minimal ℓ1-norm. This minimal element can be identified via linear programming algorithms. We study an alternative method of determining x, as the limit of an iteratively reweighted least squares (IRLS) algorithm. The main step of this IRLS finds, for a given weight vector w, the element in Φ^(−1)(y) with smallest ℓ2(w)-norm. If x^(n) is the solution at iteration step n, then the new weight w^(n) is defined by w_i^(n) := [|x_i^(n)|² + ε_n²]^(−1/2), i = 1, …, N, for a decreasing sequence of adaptively defined ε_n; this updated weight is then used to obtain x^(n+1), and the process is repeated. We prove that when Φ satisfies the RIP conditions, the sequence x^(n) converges for all y, regardless of whether Φ^(−1)(y) contains a sparse vector. If there is a sparse vector in Φ^(−1)(y), then the limit is this sparse vector, and when x^(n) is sufficiently close to the limit, the remaining steps of the algorithm converge exponentially fast (linear convergence in the terminology of numerical optimization). The same algorithm with the "heavier" weight w_i^(n) = [|x_i^(n)|² + ε_n²]^(−1+τ/2), i = 1, …, N, where 0 < τ < 1, can recover sparse solutions as well; more importantly, we show that its local convergence is superlinear and approaches a quadratic rate as τ approaches 0. © 2009 Wiley Periodicals, Inc.
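The iteration described in the abstract can be sketched in a few lines of NumPy. Each step solves the weighted least-norm problem min Σᵢ wᵢxᵢ² subject to Φx = y, whose closed form is x = DΦᵀ(ΦDΦᵀ)^(−1)y with D = diag(1/wᵢ), then reweights via wᵢ = (xᵢ² + ε²)^(−1+τ/2) (τ = 1 gives the standard exponent −1/2 from the abstract). This is a hedged sketch, not the authors' reference implementation: the ε update (the (K+1)-st largest magnitude of x divided by N) follows the rule proposed in the paper, while the ε floor and iteration cap are implementation choices added here for numerical stability.

```python
import numpy as np

def irls_sparse_recovery(Phi, y, K, tau=1.0, n_iter=30, eps0=1.0, eps_floor=1e-9):
    """IRLS sketch for sparse recovery from y = Phi @ x.

    K is the assumed sparsity level; tau in (0, 1] selects the weight
    exponent -1 + tau/2 (tau = 1 reproduces the ell_1-type weights).
    """
    m, N = Phi.shape
    w = np.ones(N)
    eps = eps0
    x = np.zeros(N)
    for _ in range(n_iter):
        D = np.diag(1.0 / w)                     # inverse weights
        G = Phi @ D @ Phi.T                      # m x m Gram matrix
        # weighted least-norm element of the affine solution set Phi x = y
        x = D @ Phi.T @ np.linalg.solve(G, y)
        # epsilon update: (K+1)-st largest magnitude of x, scaled by 1/N
        r = np.sort(np.abs(x))[::-1]
        if r[K] == 0.0:
            break                                # x is already K-sparse
        # floor is an implementation choice to keep G well-conditioned
        eps = max(min(eps, r[K] / N), eps_floor)
        w = (x**2 + eps**2) ** (-1.0 + tau / 2)  # reweighting step
    return x
```

For example, with a random Gaussian Φ of size 15 × 30 (which satisfies the RIP with high probability at small sparsity levels) and a 3-sparse x, the iterates stay on the hyperplane Φ^(−1)(y) by construction and typically converge to the sparse solution.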