Iterative Alpha Expansion for estimating gradient‐sparse signals from linear measurements
Author(s) - Sheng Xu, Zhou Fan
Publication year - 2021
Publication title - Journal of the Royal Statistical Society: Series B (Statistical Methodology)
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 6.523
H-Index - 137
eISSN - 1467-9868
pISSN - 1369-7412
DOI - 10.1111/rssb.12407
Subject(s) - mathematics, gradient descent, algorithm, total variation denoising, piecewise, logarithm, Laplacian matrix, mathematical optimization, graph, combinatorics, computer science, mathematical analysis, noise reduction, artificial intelligence, artificial neural network
We consider estimating a piecewise-constant image, or a gradient-sparse signal on a general graph, from noisy linear measurements. We propose and study an iterative algorithm to minimize a penalized least-squares objective, with a penalty given by the “ℓ0-norm” of the signal’s discrete graph gradient. The method uses a non-convex variant of proximal gradient descent, applying the alpha-expansion procedure to approximate the proximal mapping in each iteration, and using a geometric decay of the penalty parameter across iterations to ensure convergence. Under a cut-restricted isometry property for the measurement design, we prove global recovery guarantees for the estimated signal. For standard Gaussian designs, the required number of measurements is independent of the graph structure, and improves upon worst-case guarantees for total-variation (TV) compressed sensing on the 1-D line and 2-D lattice graphs by polynomial and logarithmic factors, respectively. The method empirically yields lower mean-squared recovery error than TV regularization in regimes of moderate undersampling and moderate to high signal-to-noise ratio, for several examples of changepoint signals and gradient-sparse phantom images.
