Optimal estimation of $\ell_1$-regularization prior from a regularized empirical Bayesian risk standpoint
Author(s) -
Hui Huang,
Eldad Haber,
Lior Horesh
Publication year - 2012
Publication title -
Inverse Problems and Imaging
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.755
H-Index - 40
eISSN - 1930-8345
pISSN - 1930-8337
DOI - 10.3934/ipi.2012.6.447
Subject(s) - regularization , prior probability , inverse problem , mathematical optimization , mathematics , Bayesian probability , matrix (mathematics) , algorithm , computer science , statistics
We address the problem of prior matrix estimation for the solution of $\ell_1$-regularized ill-posed inverse problems. From a Bayesian viewpoint, we show that such a matrix can be regarded as an influence matrix in a multivariate $\ell_1$-Laplace density function. Assuming a training set is given, the prior matrix design problem is cast as a maximum likelihood term with an additional sparsity-inducing term. This formulation results in an unconstrained yet nonconvex optimization problem. Memory requirements, as well as computation of the nonlinear, nonsmooth subgradient equations, are prohibitive for large-scale problems; we therefore introduce an iterative algorithm to design efficient priors for such problems. We further demonstrate that solutions of ill-posed inverse problems obtained by $\ell_1$-regularization with the learned prior matrix generally outperform commonly used regularization techniques in which the prior matrix is chosen a priori.
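To make the setting concrete, the following is a minimal, hypothetical sketch (not the paper's algorithm) of the kind of $\ell_1$-regularized inverse problem the learned prior matrix would plug into: minimizing $\frac{1}{2}\|Ax-b\|_2^2 + \lambda\|Wx\|_1$ via the standard ISTA proximal-gradient iteration. For simplicity the prior matrix $W$ is assumed diagonal (weights `w`), so the proximal step is a per-coordinate soft-threshold; a full learned matrix, as in the paper, would require a different solver.

```python
import numpy as np

def ista_weighted_l1(A, b, w, lam=0.1, step=None, iters=500):
    """Minimize 0.5*||Ax - b||^2 + lam * sum_i w_i |x_i| via ISTA.

    A diagonal prior matrix W = diag(w) is assumed for illustration;
    it is a special case of a general learned prior matrix.
    """
    if step is None:
        # Step size 1/L with L = ||A||_2^2, the Lipschitz constant
        # of the gradient of the data-fit term.
        step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = A.T @ (A @ x - b)          # gradient of the data-fit term
        z = x - step * g               # forward (gradient) step
        thr = step * lam * w           # per-coordinate threshold
        x = np.sign(z) * np.maximum(np.abs(z) - thr, 0.0)  # soft-threshold
    return x

# Small underdetermined demo: 20 measurements, 50 unknowns, sparse truth.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50))
x_true = np.zeros(50)
x_true[[3, 17, 41]] = [2.0, -1.5, 1.0]
b = A @ x_true
w = np.ones(50)                        # uniform prior weights (no learning)
x_hat = ista_weighted_l1(A, b, w, lam=0.05)
```

Learning the prior amounts to choosing `w` (or a full matrix $W$) from training data rather than fixing it a priori, which is the design problem the paper addresses.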