Density Estimation by Total Variation Penalized Likelihood Driven by the Sparsity ℓ1 Information Criterion
Author(s) - Sylvain Sardy, Paul Tseng
Publication year - 2010
Publication title - Scandinavian Journal of Statistics
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.359
H-Index - 65
eISSN - 1467-9469
pISSN - 0303-6898
DOI - 10.1111/j.1467-9469.2009.00672.x
Subject(s) - mathematics , estimator , penalty method , smoothness , mathematical optimization , density estimation , differentiable function , statistics , mathematical analysis
Abstract - We propose a non‐linear density estimator, which is locally adaptive, like wavelet estimators, and positive everywhere, without a log‐ or root‐transform. This estimator is based on maximizing a non‐parametric log‐likelihood function regularized by a total variation penalty. The smoothness is driven by a single penalty parameter, and to avoid cross‐validation, we derive an information criterion based on the idea of universal penalty. The penalized log‐likelihood maximization is reformulated as an ℓ1‐penalized strictly convex programme whose unique solution is the density estimate. A Newton‐type method cannot be applied to calculate the estimate because the ℓ1‐penalty is non‐differentiable. Instead, we use a dual block coordinate relaxation method that exploits the problem structure. By comparing with kernel, spline and taut string estimators on a Monte Carlo simulation, and by investigating the sensitivity to ties on two real data sets, we observe that the new estimator achieves good L1 and L2 risk for densities with sharp features, and behaves well with ties.
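For illustration only, the discretized estimator can be posed as an ℓ1‐penalized convex programme and handed to a generic convex solver. The sketch below is an assumption‐laden approximation, not the paper's method: it bins the data on an equispaced grid, uses a placeholder penalty parameter lambda instead of the universal‐penalty information criterion derived by the authors, and solves with cvxpy rather than their dual block coordinate relaxation algorithm.

```python
# Hedged sketch: total-variation-penalized likelihood density estimate on a grid.
# The grid size, binning, lambda value, and solver are illustrative assumptions,
# not taken from Sardy & Tseng (2010).
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
x = rng.standard_normal(200)                    # example data

grid = np.linspace(x.min(), x.max(), 100)
width = grid[1] - grid[0]

# Count observations falling in each grid cell.
idx = np.clip(np.searchsorted(grid, x), 0, len(grid) - 1)
counts = np.bincount(idx, minlength=len(grid))

f = cp.Variable(len(grid), nonneg=True)         # density values on the grid
lam = 1.0                                       # placeholder; the paper selects it via an l1 information criterion

log_likelihood = counts @ cp.log(f)             # binned non-parametric log-likelihood
tv_penalty = cp.sum(cp.abs(cp.diff(f)))         # discrete total variation = l1 norm of first differences

problem = cp.Problem(cp.Maximize(log_likelihood - lam * tv_penalty),
                     [cp.sum(f) * width == 1])  # density integrates to one
problem.solve()

density_estimate = f.value
```

Because the log-likelihood is strictly concave in the positive grid values and the total variation term is convex, the programme has a unique maximizer, which mirrors the uniqueness claim in the abstract; the solver used here is a stand-in for the problem-structure-exploiting method the authors develop.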
