Stable Asymptotics for M-estimators
Author(s) - Davide La Vecchia
Publication year - 2016
Publication title - International Statistical Review
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.051
H-Index - 54
eISSN - 1751-5823
pISSN - 0306-7734
DOI - 10.1111/insr.12102
Subject(s) - mathematics, estimator, bounded function, gaussian, matrix norm, edgeworth series, inference, euclidean geometry, stability (learning theory), statistics, mathematical analysis, computer science, physics, eigenvalues and eigenvectors, geometry, quantum mechanics, artificial intelligence, machine learning
Summary - We review some first-order and higher-order asymptotic techniques for M-estimators, and we study their stability in the presence of data contamination. We show that the estimating function (ψ) and its derivative with respect to the parameter (∇θ⊤ψ) play a central role. We discuss in detail the first-order Gaussian density approximation, the saddlepoint density approximation, the saddlepoint test, the tail-area approximation via the Lugannani–Rice formula and the empirical saddlepoint density approximation (a technique related to the empirical likelihood method). For all these asymptotics, we show that a bounded ψ (in the Euclidean norm) and a bounded ∇θ⊤ψ (e.g. in the Frobenius norm) yield stable inference in the presence of data contamination. We motivate and illustrate our findings with theoretical and numerical examples for the benchmark case of the one-dimensional location model.
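The stability property emphasised in the summary (a bounded ψ limits the influence of contaminated observations on the estimator) can be illustrated with a small numerical sketch for the one-dimensional location model. This sketch is not taken from the paper: the Huber ψ, the tuning constant k = 1.345 and the MAD scale estimate are standard illustrative choices, used here only as assumptions.

```python
import numpy as np

def psi_huber(r, k=1.345):
    """Huber estimating function: bounded psi (residual clipped at +/- k)."""
    return np.clip(r, -k, k)

def m_estimate_location(x, k=1.345, tol=1e-8, max_iter=200):
    """Location M-estimate solving sum_i psi((x_i - theta)/s) = 0
    by fixed-point iteration, with a robust MAD scale held fixed."""
    s = 1.4826 * np.median(np.abs(x - np.median(x)))  # MAD scale
    theta = np.median(x)                              # robust start
    for _ in range(max_iter):
        step = s * np.mean(psi_huber((x - theta) / s, k))
        theta += step
        if abs(step) < tol:
            break
    return theta

rng = np.random.default_rng(0)
clean = rng.normal(0.0, 1.0, 200)
x_contam = np.concatenate([clean, np.full(10, 50.0)])  # 10 gross outliers

# The sample mean corresponds to the unbounded psi(r) = r and is dragged
# towards the outliers; the Huber M-estimator, with bounded psi, is not.
print("mean:", np.mean(x_contam))
print("Huber M-estimate:", m_estimate_location(x_contam))
```

Contrasting the two printed values makes the point of the review concrete: contamination shifts the mean by roughly the outlier fraction times the outlier size, while the bounded-ψ estimate stays close to the true location.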
