IMPROVING UPON THE BEST INVARIANT ESTIMATOR IN MULTIVARIATE LOCATION PROBLEMS
Author(s) - Puri Madan L., Ralescu Dan A.
Publication year - 1983
Publication title - Australian Journal of Statistics
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.434
H-Index - 41
eISSN - 1467-842X
pISSN - 0004-9581
DOI - 10.1111/j.1467-842x.1983.tb01217.x
Subject(s) - estimator, mathematics, invariant estimator, minimax estimator, invariant (physics), mean squared error, efficient estimator, convolution (computer science), dimension (graph theory), minimum variance unbiased estimator, statistics, mathematical optimization, combinatorics, computer science, artificial intelligence, artificial neural network, mathematical physics
Summary - We are concerned with estimators which improve upon the best invariant estimator in estimating a location parameter θ. If the loss function is L(θ − a) with L convex, we give sufficient conditions for the inadmissibility of δ₀(X) = X. If the loss is a weighted sum of squared errors, we find various classes of estimators δ which are better than δ₀. In general, δ is the convolution of δ₁ (an estimator which improves upon δ₀ outside of a compact set) with a suitable probability density in R^p. The critical dimension of inadmissibility depends on the estimator δ₁. We also give several examples of estimators δ obtained in this way and state some open problems.
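As a rough sketch of the construction named in the summary (the weights w_i and the smoothing density φ are illustrative notation assumed here, not quantities fixed by the record):

% Weighted sum-of-squared-errors loss for a location parameter θ in R^p
% (the weights w_i > 0 are assumed for illustration).
\[
  L(\theta - a) \;=\; \sum_{i=1}^{p} w_i (\theta_i - a_i)^2 , \qquad w_i > 0 .
\]
% Convolution construction: δ₁ improves on δ₀(X) = X outside a compact set
% and is smoothed componentwise by a probability density φ on R^p.
\[
  \delta(x) \;=\; (\delta_1 * \varphi)(x) \;=\; \int_{\mathbb{R}^p} \delta_1(x - t)\, \varphi(t)\, dt .
\]

Under this reading, the dimension p above which such a δ can dominate δ₀ depends on the particular δ₁ chosen, as stated in the summary.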