Lower bounds on Bayes risks for estimating a normal variance: With applications
Author(s) -
Brani Vidakovic,
Anirban DasGupta
Publication year - 1995
Publication title -
Canadian Journal of Statistics
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.804
H-Index - 51
eISSN - 1708-945X
pISSN - 0319-5724
DOI - 10.2307/3315367
Subject(s) - mathematics , statistics , bayes' theorem , bayesian probability , variance , upper and lower bounds , convexity , quadratic loss , mathematical analysis , geometry
Brown and Gajek (1990) gave useful lower bounds on Bayes risks, which improve on earlier bounds by various authors. Many of these use the information inequality. For estimating a normal variance under the invariant quadratic loss, with an arbitrary prior on the reciprocal of the variance that is a mixture of Gamma distributions, we obtain lower bounds on Bayes risks that differ from the Borovkov‐Sakhanienko bounds. The main tool is convexity of appropriate functionals, as opposed to the information inequality. The bounds are then applied to many specific examples, including the multi‐Bayesian setup (Zidek and coauthors). Subsequent use of moment theory and geometry gives a number of new results on the efficiency of estimates which are linear in the sufficient statistic. These results complement earlier results of Donoho, Liu and MacGibbon (1990), Johnstone and MacGibbon (1992) and Vidakovic and DasGupta (1994) for the location case.
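The abstract's setting can be illustrated with a small numerical sketch, under assumptions not stated in the record: we take the known-mean case, X_i ~ N(0, sigma^2) with sufficient statistic S = sum of X_i^2, and the scale-invariant quadratic loss L(sigma^2, d) = (d/sigma^2 - 1)^2 (a standard choice of invariant loss for a variance). For a linear estimator delta(S) = c*S, the quantity S/sigma^2 is chi-square with n degrees of freedom regardless of sigma^2, so the risk E[(c*W - 1)^2], W ~ chi^2_n, is free of the scale; it is minimized at c = 1/(n+2) with value 2/(n+2). The Monte Carlo below checks this at two different scales; the estimator names and parameters are illustrative, not from the paper.

```python
import random

def risk_of_linear_estimator(n, c, sigma2, reps=200_000, seed=0):
    """Monte Carlo risk of delta(S) = c*S under the invariant
    quadratic loss (d/sigma^2 - 1)^2, for X_i ~ N(0, sigma^2)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(reps):
        # S = sum of squares of n mean-zero normals with variance sigma2
        s = sum(rng.gauss(0.0, sigma2 ** 0.5) ** 2 for _ in range(n))
        d = c * s
        total += (d / sigma2 - 1.0) ** 2
    return total / reps

n = 10
c = 1.0 / (n + 2)          # risk-minimizing coefficient for a linear rule
theory = 2.0 / (n + 2)     # closed-form risk E[(c*W - 1)^2], W ~ chi^2_n

est1 = risk_of_linear_estimator(n, c, sigma2=1.0)
est2 = risk_of_linear_estimator(n, c, sigma2=4.0)  # same risk at another scale
```

Because this risk is constant in sigma^2, the Bayes risk of the best linear rule equals 2/(n+2) for every prior in the Gamma-mixture class, which is the baseline against which the paper's lower bounds measure the efficiency of estimates linear in the sufficient statistic.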