The Consistency of Estimators in Finite Mixture Models
Author(s) - Cheng R. C. H., Liu W. B.
Publication year - 2001
Publication title - Scandinavian Journal of Statistics
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.359
H-Index - 65
eISSN - 1467-9469
pISSN - 0303-6898
DOI - 10.1111/1467-9469.00257
Subject(s) - mathematics, estimator, consistency (knowledge bases), test statistic, statistic, parameter space, statistics, distribution (mathematics), statistical hypothesis testing, mathematical analysis, discrete mathematics
The parameters of a finite mixture model cannot be consistently estimated when the data come from an embedded distribution with fewer components than the model being fitted, because the embedded distribution is represented by a subset of the parameter space rather than by a single point. Feng & McCulloch (1996) give conditions, not easily verified, under which the maximum likelihood (ML) estimator converges to an arbitrary point in this subset. We show that these conditions can be considerably weakened. Even though embedded distributions may not be uniquely represented in the parameter space, estimators of quantities of interest, such as the mean or variance of the distribution, may nevertheless be consistent in the conventional sense. We give an example of some practical interest in which the ML estimators are √n‐consistent. Similarly, consistent statistics can usually be found to test a simpler model against the full model. We suggest a test statistic suitable for a general class of models and propose a parametric bootstrap test, based on this statistic, of whether the simpler model is correct.
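To illustrate the setting described in the abstract (this construction is not taken from the paper itself): in a two-component normal mixture p N(μ1, σ²) + (1 − p) N(μ2, σ²), the single normal N(μ, σ²) is embedded as the whole subset of the parameter space where μ1 = μ2 = μ or p ∈ {0, 1}, so no unique parameter point corresponds to the simpler model. The sketch below shows one way a parametric bootstrap likelihood-ratio test of the one-component model against the two-component mixture could be carried out; the EM fitting routine, starting values, iteration count and number of bootstrap replicates are illustrative assumptions, not the specific statistic proposed in the paper.

```python
# Minimal sketch (assumed setup, not the authors' procedure): parametric
# bootstrap likelihood-ratio test of one normal component vs a two-component
# normal mixture with common variance.
import numpy as np

rng = np.random.default_rng(0)

def normal_logpdf(x, mu, sigma):
    return -0.5 * np.log(2 * np.pi * sigma**2) - (x - mu)**2 / (2 * sigma**2)

def fit_one_component(x):
    """ML fit of a single normal; returns (log-likelihood, (mu, sigma))."""
    mu, sigma = x.mean(), x.std()
    return normal_logpdf(x, mu, sigma).sum(), (mu, sigma)

def fit_two_component(x, n_iter=200):
    """EM fit of a two-component normal mixture with common variance."""
    # Crude starting values: split the sample around its quartiles.
    p, mu1, mu2, sigma = 0.5, np.quantile(x, 0.25), np.quantile(x, 0.75), x.std()
    for _ in range(n_iter):
        # E-step: posterior probability of component 1 for each observation.
        l1 = np.log(p) + normal_logpdf(x, mu1, sigma)
        l2 = np.log(1 - p) + normal_logpdf(x, mu2, sigma)
        m = np.maximum(l1, l2)
        w = np.exp(l1 - m) / (np.exp(l1 - m) + np.exp(l2 - m))
        # M-step: weighted updates of the mixture parameters.
        p = w.mean()
        mu1 = np.sum(w * x) / np.sum(w)
        mu2 = np.sum((1 - w) * x) / np.sum(1 - w)
        sigma = np.sqrt(np.sum(w * (x - mu1)**2 + (1 - w) * (x - mu2)**2) / len(x))
    loglik = np.sum(np.logaddexp(np.log(p) + normal_logpdf(x, mu1, sigma),
                                 np.log(1 - p) + normal_logpdf(x, mu2, sigma)))
    return loglik, (p, mu1, mu2, sigma)

def lr_statistic(x):
    """Twice the log-likelihood gain of the mixture over the single normal."""
    ll0, _ = fit_one_component(x)
    ll1, _ = fit_two_component(x)
    return 2 * (ll1 - ll0)

def bootstrap_pvalue(x, B=199):
    """Parametric bootstrap: simulate from the fitted null and refit both models."""
    t_obs = lr_statistic(x)
    _, (mu0, sigma0) = fit_one_component(x)
    t_boot = np.array([lr_statistic(rng.normal(mu0, sigma0, size=len(x)))
                       for _ in range(B)])
    return t_obs, (1 + np.sum(t_boot >= t_obs)) / (B + 1)

# Example: data generated from the simpler (one-component) model.
x = rng.normal(0.0, 1.0, size=300)
t_obs, pval = bootstrap_pvalue(x)
print(f"LR statistic = {t_obs:.3f}, bootstrap p-value = {pval:.3f}")
```

Because the simpler model sits on a non-identifiable subset of the mixture's parameter space, the usual chi-squared asymptotics for the likelihood-ratio statistic do not apply; the sketch therefore takes its reference distribution from data simulated under the fitted simpler model rather than from a standard table.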