Deconvolution methods for non‐parametric inference in two‐level mixed models
Author(s) - Hall Peter, Maiti Tapabrata
Publication year - 2009
Publication title - Journal of the Royal Statistical Society: Series B (Statistical Methodology)
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 6.523
H-Index - 137
eISSN - 1467-9868
pISSN - 1369-7412
DOI - 10.1111/j.1467-9868.2009.00705.x
Subject(s) - deconvolution, smoothing, statistical inference, mathematics, parametric statistics, inference, statistics, consistency (knowledge bases), algorithm, computer science, artificial intelligence, geometry
Summary. We develop a general non‐parametric approach to the analysis of clustered data via random effects. Assuming only that the link function is known, the regression functions and the distributions of both cluster means and observation errors are treated non‐parametrically. Our argument proceeds by viewing the observation error at the cluster mean level as though it were a measurement error in an errors‐in‐variables problem, and using a deconvolution argument to access the distribution of the cluster mean. A Fourier deconvolution approach could be used if the distribution of the error in variables were known. In practice it is unknown, of course, but it can be estimated from repeated measurements, and in this way deconvolution can be achieved in an approximate sense. This argument might be interpreted as implying that large numbers of replicates are necessary for each cluster mean distribution, but that is not so; we avoid this requirement by incorporating statistical smoothing over values of nearby explanatory variables. Empirical rules are developed for the choice of the smoothing parameter. Numerical simulations, and an application to real data, demonstrate the small-sample performance of this package of methodology. We also develop theory establishing statistical consistency.
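The Fourier deconvolution step mentioned in the summary — recovering the density of the cluster means from noisy observations when the error distribution is known — can be sketched as follows. This is a minimal illustrative implementation, not the authors' estimator: it assumes Gaussian measurement error with known standard deviation `sigma_eps` (in the paper this distribution is instead estimated from replicates), and it uses a plain frequency cutoff `1/h` in place of the paper's data-driven smoothing rules.

```python
import numpy as np

def deconv_density(y, x_grid, sigma_eps, h, n_t=512):
    """Deconvolution density estimate for X, given noisy data Y = X + eps.

    Divides the empirical characteristic function of Y by the (assumed
    known, Gaussian) characteristic function of eps, then inverts the
    Fourier transform on x_grid. The bandwidth h truncates frequencies
    at |t| <= 1/h, which regularises the ill-posed division.
    """
    # Frequency grid restricted to |t| <= 1/h (sinc-kernel-style cutoff).
    t = np.linspace(-1.0 / h, 1.0 / h, n_t)
    dt = t[1] - t[0]
    # Empirical characteristic function of the noisy observations.
    ecf = np.exp(1j * np.outer(t, np.asarray(y))).mean(axis=1)
    # Characteristic function of the Gaussian measurement error.
    cf_eps = np.exp(-0.5 * (sigma_eps * t) ** 2)
    # "Divide out" the error: estimate of the characteristic function of X.
    integrand = ecf / cf_eps
    # Fourier inversion (Riemann sum over t); keep the real part and
    # clip small negative ripples caused by the frequency truncation.
    f = (np.exp(-1j * np.outer(x_grid, t)) @ integrand).real * dt / (2 * np.pi)
    return np.clip(f, 0.0, None)
```

If `h` is taken too small the division by `cf_eps` amplifies sampling noise in the tails of the frequency grid; this is exactly why the paper devotes attention to empirical rules for the smoothing parameter.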
