Smoothing Observational Data: A Philosophy and Implementation for the Health Sciences
Author(s) - Sander Greenland
Publication year - 2006
Publication title - International Statistical Review
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.051
H-Index - 54
eISSN - 1751-5823
pISSN - 0306-7734
DOI - 10.1111/j.1751-5823.2006.tb00159.x
Subject(s) - smoothing , observational study , computer science , data mining , data science , statistics , mathematics , computer vision
Summary - Standard statistical methods (such as regression analysis) presume the data are generated by an identifiable random process, and attempt to model that process in a parsimonious fashion. In contrast, observational data in the health sciences are generated by complex, nonidentified, and largely nonrandom mechanisms, and are analyzed to form inferences on latent structures. Despite this gap between the methods and reality, most observational data analysis comprises application of standard methods, followed by narrative discussion of the problems entailed by doing so. Alternative approaches employ latent‐structure models that include components for nonidentified mechanisms. Standard methods can still be useful, however, provided their modeling philosophy is modified to encourage preservation of structure, rather than achieving parsimonious description. With this modification they can be viewed as smoothing or filtering methods for separating noise from signal before the task of latent‐structure modeling begins. I here give a detailed justification of this view, and a hierarchical‐modeling implementation that can be carried out with popular software. Concepts are illustrated in the smoothing of a contingency table from an analysis of magnetic fields and childhood leukemia.
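The sketch below illustrates the general idea of hierarchical smoothing of a contingency table with standard software, in the spirit described in the summary; it is not the paper's exact implementation. A Poisson log-linear model is fitted in which row and column main effects are unpenalized while the interaction (association) terms receive a normal prior, implemented as a ridge-type penalty, shrinking cell counts toward a simpler structure. The example table and the prior standard deviation `tau` are hypothetical.

```python
# A minimal sketch (assumed details, not the paper's implementation):
# hierarchical-style smoothing of a two-way contingency table via a
# penalized Poisson log-linear model.
import numpy as np
from scipy.optimize import minimize

# Hypothetical exposure (rows) x outcome (columns) cell counts.
counts = np.array([[36.0, 4.0],
                   [12.0, 5.0],
                   [ 5.0, 3.0]])
n_rows, n_cols = counts.shape
y = counts.ravel()

# Build the design matrix: intercept, row and column main effects,
# and row-by-column interaction terms (first level as reference).
rows, cols = np.meshgrid(np.arange(n_rows), np.arange(n_cols), indexing="ij")
rows, cols = rows.ravel(), cols.ravel()

def dummies(idx, n_levels):
    return np.column_stack([(idx == k).astype(float) for k in range(1, n_levels)])

X_main = np.column_stack([np.ones_like(y), dummies(rows, n_rows), dummies(cols, n_cols)])
X_int = np.column_stack([dummies(rows, n_rows)[:, i] * dummies(cols, n_cols)[:, j]
                         for i in range(n_rows - 1) for j in range(n_cols - 1)])
X = np.column_stack([X_main, X_int])
n_main = X_main.shape[1]

tau = 0.5  # assumed prior SD for the interaction (log-association) parameters

def penalized_neg_loglik(beta):
    eta = X @ beta
    loglik = np.sum(y * eta - np.exp(eta))                 # Poisson log-likelihood (up to a constant)
    penalty = np.sum(beta[n_main:] ** 2) / (2 * tau ** 2)  # normal prior on interactions only
    return -(loglik - penalty)

beta_hat = minimize(penalized_neg_loglik, np.zeros(X.shape[1]), method="BFGS").x
smoothed = np.exp(X @ beta_hat).reshape(n_rows, n_cols)
print("Observed counts:\n", counts)
print("Smoothed counts:\n", np.round(smoothed, 2))
```

Because the main effects are unpenalized while the interaction terms are shrunk toward zero, the fitted counts are pulled partway toward the independence fit; the degree of pulling is controlled by `tau`, which in a fuller hierarchical analysis would itself be estimated or chosen from prior information.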
