Open Access
Invited Commentary: G-Computation – Lost in Translation?
Author(s) - Stijn Vansteelandt, Niels Keiding
Publication year - 2011
Publication title - American Journal of Epidemiology
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 2.33
H-Index - 256
eISSN - 1476-6256
pISSN - 0002-9262
DOI - 10.1093/aje/kwq474
Subject(s) - standardization, computation, weighting, causal inference, inference, point estimation, population, mathematics, medicine, econometrics, statistics
In this issue of the Journal, Snowden et al. (Am J Epidemiol. 2011;173(7):731-738) give a didactic explanation of G-computation as an approach for estimating the causal effect of a point exposure. The authors of the present commentary reinforce the idea that this use of G-computation is equivalent to a particular form of model-based standardization, in which reference is made to the observed study population, a technique that epidemiologists have been applying for several decades. They comment on the use of standardized versus conditional effect measures and on the relative predominance of the inverse-probability-of-treatment weighting approach as opposed to G-computation. They further propose a compromise approach, doubly robust standardization, which combines the benefits of both of these causal inference techniques and is no more difficult to implement.
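The two estimators contrasted in the abstract can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: it simulates a single confounder, fits an outcome regression by least squares for G-computation (standardizing predictions over the observed study population), and then forms a doubly robust estimate by augmenting those predictions with inverse-probability-weighted residuals. The data-generating model, the linear outcome regression, and the use of the true propensity score are all assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20_000

# Simulated point-exposure data: confounder L, binary exposure A, outcome Y.
L = rng.normal(size=n)
p_A = 1 / (1 + np.exp(-0.5 * L))            # true propensity score (assumed known here)
A = rng.binomial(1, p_A)
Y = 1.0 * A + 0.8 * L + rng.normal(size=n)  # true marginal causal effect of A is 1.0

# --- G-computation (model-based standardization) ---
# Fit an outcome regression E[Y | A, L] by ordinary least squares.
X = np.column_stack([np.ones(n), A, L])
beta, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Predict for every subject with exposure set to 1 and to 0, then average
# over the observed distribution of L (the observed study population).
X1 = np.column_stack([np.ones(n), np.ones(n), L])
X0 = np.column_stack([np.ones(n), np.zeros(n), L])
mu1, mu0 = X1 @ beta, X0 @ beta
gcomp = mu1.mean() - mu0.mean()

# --- Doubly robust standardization ---
# Augment the regression predictions with inverse-probability-weighted
# residuals; the result is consistent if either the outcome model or the
# propensity model is correctly specified. In practice the propensity
# score would itself be estimated, e.g. by logistic regression.
dr1 = (A / p_A * (Y - mu1) + mu1).mean()
dr0 = ((1 - A) / (1 - p_A) * (Y - mu0) + mu0).mean()
dr = dr1 - dr0

print(round(gcomp, 2), round(dr, 2))  # both should be close to 1.0
```

Because the outcome model is correctly specified in this simulation, both estimates recover the true effect; the doubly robust version would also do so if only the propensity model were correct.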
