Standardized Visual Predictive Check Versus Visual Predictive Check for Model Evaluation
Author(s) - Wang Diane D., Zhang Shuzhong
Publication year - 2012
Publication title - The Journal of Clinical Pharmacology
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.92
H-Index - 116
eISSN - 1552-4604
pISSN - 0091-2700
DOI - 10.1177/0091270010390040
Subject(s) - covariate , percentile , computer science , statistics , data mining , mathematics , artificial intelligence , machine learning
The visual predictive check (VPC) is a commonly used approach in model evaluation. However, it may not be feasible to conduct a VPC, or the results of a VPC could be misleading in certain situations. The objectives of the present study were to (1) examine the performance and applicability of the VPC and (2) propose the standardized visual predictive check (SVPC) as an alternative/complementary approach to the VPC. The difference between the SVPC and the normalized prediction distribution error (npde) as visual tools for model evaluation is also discussed. The results of the simulation studies demonstrate that the VPC is not appropriate when stratification of covariate(s) in a model is difficult or arbitrary, and it may not be feasible when the study design varies over the course of a study or among participants. The SVPC addresses these issues by displaying the percentiles (P_{i,j}) of each participant's observations within the marginal distribution of the corresponding model-simulated endpoints as a function of time (or any covariate of interest), based on that participant's own design template. Because the calculation of P_{i,j} factors out subject-specific design features, any difference between observed and simulated values is caused only by misspecification of the structural model and/or inadequate estimation of the random effects. Thus, the SVPC can be applied in situations where the VPC is infeasible or misleading.
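To illustrate the kind of calculation the abstract describes, the sketch below computes a percentile P_{i,j} for each observation as its rank within the distribution of model-simulated values generated under that subject's own design. This is a minimal, assumed implementation (the function name, array layout, and mid-rank handling of ties are my own choices, not taken from the paper); under an adequate model, the resulting percentiles should be approximately uniform on (0, 1) when plotted against time or a covariate.

```python
import numpy as np

def svpc_percentiles(observed, simulated):
    """
    Compute SVPC-style percentiles P_{i,j} for one subject.

    observed  : array of shape (n_obs,)
                the subject's observed endpoint values
    simulated : array of shape (n_reps, n_obs)
                model-simulated replicates for the same subject,
                generated with that subject's own doses, sampling
                times, and covariates
    """
    n_reps = simulated.shape[0]
    # Count simulated values strictly below each observation
    below = (simulated < observed[None, :]).sum(axis=0)
    # Mid-rank adjustment for ties so percentiles are not pushed to 0 or 1
    ties = (simulated == observed[None, :]).sum(axis=0)
    return (below + 0.5 * ties) / n_reps

# Example usage with toy data (hypothetical values, not from the study):
# obs = np.array([1.2, 0.8, 0.4])
# sims = np.random.lognormal(size=(1000, 3))
# p_ij = svpc_percentiles(obs, sims)  # plot p_ij versus time for the SVPC
```

Plotting P_{i,j} versus time for all subjects, together with reference lines at selected percentiles, gives the SVPC display; systematic departures from uniformity point to structural model misspecification or poorly estimated random effects.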
