Open Access
Making Sense of Model Generalizability: A Tutorial on Cross-Validation in R and Shiny
Author(s) - Q. Chelsea Song, Chen Tang, Serena Wee
Publication year - 2021
Publication title - Advances in Methods and Practices in Psychological Science
Language(s) - English
Resource type - Journals
eISSN - 2515-2467
pISSN - 2515-2459
DOI - 10.1177/2515245920947067
Subject(s) - generalizability theory, overfitting, model validation, cross-validation, statistical model, sample size determination, statistics, psychology, data science, computer science
Model generalizability describes how well the findings from a sample are applicable to other samples in the population. In this Tutorial, we explain model generalizability through the statistical concept of model overfitting and its outcome (i.e., validity shrinkage in new samples), and we use a Shiny app to simulate and visualize how model generalizability is influenced by three factors: model complexity, sample size, and effect size. We then discuss cross-validation as an approach for evaluating model generalizability and provide guidelines for implementing this approach. To help researchers understand how to apply cross-validation to their own research, we walk through an example, accompanied by step-by-step illustrations in R. This Tutorial is expected to help readers develop the basic knowledge and skills to use cross-validation to evaluate model generalizability in their research and practice.
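The cross-validation approach the abstract describes can be sketched in a few lines of base R. The snippet below is a minimal illustration of k-fold cross-validation for a linear model, not the Tutorial's own code: the simulated data, the choice of k = 5, and the use of cross-validated R² as the generalizability criterion are all assumptions made for this example.

```r
# Minimal k-fold cross-validation sketch (illustrative, not the Tutorial's code).
set.seed(123)
n   <- 100
dat <- data.frame(x = rnorm(n))
dat$y <- 0.5 * dat$x + rnorm(n)          # simulated outcome with a known effect

k     <- 5
folds <- sample(rep(1:k, length.out = n)) # randomly assign each row to a fold

cv_r2 <- sapply(1:k, function(i) {
  train <- dat[folds != i, ]              # fit on k - 1 folds
  test  <- dat[folds == i, ]              # evaluate on the held-out fold
  fit   <- lm(y ~ x, data = train)
  pred  <- predict(fit, newdata = test)
  cor(pred, test$y)^2                     # cross-validated R-squared for this fold
})

mean(cv_r2)  # average out-of-sample R-squared across the k folds
```

Comparing `mean(cv_r2)` with the in-sample R² from fitting the model on the full data gives a direct view of validity shrinkage: the larger the gap, the more the model has overfit the sample.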
