
How Principled and Practical Are Penalised Complexity Priors?
Author(s) -
Christian P. Robert,
Judith Rousseau
Publication year - 2017
Publication title -
Statistical Science
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 2.204
H-Index - 108
eISSN - 2168-8745
pISSN - 0883-4237
DOI - 10.1214/16-STS603
Subject(s) - prior probability , overfitting , Bayesian probability , machine learning , mathematics , statistics
This note discusses the paper "Penalising model component complexity" by Simpson et al. (2017). While we acknowledge the highly novel approach to prior construction and commend the authors for setting encompassing principles intended to guide Bayesian modelling, and while we perceive the potential connection with other branches of the literature, we remain uncertain as to what extent the principles exposed in the paper can be developed outside specific models, given their lack of precision. The very notions of model component, base model and overfitting prior are, for instance, conceptual rather than mathematical, and we therefore fear that the concept of penalised complexity may not go further than extending first-guess priors into larger families, thus failing to establish reference priors on a novel, sound ground.