Simple rules to guide expert classifications
Author(s) -
Jung Jongbin,
Concannon Connor,
Shroff Ravi,
Goel Sharad,
Goldstein Daniel G.
Publication year - 2020
Publication title - Journal of the Royal Statistical Society: Series A (Statistics in Society)
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.103
H-Index - 84
eISSN - 1467-985X
pISSN - 0964-1998
DOI - 10.1111/rssa.12576
Subject(s) - machine learning , data science , operations research , management science , artificial intelligence , computer science , mathematics
Summary - Judges, doctors and managers are among the decision makers who must often choose a course of action under limited time, with limited knowledge and without the aid of a computer. Because data-driven methods typically outperform unaided judgements, resource-constrained practitioners can benefit from simple, statistically derived rules that can be applied mentally. In this work, we formalize long-standing observations about the efficacy of improper linear models to construct accurate yet easily applied rules. To test the performance of this approach, we conduct a large-scale evaluation in 22 domains and focus in detail on one: judicial decisions to release or detain defendants while they await trial. In these domains, we find that simple rules rival the accuracy of complex prediction models that base decisions on considerably more information. Further, compared with unaided judicial decisions, we find that simple rules substantially outperform human experts. To conclude, we present an analytical framework that sheds light on why simple rules perform as well as they do.
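The rules described in the summary take the form of improper linear models: a handful of features, each assigned a small integer weight, combined by mental arithmetic. As an illustrative sketch only (the function names, parameters and selection procedure below are assumptions for illustration, not the authors' code), one way to derive such a rule is to fit a penalized regression, keep the few largest coefficients, and round them to small integers:

```python
# Sketch of deriving a simple, mentally applicable rule from data.
# Hypothetical helper names; assumes a binary outcome y and feature matrix X.
import numpy as np
from sklearn.linear_model import LogisticRegression

def simple_rule_weights(X, y, n_features=5, weight_range=3):
    """Fit an L1-penalized logistic regression, keep the n_features largest
    coefficients, and round them to integers in [-weight_range, weight_range]."""
    model = LogisticRegression(penalty="l1", solver="liblinear").fit(X, y)
    coefs = model.coef_.ravel()
    # Keep only the most predictive variables; zero out the rest.
    keep = np.argsort(np.abs(coefs))[::-1][:n_features]
    selected = np.zeros_like(coefs)
    selected[keep] = coefs[keep]
    top = np.abs(selected).max()
    if top == 0:  # degenerate case: nothing predictive survived the penalty
        return np.zeros_like(coefs, dtype=int)
    # Rescale so the largest coefficient maps to weight_range, then round.
    return np.round(selected * (weight_range / top)).astype(int)

def simple_rule_score(x, weights):
    """Score one case as a small-integer weighted sum, easy to compute mentally."""
    return int(np.dot(x, weights))
```

On this reading, a practitioner applies the rule by adding and subtracting the small integer weights across the selected features and comparing the total against a threshold; in the pretrial setting studied in the paper, this would correspond to releasing defendants whose score falls below a chosen cut-off.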