Ensembled sparse‐input hierarchical networks for high‐dimensional datasets
Author(s) -
Feng, Jean,
Simon, Noah
Publication year - 2022
Publication title -
Statistical Analysis and Data Mining: The ASA Data Science Journal
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.381
H-Index - 33
eISSN - 1932-1872
pISSN - 1932-1864
DOI - 10.1002/sam.11579
Subject(s) - hyperparameter, computer science, covariate, artificial neural network, artificial intelligence, elastic net regularization, model selection, machine learning, data mining, pattern recognition (psychology), feature selection
In high-dimensional datasets where the number of covariates far exceeds the number of observations, the most popular prediction methods make strong modeling assumptions. Unfortunately, these methods struggle to scale up in model complexity as the number of observations grows. To this end, we consider using neural networks because they span a wide range of model capacities, from sparse linear models to deep neural networks. Because neural networks are notoriously tedious to tune and train, our aim is to develop a convenient procedure that employs a minimal number of hyperparameters. Our method, Ensemble by Averaging Sparse-Input hiERarchical networks (EASIER-net), employs only two L1-penalty parameters, one that controls the input sparsity and another for the number of hidden layers and nodes. EASIER-net selects the true support with high probability when there is sufficient evidence; otherwise, it performs variable selection with uncertainty quantification, where strongly correlated covariates are selected at similar rates. On a large collection of gene expression datasets, EASIER-net achieved higher classification accuracy and selected fewer genes than existing methods. We found that EASIER-net adaptively selected the model complexity: it fit deep networks when there was sufficient information to learn nonlinearities and interactions and fit sparse logistic models for smaller datasets with less information.
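The two-penalty structure described in the abstract can be sketched in a few lines. The following is a minimal, illustrative Python (PyTorch) sketch, not the authors' released EASIER-net implementation: a feed-forward classifier whose first-layer weights receive one L1 penalty (driving input sparsity) and whose remaining weights receive a second L1 penalty (shrinking unneeded nodes and layers), with predictions averaged over an ensemble of fits. All names here (SparseInputNet, penalized_loss, lam_input, lam_hidden) are hypothetical.

    import torch
    import torch.nn as nn

    class SparseInputNet(nn.Module):
        # Simple feed-forward network; the first Linear layer maps the
        # high-dimensional covariates into the hidden layers.
        def __init__(self, n_inputs, n_hidden=32, n_layers=3, n_classes=2):
            super().__init__()
            layers, d = [], n_inputs
            for _ in range(n_layers):
                layers += [nn.Linear(d, n_hidden), nn.ReLU()]
                d = n_hidden
            self.hidden = nn.Sequential(*layers)
            self.out = nn.Linear(d, n_classes)

        def forward(self, x):
            return self.out(self.hidden(x))

    def penalized_loss(model, x, y, lam_input=1e-2, lam_hidden=1e-3):
        # Cross-entropy fit plus the two L1 penalties: lam_input drives
        # first-layer weights of irrelevant covariates to zero, while
        # lam_hidden shrinks deeper weights, pruning unneeded nodes/layers.
        fit = nn.functional.cross_entropy(model(x), y)
        linears = [m for m in model.hidden if isinstance(m, nn.Linear)]
        input_pen = linears[0].weight.abs().sum()
        hidden_pen = sum(l.weight.abs().sum() for l in linears[1:])
        hidden_pen = hidden_pen + model.out.weight.abs().sum()
        return fit + lam_input * input_pen + lam_hidden * hidden_pen

    def ensemble_predict(models, x):
        # "Ensemble by averaging": average predicted class probabilities
        # over networks fit from independent random initializations.
        probs = [torch.softmax(m(x), dim=1) for m in models]
        return torch.stack(probs).mean(dim=0)

In the actual method, the two penalty parameters would be tuned (e.g., over a small grid), and the selected covariates read off as those with nonzero fitted input weights; this sketch only illustrates how two L1 parameters can separately control input sparsity and network size.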