Open Access
Evaluating Dropout Placements in Bayesian Regression Resnet
Author(s) -
Lei Shi,
Cosmin Copot,
Steve Vanlanduit
Publication year - 2021
Publication title -
Journal of Artificial Intelligence and Soft Computing Research
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.691
H-Index - 16
eISSN - 2449-6499
pISSN - 2083-2567
DOI - 10.2478/jaiscr-2022-0005
Subject(s) - dropout (neural networks) , computer science , artificial intelligence , artificial neural network , bayesian probability , inference , regression , machine learning , statistics , mathematics
Deep Neural Networks (DNNs) have shown great success in many fields, and various network architectures have been developed for different applications. Regardless of their complexity, however, standard DNNs do not provide model uncertainty. Bayesian Neural Networks (BNNs), on the other hand, are able to make probabilistic inferences. Among the various types of BNNs, Dropout as a Bayesian Approximation converts a Neural Network (NN) into a BNN by adding a dropout layer after each weight layer in the NN, providing a simple transformation from an NN to a BNN. For DNNs, however, adding a dropout layer after every weight layer leads to strong regularization because of the deep architecture. Previous studies [1, 2, 3] have shown that adding a dropout layer after each weight layer in a DNN is unnecessary, but how to place dropout layers in a ResNet for regression tasks is less explored. In this work, we perform an empirical study of how different dropout placements affect the performance of a Bayesian DNN. We use a regression model modified from ResNet as the DNN and place dropout layers at different positions in the regression ResNet. Our experimental results show that it is not necessary to add a dropout layer after every weight layer in the regression ResNet for it to make Bayesian inference. Placing dropout layers between the stacked blocks, i.e., Dense+Identity+Identity blocks, gives the best Predictive Interval Coverage Probability (PICP), while placing a dropout layer after each stacked block gives the best Root Mean Square Error (RMSE).
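The MC-dropout inference the abstract builds on can be sketched as follows. This is a minimal NumPy illustration, not the paper's actual regression ResNet: the two-layer network, its random toy weights, the dropout rate, and the number of forward passes are all made-up assumptions. The key idea it demonstrates is that dropout stays active at prediction time, repeated stochastic forward passes yield a predictive distribution, and PICP measures how often values fall inside the resulting interval.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy weights for a tiny two-layer regression network (illustrative only).
W1 = rng.normal(size=(1, 16))
W2 = rng.normal(size=(16, 1))

def forward(x, p=0.1):
    """One stochastic pass. Here dropout is placed after the hidden layer
    only, rather than after every weight layer."""
    h = np.maximum(x @ W1, 0.0)        # hidden layer with ReLU
    mask = rng.random(h.shape) > p     # Bernoulli dropout mask, kept on at test time
    h = h * mask / (1.0 - p)           # inverted-dropout scaling
    return h @ W2

x = np.linspace(-1.0, 1.0, 50).reshape(-1, 1)

T = 200                                             # number of MC samples
samples = np.stack([forward(x) for _ in range(T)])  # shape (T, 50, 1)

# Predictive mean and a 95% predictive interval from the MC samples.
mean = samples.mean(axis=0)
lo, hi = np.percentile(samples, [2.5, 97.5], axis=0)

def picp(y, lo, hi):
    """Predictive Interval Coverage Probability: fraction of targets
    falling inside [lo, hi]."""
    return np.mean((y >= lo) & (y <= hi))
```

In this setup, comparing `picp(...)` and the RMSE of `mean` against held-out targets for different dropout placements mirrors the two metrics the paper reports.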
