Open Access
Impact of Parameter Tuning for Optimizing Deep Neural Network Models for Predicting Software Faults
Author(s) - Mansi Gupta, Kumar Rajnish, Vandana Bhattacharjee
Publication year - 2021
Publication title - Scientific Programming
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.269
H-Index - 36
eISSN - 1875-919X
pISSN - 1058-9244
DOI - 10.1155/2021/6662932
Subject(s) - hyperparameter, computer science, artificial neural network, artificial intelligence, machine learning, classifier (UML), deep learning, regularization, deep neural networks, software, data mining, programming language
Deep neural network models built with appropriate design decisions are crucial for obtaining the desired classifier performance, particularly when predicting the fault proneness of software modules. Correctly identifying fault-prone modules can reduce testing cost by directing testing effort towards them. To build an efficient deep neural network model, architectural parameters such as the number of hidden layers and the number of nodes in each layer, as well as training details such as the learning rate and regularization methods, must be investigated in detail. The objective of this paper is to show the importance of hyperparameter tuning in developing efficient deep neural network models for predicting the fault proneness of software modules and to compare the results with other machine learning algorithms. It is shown that the proposed model outperforms the other algorithms in most cases.
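To make the tuning dimensions named in the abstract concrete, the following is a minimal sketch of a grid search over hidden-layer count, nodes per layer, learning rate, and L2 regularization strength, written against the Keras API. It is not the authors' code: the grid values, the `build_model` helper, and the synthetic data are illustrative assumptions standing in for a real software-metrics dataset.

```python
# Hypothetical sketch of hyperparameter tuning for a fault-proneness
# classifier; all grid values and data below are illustrative assumptions.
import itertools

import numpy as np
from tensorflow import keras
from tensorflow.keras import layers, regularizers

def build_model(n_features, n_layers, n_nodes, lr, l2):
    # Binary classifier: software metrics in, fault-proneness probability out.
    model = keras.Sequential([keras.Input(shape=(n_features,))])
    for _ in range(n_layers):
        model.add(layers.Dense(n_nodes, activation="relu",
                               kernel_regularizer=regularizers.l2(l2)))
    model.add(layers.Dense(1, activation="sigmoid"))
    model.compile(optimizer=keras.optimizers.Adam(learning_rate=lr),
                  loss="binary_crossentropy", metrics=["accuracy"])
    return model

# Synthetic stand-in for a software-metrics dataset (the paper's data differ).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20)).astype("float32")
y = (X[:, 0] + X[:, 1] > 0).astype("float32")
X_train, y_train, X_val, y_val = X[:400], y[:400], X[400:], y[400:]

# Illustrative grid over the four hyperparameters named in the abstract.
grid = itertools.product([1, 2, 3],      # hidden layers
                         [16, 32, 64],   # nodes per layer
                         [1e-2, 1e-3],   # learning rate
                         [0.0, 1e-4])    # L2 regularization strength

best = None
for n_layers, n_nodes, lr, l2 in grid:
    model = build_model(X.shape[1], n_layers, n_nodes, lr, l2)
    hist = model.fit(X_train, y_train, validation_data=(X_val, y_val),
                     epochs=20, batch_size=32, verbose=0)
    val_acc = max(hist.history["val_accuracy"])
    if best is None or val_acc > best[0]:
        best = (val_acc, (n_layers, n_nodes, lr, l2))

print("best validation accuracy %.3f at (layers, nodes, lr, l2) = %s" % best)
```

Selecting the configuration by held-out validation accuracy, as above, is one common way to compare architectures; the paper's own evaluation protocol may differ.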
