
An Optimized Feature Regularization in Boosted Decision Tree
Publication year - 2019
Publication title - International Journal of Innovative Technology and Exploring Engineering
Language(s) - English
Resource type - Journals
ISSN - 2278-3075
DOI - 10.35940/ijitee.f1202.0486s419
Subject(s) - regularization , decision tree , information gain ratio , computer science , information gain , incremental decision tree , entropy , artificial intelligence , decision tree learning , machine learning , ID3 algorithm , data mining , word error rate , feature selection , pattern recognition , mathematics
We put forward a tree regularization that enables many tree models to perform feature selection effectively. The core idea of the regularization scheme is to penalize the selection of a new feature for a split when its gain is similar to that of features used in previous splits. This paper used a standard data set as the discrete test data and computed the entropy and information gain of each attribute to carry out the classification. Boosted decision trees are among the most popular learning methods in use today. This paper also achieved an optimized decision tree structure, streamlined to improve the efficiency of the algorithm while guaranteeing a low error rate at the same level as other classification algorithms.
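The mechanics described above can be illustrated with a short sketch: computing entropy and information gain for discrete attributes, then scoring split candidates with a penalty on features not already used in earlier splits. This is a minimal illustration, not the paper's implementation; the `penalty` parameter and the `regularized_best_split` helper are assumptions introduced here for clarity.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (in bits) of a sequence of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(rows, labels, feature):
    """Gain from splitting on a discrete feature (an index into each row)."""
    base = entropy(labels)
    n = len(labels)
    groups = {}
    for row, y in zip(rows, labels):
        groups.setdefault(row[feature], []).append(y)
    # Base entropy minus the weighted entropy of each child partition.
    return base - sum(len(g) / n * entropy(g) for g in groups.values())

def regularized_best_split(rows, labels, features, used, penalty=0.1):
    """Pick a split feature, penalizing features not yet used in the tree.

    Hypothetical sketch of the regularization idea: a feature outside
    `used` must beat already-used features by at least `penalty` in gain,
    which steers the tree toward a compact feature subset.
    """
    def score(f):
        g = information_gain(rows, labels, f)
        return g if f in used else g - penalty
    return max(features, key=score)
```

For example, on four samples where feature 0 perfectly predicts the label and feature 1 carries no information, `regularized_best_split` selects feature 0 even with an empty `used` set, since its penalized gain still dominates.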