A Feature Selection Technique based on Distributional Differences
Author(s) -
SungDong Kim
Publication year - 2006
Publication title -
Journal of Information Processing Systems
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.288
H-Index - 23
eISSN - 2092-805X
pISSN - 1976-913X
DOI - 10.3745/jips.2006.2.1.023
Subject(s) - computer science , feature selection , artificial intelligence , artificial neural network , training set , machine learning , classifier , pattern recognition , data mining
This paper presents a feature selection technique based on distributional differences for efficient machine learning. The initial training data consist of instances with many features and a target value. We classify the instances into positive and negative data according to the target value, divide the range of each feature's values into 10 intervals, and calculate the distribution over those intervals separately for the positive and the negative data. We then select the features, and the intervals of those features, whose distributional differences exceed a certain threshold. Using the selected features and intervals, we obtain reduced training data. In the experiments, we show that the reduced training data cut the training time of the neural network by about 40%, and that the trained functions also yield more profit in simulated stock trading.
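The selection step described in the abstract can be sketched in a few lines of NumPy. This is an illustrative reconstruction, not the paper's implementation: the function name, the equal-width binning, and the threshold value 0.1 are assumptions; the paper specifies only 10 intervals and a per-interval distributional-difference threshold.

```python
import numpy as np

def select_features_by_distribution(X, y, n_bins=10, threshold=0.1):
    """Hypothetical sketch of distribution-difference feature selection.

    For each feature, split its value range into `n_bins` equal-width
    intervals, compute the fraction of positive and of negative samples
    falling in each interval, and keep the feature (with its intervals)
    wherever the absolute difference of those fractions exceeds
    `threshold`.
    """
    X = np.asarray(X, dtype=float)
    y = np.asarray(y)
    pos, neg = X[y == 1], X[y == 0]
    selected = {}  # feature index -> list of selected interval indices
    for j in range(X.shape[1]):
        edges = np.linspace(X[:, j].min(), X[:, j].max(), n_bins + 1)
        # Normalized histograms: fraction of each class per interval.
        p_pos, _ = np.histogram(pos[:, j], bins=edges)
        p_neg, _ = np.histogram(neg[:, j], bins=edges)
        p_pos = p_pos / max(len(pos), 1)
        p_neg = p_neg / max(len(neg), 1)
        diff = np.abs(p_pos - p_neg)
        intervals = np.nonzero(diff > threshold)[0]
        if intervals.size:
            selected[j] = intervals.tolist()
    return selected
```

A feature whose interval distribution is nearly identical in both classes is dropped entirely, which is what shrinks the training data and, per the abstract, the neural-network training time.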