
Forward feature selection for toxic speech classification using support vector machine and random forest
Author(s) -
Agustinus Bimo Gumelar,
Astri Yogatama,
Derry Pramono Adi,
Frismanda Frismanda,
Indar Sugiarto
Publication year - 2022
Publication title -
IAES International Journal of Artificial Intelligence
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.341
H-Index - 7
eISSN - 2252-8938
pISSN - 2089-4872
DOI - 10.11591/ijai.v11.i2.pp717-726
Subject(s) - support vector machine, random forest, computer science, feature selection, artificial intelligence, python (programming language), classifier (uml), pattern recognition (psychology), feature extraction, machine learning, speech recognition, data mining, operating system
This study describes methods for eliminating irrelevant features from speech data to improve toxic speech classification accuracy and reduce the complexity of the learning process. A wrapper method is applied, using forward selection with support vector machine (SVM) and random forest (RF) classifiers to evaluate candidate feature subsets. Eight main speech features were extracted, each with nine statistical sub-features derived from them, yielding 72 features in total. The classifiers were implemented in Python and evaluated on 2,000 toxic speech samples collected from YouTube, the world's largest video-sharing platform. The experiment shows that classification performance with both SVM and RF improves markedly after feature selection: the forward feature selection method reduced the original 72 features to 10, with classification accuracy of 99.5% for RF and 99.2% for SVM.
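
As a minimal sketch of the wrapper-style forward selection described in the abstract, the Python snippet below uses scikit-learn's SequentialFeatureSelector to greedily grow a feature subset scored by each classifier. The synthetic 2,000 x 72 feature matrix, the hyperparameters, and the 10-feature target are placeholders assumed for illustration, not the authors' exact pipeline or data.

# Forward (wrapper) feature selection with SVM and RF, assuming a
# pre-extracted feature matrix X (n_samples x 72 speech features) and
# binary toxic/non-toxic labels y. All values below are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder data standing in for the 72 statistical speech features.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 72))
y = rng.integers(0, 2, size=2000)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

for name, clf in [
    ("RF", RandomForestClassifier(n_estimators=200, random_state=0)),
    ("SVM", make_pipeline(StandardScaler(), SVC(kernel="rbf"))),
]:
    # Greedily add one feature at a time until 10 are selected, scoring
    # each candidate subset with cross-validated accuracy of the wrapped model.
    selector = SequentialFeatureSelector(
        clf, n_features_to_select=10, direction="forward", cv=5, n_jobs=-1
    )
    selector.fit(X_train, y_train)
    selected = selector.get_support(indices=True)

    # Refit the classifier on the selected subset and report held-out accuracy.
    clf.fit(X_train[:, selected], y_train)
    acc = clf.score(X_test[:, selected], y_test)
    print(f"{name}: selected feature indices {selected}, test accuracy {acc:.3f}")

On real speech features, the wrapped classifier (rather than a generic filter score) decides which of the 72 features enter the subset, which is what distinguishes the wrapper approach used in the paper from filter-based selection.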