One‐class classification using a support vector machine with a quasi‐linear kernel
Author(s) -
Liang Peifeng,
Li Weite,
Tian Hao,
Hu Jinglu
Publication year - 2019
Publication title -
IEEJ Transactions on Electrical and Electronic Engineering
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.254
H-Index - 30
eISSN - 1931-4981
pISSN - 1931-4973
DOI - 10.1002/tee.22826
Subject(s) - support vector machine , artificial intelligence , computer science , autoencoder , pattern recognition (psychology) , kernel method , kernel (algebra) , piecewise linear function , decision boundary , feature vector , classifier (uml) , linear classifier , machine learning , artificial neural network , mathematics , geometry , combinatorics
This article proposes a novel method for one‐class classification based on a divide‐and‐conquer strategy to improve the one‐class support vector machine (SVM). The idea is to build a piecewise linear separation boundary in the feature space to separate the data points from the origin, which is expected to yield a more compact region in the input space. For this purpose, the input space of the dataset is first divided into a group of partitions using the partitioning mechanism of a top‐s% winner‐take‐all autoencoder. A gated linear network is designed to implement a group of linear classifiers, one for each partition, in which the gate signals are generated by the autoencoder. By applying a one‐class SVM (OCSVM) formulation to optimize the parameter set of the gated linear network, the one‐class classifier is implemented in exactly the same way as a standard OCSVM with a quasi‐linear kernel, composed from a base kernel and the gate signals. The proposed one‐class classification method is applied to different real‐world datasets, and simulation results show that it outperforms a traditional OCSVM. © 2018 Institute of Electrical Engineers of Japan. Published by John Wiley & Sons, Inc.
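A minimal sketch of the general idea, not the article's exact implementation: the gate signals below are approximated with softmax responsibilities over k‐means centres (standing in for the paper's top‐s% winner‐take‐all autoencoder), and the composite kernel K(x, z) = (x·z + 1) · Σ_k r_k(x) r_k(z) is the usual quasi‐linear form, assumed here rather than taken from the article. The resulting Gram matrix is fed to a standard OCSVM with a precomputed kernel.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import OneClassSVM


def gate_signals(X, centres, tau=1.0):
    """Soft partition memberships r_k(x) from distances to partition centres."""
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
    logits = -d2 / tau
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    R = np.exp(logits)
    return R / R.sum(axis=1, keepdims=True)


def quasi_linear_kernel(X, Z, centres, tau=1.0):
    """Assumed composite kernel: (linear + 1) kernel gated by partition memberships."""
    Rx = gate_signals(X, centres, tau)
    Rz = gate_signals(Z, centres, tau)
    return (X @ Z.T + 1.0) * (Rx @ Rz.T)


# One-class setting: training data come from the target class only.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 2))

# Partition the input space (k-means here, as a stand-in for the autoencoder partitioning).
centres = KMeans(n_clusters=5, n_init=10, random_state=0).fit(X_train).cluster_centers_

# Standard OCSVM formulation on the precomputed quasi-linear Gram matrix.
K_train = quasi_linear_kernel(X_train, X_train, centres)
ocsvm = OneClassSVM(kernel="precomputed", nu=0.1).fit(K_train)

# Score new points: decision_function > 0 means "inside" the learned region.
X_test = rng.normal(size=(10, 2))
K_test = quasi_linear_kernel(X_test, X_train, centres)
print(ocsvm.decision_function(K_test))
```

Because the gate features r_k(·) enter only through inner products, the product of the linear kernel and the gate kernel remains positive semi‐definite, so the standard OCSVM optimization applies unchanged; each partition effectively receives its own linear classifier, producing the piecewise linear boundary described above.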
