L2,1-Norm Regularized Matrix Completion for Attack Detection in Collaborative Filtering Recommender Systems
Author(s) - Si Mingdan, Li Qingshan
Publication year - 2019
Publication title - Chinese Journal of Electronics
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.267
H-Index - 25
eISSN - 2075-5597
pISSN - 1022-4653
DOI - 10.1049/cje.2019.06.010
Subject(s) - collaborative filtering, recommender system, matrix completion, matrix norm, computer science, information retrieval
Collaborative filtering recommender systems (CFRSs) are, because of their openness, highly vulnerable to profile injection attacks, in which malicious users insert fake profiles into the rating database to bias the system's output; detecting such attacks remains a challenging problem in CFRSs. To provide more accurate recommendations, many schemes have been proposed to detect these shilling attacks. However, almost all of them target one or a few specific attack types, and few can handle the hybrid attack types that usually occur in practice. With this problem in mind, we propose a novel L2,1-norm regularized matrix completion incorporating prior information (LRMCPI) model that detects shilling attacks by combining matrix completion with the L2,1-norm. LRMCPI formalizes attack detection as a missing value estimation problem, which is appropriate because the user-item rating matrix is approximately low-rank and attack profiles can be treated as structural noise. The model not only recovers the rating matrix with more accurate rating values but also locates the positions where attack profiles were injected. We evaluate the model on three well-known data sets of different densities, and the experimental results show that it outperforms baseline algorithms on both single and hybrid attack types.
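The abstract frames attack detection as low-rank matrix completion with attack profiles treated as structural noise penalized by the L2,1-norm. The following is a minimal, hypothetical sketch of one way such an L2,1-regularized robust matrix completion could be set up; it is not the authors' LRMCPI implementation. The objective, the alternating proximal-gradient solver, the parameter names (tau, lam, step), and the assumption that rows index users (so injected profiles show up as anomalous rows) are all illustrative assumptions.

```python
# Sketch: recover a low-rank rating matrix L while isolating row-structured
# noise S, under the (assumed) objective
#   min_{L,S}  0.5*||P_Omega(X - L - S)||_F^2 + tau*||L||_* + lam*||S||_{2,1}
# where P_Omega keeps only observed entries and ||S||_{2,1} sums the L2
# norms of the rows of S. Rows of S with large norm flag candidate attackers.
import numpy as np

def svt(M, tau):
    """Singular value thresholding: proximal operator of tau*||.||_*."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s = np.maximum(s - tau, 0.0)
    return (U * s) @ Vt

def row_group_shrink(M, lam):
    """Row-wise group soft-thresholding: proximal operator of lam*||.||_{2,1}."""
    norms = np.linalg.norm(M, axis=1, keepdims=True)
    scale = np.maximum(1.0 - lam / np.maximum(norms, 1e-12), 0.0)
    return M * scale

def lrmc_l21(X, mask, tau=5.0, lam=1.0, step=1.0, n_iter=200):
    """Alternating proximal-gradient steps on the sketched objective.
    X    : ratings matrix (users x items), arbitrary values where unobserved
    mask : boolean matrix, True where a rating is observed
    Returns (L, S): low-rank completion and row-structured noise estimate."""
    L = np.zeros(X.shape)
    S = np.zeros(X.shape)
    for _ in range(n_iter):
        # Gradient of the data-fit term, restricted to observed entries
        R = mask * (L + S - X)
        L = svt(L - step * R, step * tau)
        R = mask * (L + S - X)
        S = row_group_shrink(S - step * R, step * lam)
    return L, S

# Usage sketch: L gives the cleaned, completed rating matrix, while users
# whose rows of S have large L2 norm are candidate injected profiles.
# L, S = lrmc_l21(ratings, ratings > 0)
# suspects = np.argsort(-np.linalg.norm(S, axis=1))[:20]
```

In this sketch the nuclear-norm term captures the approximately low-rank structure of genuine ratings, while the L2,1 penalty encourages whole rows of S to be zero, so only a few rows (the injected profiles) absorb the structured deviation.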
