Privacy‐preserving multisource transfer learning in intrusion detection system
Author(s) - Xu Mengfan, Li Xinghua, Wang Yunwei, Luo Bin, Guo Jingjing
Publication year - 2021
Publication title - Transactions on Emerging Telecommunications Technologies
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.366
H-Index - 47
ISSN - 2161-3915
DOI - 10.1002/ett.3957
Subject(s) - computer science , homomorphic encryption , transfer of learning , ciphertext , plaintext , upload , overhead (engineering) , encryption , paillier cryptosystem , scheme (mathematics) , cloud computing , artificial intelligence , intrusion detection system , feature (linguistics) , data mining , machine learning , computer network , public key cryptography , hybrid cryptosystem , mathematical analysis , linguistics , philosophy , mathematics , operating system
Abstract The increasing scale of networks and the demand for data privacy preservation pose several challenges for existing intrusion detection schemes, namely three issues: large computational overhead, long training periods, and differing feature distributions that lead to low model performance. The emergence of transfer learning has addressed these problems. However, existing transfer learning-based schemes can operate only in plaintext; when the different domains and the cloud are untrusted entities, privacy during data processing cannot be preserved. Therefore, this paper designs a privacy-preserving multisource transfer learning intrusion detection system (IDS). First, we use Paillier homomorphic encryption to encrypt the models trained in the different source domains before they are uploaded to the cloud. Then, building on this privacy-preserving scheme, we propose the first multisource transfer learning IDS based on encrypted XGBoost (E-XGBoost). The experimental results show that the proposed scheme can successfully transfer the encrypted models from multiple source domains to the target domain, and the accuracy rate reaches 93.01% in ciphertext, with no significant decrease in detection performance compared with schemes operating in plaintext. The training time of the model is significantly reduced from the traditional hour level to the minute level.
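The abstract's core idea is that an untrusted cloud can aggregate source-domain models while they remain encrypted, relying on the additive homomorphism of the Paillier cryptosystem. The sketch below is only an illustration of that property using the open-source `phe` (python-paillier) library; the weight values, the averaging step, and the two-source setup are hypothetical stand-ins and not the authors' E-XGBoost implementation.

```python
# Minimal sketch of Paillier additive homomorphism (not the paper's E-XGBoost code).
from phe import paillier

# Key pair held by the target domain; only the public key is shared with the cloud.
public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

# Hypothetical model weights from two source domains.
source_a_weights = [0.42, -0.17, 0.08]
source_b_weights = [0.35, -0.05, 0.11]

# Each source domain encrypts its weights before uploading them.
enc_a = [public_key.encrypt(w) for w in source_a_weights]
enc_b = [public_key.encrypt(w) for w in source_b_weights]

# The untrusted cloud aggregates ciphertexts directly:
# Paillier supports ciphertext addition and multiplication by a plaintext scalar.
enc_avg = [(ca + cb) * 0.5 for ca, cb in zip(enc_a, enc_b)]

# Only the private-key holder (the target domain) recovers the aggregated weights.
avg_weights = [private_key.decrypt(c) for c in enc_avg]
print(avg_weights)  # approximately [0.385, -0.11, 0.095]
```

In this setup the cloud never sees plaintext model parameters, which mirrors the threat model described in the abstract, where both the source domains and the cloud are treated as untrusted.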
