Open Access
Statistical Detection of Adversarial Examples in Blockchain-Based Federated Forest In-Vehicle Network Intrusion Detection Systems
Author(s) -
Ibrahim Aliyu,
Selinde Van Engelenburg,
Muhammed Bashir Mu'Azu,
Jinsul Kim,
Chang Gyoon Lim
Publication year - 2022
Publication title -
IEEE Access
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.587
H-Index - 127
ISSN - 2169-3536
DOI - 10.1109/ACCESS.2022.3212412
Subject(s) - aerospace , bioengineering , communication, networking and broadcast technologies , components, circuits, devices and systems , computing and processing , engineered materials, dielectrics and plasmas , engineering profession , fields, waves and electromagnetics , general topics for engineers , geoscience , nuclear engineering , photonics and electrooptics , power, energy and industry applications , robotics and control systems , signal processing and analysis , transportation
The Internet of Vehicles (IoV) facilitates seamless connectivity between connected vehicles (CVs), autonomous vehicles (AVs), and other IoV entities. Intrusion detection systems (IDSs) for IoV networks can rely on machine learning (ML) to protect the in-vehicle network from cyber-attacks. Blockchain-based Federated Forests (BFFs) can train ML models on data from IoV entities while protecting the confidentiality of the data and reducing the risk of data tampering. However, ML models remain vulnerable to evasion, poisoning, and exploratory attacks mounted through adversarial examples. The BFF-IDS offers a partial defence against poisoning but has no countermeasure against evasion attacks, the most common threat faced by ML models. Moreover, the impact of adversarial example transferability on CAN IDSs has remained largely untested. This paper investigates the impact of various possible adversarial example attacks on the BFF-IDS. We also investigated the effectiveness and resilience of a statistical adversarial detector in detecting the attacks, and the subsequent countermeasure of augmenting the model with the detected samples. Our results establish that the BFF-IDS is highly vulnerable to adversarial example attacks. The statistical adversarial detector and the subsequent augmentation of the BFF-IDS (BFF-IDS(AUG)) provide an effective defence against adversarial examples. Consequently, integrating the statistical adversarial detector with continuous augmentation of the BFF-IDS using detected adversarial samples provides a sustainable security framework against adversarial examples and other unknown attacks.
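The abstract does not specify which statistical test the detector uses, but statistical adversarial detectors in the literature are commonly built as two-sample tests that compare an incoming batch of inputs against a reference set of clean training data. The sketch below is a hypothetical illustration of that idea using a kernel Maximum Mean Discrepancy (MMD) statistic with a permutation test; the function names, kernel choice, and thresholds are assumptions for illustration, not the paper's actual method.

```python
import numpy as np

def rbf_kernel(x, y, gamma=1.0):
    # Pairwise RBF kernel matrix between rows of x and rows of y.
    d2 = (np.sum(x**2, axis=1)[:, None]
          + np.sum(y**2, axis=1)[None, :]
          - 2.0 * x @ y.T)
    return np.exp(-gamma * d2)

def mmd2(x, y, gamma=1.0):
    # Biased estimate of squared Maximum Mean Discrepancy between
    # the empirical distributions of x and y.
    return (rbf_kernel(x, x, gamma).mean()
            + rbf_kernel(y, y, gamma).mean()
            - 2.0 * rbf_kernel(x, y, gamma).mean())

def detect_adversarial_batch(clean, suspect, gamma=1.0,
                             n_perm=200, alpha=0.05, seed=0):
    # Permutation two-sample test: flag the suspect batch if its
    # distribution differs significantly from the clean reference set
    # (a possible sign of adversarial perturbation).
    rng = np.random.default_rng(seed)
    observed = mmd2(clean, suspect, gamma)
    pooled = np.vstack([clean, suspect])
    n = len(clean)
    exceed = 0
    for _ in range(n_perm):
        perm = pooled[rng.permutation(len(pooled))]
        if mmd2(perm[:n], perm[n:], gamma) >= observed:
            exceed += 1
    p_value = (exceed + 1) / (n_perm + 1)  # add-one smoothing
    return p_value < alpha, p_value
```

In a BFF-IDS-style pipeline, batches flagged by such a detector could then be folded back into training (the augmentation step the abstract calls BFF-IDS(AUG)), so the model is retrained on the detected adversarial samples.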
