
Explainable AI for Lightweight Network Traffic Classification Using Depthwise Separable Convolutions
Author(s) -
Mustafa Ghaleb,
Mosab Hamdan,
Abdulaziz Y. Barnawi,
Muhammad Gambo,
Abubakar Danasabe,
Saheed Bello,
Aliyu Habib
Publication year - 2025
Publication title -
IEEE Open Journal of the Computer Society
Language(s) - English
Resource type - Magazines
eISSN - 2644-1268
DOI - 10.1109/ojcs.2025.3576495
Subject(s) - Computing and Processing
With the rapid growth of internet usage and the increasing number of connected devices, there is a critical need for advanced Network Traffic Classification (NTC) solutions to ensure optimal performance and robust security. Traditional NTC methods, such as port-based analysis and deep packet inspection, struggle to cope with modern network complexities, particularly dynamic port allocation and encrypted traffic. Recently, Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) have been employed to build classification models for this task. However, existing NTC models often require significant computational resources due to their large number of parameters, leading to slower inference times and higher memory consumption. To overcome these limitations, we introduce a lightweight NTC model based on Depthwise Separable Convolutions and compare its performance against CNN, RNN, and state-of-the-art models. In terms of computational efficiency, our proposed lightweight CNN exhibits a markedly reduced computational footprint: it uses only 30,611 parameters and 0.627 MFLOPs, achieving inference times of 1.49 seconds on the CPU and 0.43 seconds on the GPU. This corresponds to roughly 4× fewer FLOPs than the RNN baseline and 16× fewer than the CNN baseline, while also offering an ultracompact design compared to state-of-the-art models. Such efficiency makes it exceptionally well-suited for real-time applications in resource-constrained environments. In addition, we have integrated eXplainable Artificial Intelligence techniques, specifically LIME and SHAP, to provide valuable insights into model predictions. LIME and SHAP help interpret the contribution of each feature to the model's decisions, enhancing transparency and trust in its predictions without compromising its lightweight nature. To support reproducibility and foster collaborative development, all associated code and resources have been made publicly available.
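To illustrate the core building block the abstract describes, the sketch below shows a minimal depthwise separable convolution classifier for 1-D packet-byte sequences, written in PyTorch. This is not the authors' released code: layer widths, kernel sizes, the 784-byte input length, and the 10-class output are illustrative assumptions, not values reported in the paper.

```python
# A minimal sketch (assumed architecture, not the paper's exact model) of a
# lightweight traffic classifier built from depthwise separable convolutions.
import torch
import torch.nn as nn

class DepthwiseSeparableConv1d(nn.Module):
    """Depthwise conv (one filter per input channel) followed by a 1x1 pointwise conv."""
    def __init__(self, in_ch, out_ch, kernel_size=3):
        super().__init__()
        self.depthwise = nn.Conv1d(in_ch, in_ch, kernel_size,
                                   padding=kernel_size // 2, groups=in_ch)
        self.pointwise = nn.Conv1d(in_ch, out_ch, kernel_size=1)
        self.act = nn.ReLU()

    def forward(self, x):
        return self.act(self.pointwise(self.depthwise(x)))

class LightweightNTC(nn.Module):
    """Toy lightweight NTC model: two depthwise separable blocks plus a linear head."""
    def __init__(self, num_classes=10):  # num_classes is a placeholder assumption
        super().__init__()
        self.features = nn.Sequential(
            DepthwiseSeparableConv1d(1, 16),
            nn.MaxPool1d(2),
            DepthwiseSeparableConv1d(16, 32),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):                # x: (batch, 1, bytes_per_flow)
        z = self.features(x).squeeze(-1)
        return self.classifier(z)

# Example: classify a batch of 4 flows, each padded/truncated to 784 bytes.
model = LightweightNTC()
logits = model(torch.randn(4, 1, 784))
print(logits.shape)                      # torch.Size([4, 10])
```

Because the depthwise stage applies one filter per channel and the pointwise stage mixes channels with 1×1 convolutions, the parameter count stays far below that of a standard convolution with the same input/output shape, which is what keeps the model compact. A trained model of this form can then be passed to post-hoc explainers such as LIME or SHAP to attribute predictions to individual input features, as the paper does.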