
A Comprehensive Survey of Explainable Artificial Intelligence Techniques for Malicious Insider Threat Detection
Author(s) -
Khuloud Saeed Alketbi,
Abid Mehmood
Publication year - 2025
Publication title -
IEEE Access
Language(s) - English
Resource type - Magazines
SCImago Journal Rank - 0.587
H-Index - 127
eISSN - 2169-3536
DOI - 10.1109/access.2025.3587114
Subject(s) - aerospace; bioengineering; communication, networking and broadcast technologies; components, circuits, devices and systems; computing and processing; engineered materials, dielectrics and plasmas; engineering profession; fields, waves and electromagnetics; general topics for engineers; geoscience; nuclear engineering; photonics and electrooptics; power, energy and industry applications; robotics and control systems; signal processing and analysis; transportation
Malicious insider threats remain a persistent and formidable challenge for organizations, primarily due to their covert nature and the severe impact they can have on critical systems and sensitive data. Traditional detection mechanisms often struggle to uncover such threats, underscoring the need for more intelligent, interpretable, and trustworthy solutions. Although the research community has shown increasing interest in Insider Threat Detection (ITD), existing surveys rarely emphasize the integration of Explainable Artificial Intelligence (XAI) with machine learning (ML) and deep learning (DL) techniques. This oversight has limited progress in understanding how interpretability can improve both detection effectiveness and the trust of security analysts. To address this gap, this survey presents a comprehensive review of the application of XAI in ITD. It explores how ML and DL models, when combined with XAI techniques, can uncover anomalous behaviors and insider actions while enhancing the transparency of model decisions. Tools such as SHAP and LIME are examined for their role in revealing feature contributions and improving analyst insight. The paper also highlights critical data sources—ranging from behavioral logs and network activity to psychometric indicators—that support the development of interpretable detection models. We categorize existing literature based on XAI techniques, data modalities, and threat models, and propose a conceptual framework for aligning XAI methods with specific ITD challenges. Our findings reveal that while XAI enhances interpretability and trust in AI-driven threat detection, several challenges persist. These include class imbalance in datasets, integration of heterogeneous data streams, and the absence of standardized metrics for evaluating explainability in cybersecurity contexts. Finally, the survey identifies key directions for future research, including privacy-preserving AI, human-in-the-loop explainability, and the development of benchmarking frameworks tailored to ITD applications. By offering a structured and up-to-date overview of XAI-enhanced ITD approaches, this work supports the advancement of more transparent, accountable, and operationally effective insider threat detection systems.
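As a concrete illustration of the role the abstract assigns to tools such as SHAP and LIME, the sketch below applies LIME to a classifier trained on synthetic, imbalanced behavioral-log features. The feature names (logon_count_offhours, usb_events, http_upload_mb, files_copied), the synthetic data, and the model choice are illustrative assumptions, not anything prescribed by the survey; the sketch only shows how a local explanation exposes per-feature contributions for a single flagged session.

# Minimal sketch: explaining an insider-threat classifier's prediction with LIME.
# Feature names and the synthetic, imbalanced dataset are hypothetical placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(42)
feature_names = ["logon_count_offhours", "usb_events", "http_upload_mb", "files_copied"]

# Synthetic behavioral-log features with roughly 2% "malicious" sessions (class imbalance).
n = 5000
y = (rng.random(n) < 0.02).astype(int)
X = rng.normal(size=(n, len(feature_names)))
X[y == 1] += rng.normal(loc=1.5, scale=0.5, size=(int(y.sum()), len(feature_names)))  # shift anomalous rows

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

# class_weight="balanced" is one simple mitigation for the dataset imbalance noted in the survey.
clf = RandomForestClassifier(n_estimators=200, class_weight="balanced", random_state=0)
clf.fit(X_train, y_train)

# LIME fits a local surrogate model around one prediction and reports per-feature weights,
# giving the analyst a readable rationale for why a particular session was flagged.
explainer = LimeTabularExplainer(
    X_train,
    feature_names=feature_names,
    class_names=["benign", "malicious"],
    discretize_continuous=True,
)
suspicious_idx = int(np.argmax(clf.predict_proba(X_test)[:, 1]))  # most suspicious test session
exp = explainer.explain_instance(X_test[suspicious_idx], clf.predict_proba, num_features=4)
for feature, weight in exp.as_list():
    print(f"{feature:35s} {weight:+.3f}")

A SHAP-based variant would serve the same purpose, attributing the flagged score to individual features both locally and in aggregate; either way, the per-feature weights are the kind of analyst-facing evidence the survey describes when discussing transparency and trust.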