Our paper "Evaluating feature relevance XAI in network intrusion detection" has been accepted at xAI 2023
07/17/2023
We rigorously evaluate commonly used approaches that explain the decisions of network attack detection systems.
Our paper evaluates commonly used explainable AI approaches applied to network intrusion detectors. We are happy to be part of "The 1st World Conference on eXplainable Artificial Intelligence" (xAI 2023).
Abstract
As machine learning models become increasingly complex, there is a growing need for explainability to understand and trust their decision-making processes. In the domain of network intrusion detection, post-hoc feature relevance explanations have been widely used to provide insight into the factors driving model decisions. However, recent research has highlighted challenges with these methods when applied to anomaly detection, and these challenges can vary in importance and impact depending on the application domain. In this paper, we investigate the challenges of post-hoc feature relevance explanations for network intrusion detection, a critical area for ensuring the security and integrity of computer networks. To gain a deeper understanding of these challenges in this application domain, we quantitatively and qualitatively investigate the popular feature relevance approach SHAP when explaining different network intrusion detection approaches. We conduct experiments that jointly evaluate detection quality and explainability, and explore the impact of replacement data, a commonly overlooked hyperparameter of post-hoc feature relevance approaches. We find that post-hoc XAI can provide high-quality explanations, but requires a careful choice of replacement data, as default settings and common choices do not transfer across different detection models. Our study showcases the viability of post-hoc XAI for network intrusion detection systems, but highlights the need for rigorous evaluation of the produced explanations.
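To make the role of replacement data concrete, below is a minimal sketch of how a background (replacement) dataset is passed to SHAP's KernelExplainer when explaining an anomaly detector. The detector, the toy flow features, and the two background choices are illustrative assumptions, not the paper's exact setup.

```python
# Minimal sketch: how the choice of replacement (background) data changes
# SHAP attributions for an anomaly detector. All data and models here are
# illustrative assumptions, not the evaluated systems from the paper.
import numpy as np
import shap
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Toy stand-in for benign network flow features (e.g., duration, bytes, packets).
X_benign = rng.normal(loc=0.0, scale=1.0, size=(500, 3))
X_anomalous = rng.normal(loc=4.0, scale=1.0, size=(5, 3))  # suspicious flows

detector = IsolationForest(random_state=0).fit(X_benign)

# score_samples: higher means more normal; SHAP attributes this score to features.
f = detector.score_samples

# Two common replacement-data choices: a sample of benign traffic vs. an
# all-zeros baseline, which often yield different explanations.
background_benign = shap.sample(X_benign, 100, random_state=0)
background_zeros = np.zeros((1, 3))

for name, background in [("benign sample", background_benign),
                         ("zero baseline", background_zeros)]:
    explainer = shap.KernelExplainer(f, background)
    shap_values = explainer.shap_values(X_anomalous, nsamples=200)
    print(name, "-> mean |attribution| per feature:",
          np.abs(shap_values).mean(axis=0))
```

Running a sketch like this typically produces noticeably different per-feature attributions for the two backgrounds, which is why the choice of replacement data should be evaluated per detection model rather than left at a library default.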