Preprint / Version 1

Explainable AI Framework for Anomaly Detection in Encrypted Network Traffic

DOI:

https://doi.org/10.31224/5825

Keywords:

Explainable AI, Encrypted Network Traffic, Anomaly Detection, Intrusion Detection Systems, SHAP, LIME, Network Security, Behavioral Analysis.

Abstract

The rapid expansion of encrypted network traffic has improved privacy but also complicated the task of identifying malicious behaviors hidden within protected communication streams. Traditional intrusion detection systems often struggle to interpret encrypted payloads, leading to reduced visibility and higher false-positive rates. This study proposes an Explainable Artificial Intelligence (XAI) framework designed to detect anomalies in encrypted network environments without compromising user privacy. The framework integrates flow-level behavioral features with a hybrid learning pipeline that combines deep representation models and interpretable machine-learning classifiers. To improve transparency, the system incorporates model-agnostic explanation tools such as SHAP and LIME, enabling security analysts to trace how specific traffic attributes contribute to detected anomalies. Experimental evaluations on contemporary encrypted traffic datasets demonstrate that the approach achieves high detection accuracy while offering interpretable outputs that support root-cause analysis. The findings highlight the potential of XAI-driven solutions to enhance trust, accountability, and operational effectiveness in modern security operations centers handling increasingly opaque network environments.
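The abstract describes flagging anomalies from flow-level behavioral features while exposing which attributes drove each detection. As a minimal illustrative sketch (not the paper's implementation), the snippet below scores a flow against a baseline of benign flows using per-feature z-scores and returns the attribution alongside the verdict, in the spirit of the model-agnostic explanations the framework produces; the feature names, baseline values, and threshold are hypothetical.

```python
# Hypothetical sketch: detect an anomalous encrypted-traffic flow from
# flow-level behavioral features and report per-feature contributions
# so an analyst can trace the root cause. Feature names and numbers
# are illustrative, not from the paper.
from statistics import mean, stdev

FEATURES = ["mean_pkt_len", "flow_duration_s", "pkts_per_s"]

def zscore_contributions(flows, flow):
    """Per-feature z-scores of `flow` against the baseline `flows`."""
    contrib = {}
    for i, name in enumerate(FEATURES):
        col = [f[i] for f in flows]
        mu, sd = mean(col), stdev(col)
        contrib[name] = (flow[i] - mu) / sd if sd else 0.0
    return contrib

def detect(flows, flow, threshold=3.0):
    """Flag the flow if any feature deviates more than `threshold`
    standard deviations; return the attribution dict with the verdict."""
    contrib = zscore_contributions(flows, flow)
    score = max(abs(v) for v in contrib.values())
    return score > threshold, contrib

# Baseline of benign flows: (mean packet length, duration, packets/s).
baseline = [
    (520.0, 1.2, 40.0),
    (480.0, 1.0, 38.0),
    (510.0, 1.1, 42.0),
    (495.0, 0.9, 39.0),
    (505.0, 1.3, 41.0),
]
suspect = (500.0, 1.1, 400.0)  # packet rate far above the baseline

is_anom, contrib = detect(baseline, suspect)
top = max(contrib, key=lambda k: abs(contrib[k]))
print(is_anom, top)  # the packet-rate feature dominates the attribution
```

In the paper's framework this role is played by SHAP/LIME attributions over a learned classifier rather than raw z-scores; the sketch only conveys the interface an analyst would see: a verdict plus per-feature contributions.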

Posted

2025-11-18