Journal of Cognitive Human-Computer Interaction

Journal DOI

https://doi.org/10.54216/JCHCI


ISSN (Online): 2771-1463 | ISSN (Print): 2771-1471

An Explainable AI-Driven Zero-Day Attack Detection Framework for Securing Edge Devices in Smart Cities

Santhiyakumari N., Sabarinathan S., Veerakumar S., Chandraman M., Kiruthika G.

The rapid proliferation of edge computing in smart cities has enhanced real-time data processing capabilities, but it has also exposed critical vulnerabilities to sophisticated cyber threats such as zero-day attacks. Traditional signature-based intrusion detection systems often fail to identify these previously unknown threats due to their lack of adaptive intelligence and interpretability. This research proposes an Explainable Artificial Intelligence (XAI)-driven zero-day attack detection framework tailored for edge devices deployed in smart city environments. The proposed system combines deep anomaly detection using a hybrid Convolutional Neural Network–Long Short-Term Memory (CNN–LSTM) model with SHAP (SHapley Additive exPlanations)-based interpretability to detect and explain anomalous behaviors in real-time network traffic. The model is trained on diverse datasets mimicking heterogeneous edge devices in smart infrastructures, ensuring robustness and scalability. Experimental results demonstrate high detection accuracy, low false-positive rates, and strong resilience against unseen attack patterns. Moreover, the integration of XAI components provides actionable insights to administrators, thereby enhancing trust, transparency, and decision-making in cybersecurity operations. This framework marks a significant step toward proactive and explainable security solutions for safeguarding smart urban ecosystems.
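The abstract's core detection mechanism, a hybrid CNN–LSTM scoring windows of network traffic, can be sketched as a forward pass in plain NumPy. This is a minimal illustrative sketch, not the authors' trained model: the layer sizes, random weights, and the `anomaly_score` helper are all assumptions made here for clarity, and a real deployment would use a trained deep-learning framework model plus SHAP for the explanation step.

```python
import numpy as np

def conv1d(x, kernels, bias):
    # x: (T, C_in) window of traffic features; kernels: (K, C_in, C_out)
    K, C_in, C_out = kernels.shape
    T = x.shape[0] - K + 1
    out = np.empty((T, C_out))
    for t in range(T):
        window = x[t:t + K]                                   # (K, C_in)
        out[t] = np.tensordot(window, kernels,
                              axes=([0, 1], [0, 1])) + bias
    return np.maximum(out, 0.0)                               # ReLU

def lstm_last_hidden(x, Wx, Wh, b):
    # Single LSTM layer; gates ordered [input, forget, cell, output].
    H = Wh.shape[0]
    h, c = np.zeros(H), np.zeros(H)
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    for t in range(x.shape[0]):
        z = x[t] @ Wx + h @ Wh + b                            # (4H,)
        i, f, g, o = np.split(z, 4)
        c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
        h = sigmoid(o) * np.tanh(c)
    return h                                                  # summary of the window

def anomaly_score(window, p):
    feats = conv1d(window, p["k"], p["kb"])                   # local patterns
    h = lstm_last_hidden(feats, p["Wx"], p["Wh"], p["b"])     # temporal context
    logit = h @ p["w"] + p["w0"]
    return 1.0 / (1.0 + np.exp(-logit))                       # P(window is anomalous)

# Illustrative random weights; sizes are arbitrary assumptions.
rng = np.random.default_rng(0)
T, C_in, C_out, H, K = 20, 4, 8, 16, 3
params = {
    "k": rng.normal(scale=0.3, size=(K, C_in, C_out)), "kb": np.zeros(C_out),
    "Wx": rng.normal(scale=0.3, size=(C_out, 4 * H)),
    "Wh": rng.normal(scale=0.3, size=(H, 4 * H)), "b": np.zeros(4 * H),
    "w": rng.normal(scale=0.3, size=H), "w0": 0.0,
}
score = anomaly_score(rng.normal(size=(T, C_in)), params)
print(round(float(score), 3))  # a probability in (0, 1)
```

A score above a chosen threshold would flag the traffic window as anomalous; the SHAP step described in the abstract would then attribute that score back to individual input features for the administrator.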


DOI: https://doi.org/10.54216/JCHCI.100201

Vol. 10 Issue. 2 PP. 01-11, (2025)

Explainable Eye-Tracking-Based Cognitive Workload Classification for Interactive Visual Tasks: A Reproducible Human-Computer Interaction Study Using the Public COLET Dataset

Mahmoud A. Zaher, Nabil M. Eldakhly

Cognitive workload directly influences attention allocation, interaction efficiency, and error formation during human-computer interaction (HCI). Eye tracking provides a feasible, non-invasive source of evidence for estimating workload, since gaze behavior is strongly correlated with visual search, task processing, and decision effort. This paper explores explainable cognitive workload classification on the public COLET dataset: eye-tracking recordings of 47 subjects completing interactive visual-search tasks, with workload labels derived from NASA-TLX. Five supervised learning models are tested on binary and four-class problems, and the most successful setup is analyzed via SHAP-based feature attribution. In both tasks, boosting-based ensembles achieve the best predictive performance, with XGBoost scoring highest overall and on binary low-vs-high discrimination, in the best range of performance reported in the original COLET benchmark. The feature attribution analysis shows that the most significant variables are gaze entropy, fixation duration, pupil changes, and saccadic movements. The results support the application of explainable gaze-based models to adaptive interfaces that respond to rising mental load by simplifying content presentation, varying pacing, or directing attention to important information.
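The SHAP attribution step the abstract describes can be illustrated with an exact Shapley-value computation over the four gaze features it names. This is a hedged toy sketch: the linear `model`, its weights, and the sample values are invented here for illustration (the paper attributes a trained XGBoost classifier, typically via the SHAP library's tree explainer, which computes the same quantity efficiently).

```python
from itertools import combinations
import math
import numpy as np

FEATURES = ["gaze_entropy", "fixation_duration", "pupil_delta", "saccade_rate"]

def model(x):
    # Hypothetical linear workload score over standardized gaze features.
    w = np.array([0.9, -0.6, 0.7, 0.4])
    return float(x @ w)

def shapley_values(f, x, baseline):
    # Exact Shapley values: average marginal contribution of each feature
    # over all coalitions, with absent features set to the baseline.
    n = len(x)
    phi = np.zeros(n)
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for r in range(len(others) + 1):
            for S in combinations(others, r):
                weight = (math.factorial(len(S))
                          * math.factorial(n - len(S) - 1)
                          / math.factorial(n))
                with_i, without_i = baseline.copy(), baseline.copy()
                for j in S:
                    with_i[j] = x[j]
                    without_i[j] = x[j]
                with_i[i] = x[i]
                phi[i] += weight * (f(with_i) - f(without_i))
    return phi

x = np.array([1.2, -0.5, 0.8, 0.3])   # one standardized sample (invented)
baseline = np.zeros(4)                # dataset mean after standardization
phi = shapley_values(model, x, baseline)

# Efficiency property: attributions sum to f(x) - f(baseline).
assert abs(phi.sum() - (model(x) - model(baseline))) < 1e-9
for name, v in zip(FEATURES, phi):
    print(f"{name}: {v:+.3f}")
```

For a linear model the attribution of each feature reduces to weight times deviation from baseline; for tree ensembles like XGBoost, the SHAP library computes the same game-theoretic quantity without enumerating coalitions.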


DOI: https://doi.org/10.54216/JCHCI.100202

Vol. 10 Issue. 2 PP. 12-22, (2025)