Journal of Cognitive Human-Computer Interaction

ISSN (Online): 2771-1463, ISSN (Print): 2771-1471

An Explainable AI-Driven Zero-Day Attack Detection Framework for Securing Edge Devices in Smart Cities

Santhiyakumari N., Sabarinathan S., Veerakumar S., Chandraman M., Kiruthika G.

The rapid proliferation of edge computing in smart cities has enhanced real-time data processing capabilities, but it has also exposed critical vulnerabilities to sophisticated cyber threats such as zero-day attacks. Traditional signature-based intrusion detection systems often fail to identify these previously unknown threats due to their lack of adaptive intelligence and interpretability. This research proposes an Explainable Artificial Intelligence (XAI)-driven zero-day attack detection framework tailored for edge devices deployed in smart city environments. The proposed system combines deep anomaly detection using a hybrid Convolutional Neural Network–Long Short-Term Memory (CNN–LSTM) model with SHAP (SHapley Additive exPlanations)-based interpretability to detect and explain anomalous behaviors in real-time network traffic. The model is trained on diverse datasets mimicking heterogeneous edge devices in smart infrastructures, ensuring robustness and scalability. Experimental results demonstrate high detection accuracy, low false-positive rates, and strong resilience against unseen attack patterns. Moreover, the integration of XAI components provides actionable insights to administrators, thereby enhancing trust, transparency, and decision-making in cybersecurity operations. This framework marks a significant step toward proactive and explainable security solutions for safeguarding smart urban ecosystems.
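At its core, the detection stage described above comes down to scoring traffic windows and flagging those that exceed a threshold calibrated on benign data. A minimal, model-agnostic sketch of that thresholding step, in which the CNN–LSTM is abstracted as a scoring function and all names and numbers are illustrative rather than taken from the paper:

```python
def calibrate_threshold(benign_scores, quantile=0.99):
    """Pick a cutoff so only a small fraction of benign windows are flagged."""
    ordered = sorted(benign_scores)
    idx = min(int(quantile * len(ordered)), len(ordered) - 1)
    return ordered[idx]

def detect(scores, threshold):
    """Flag traffic windows whose anomaly score exceeds the threshold."""
    return [s > threshold for s in scores]

# Stand-in anomaly scores; in the paper these would come from the CNN-LSTM:
benign = [0.10, 0.12, 0.09, 0.11, 0.10, 0.13, 0.08, 0.12, 0.11, 0.10]
thr = calibrate_threshold(benign, quantile=0.9)
flags = detect([0.11, 0.95, 0.10], thr)  # middle window scores far above benign
```

Calibrating on benign traffic only is what lets such a detector flag previously unseen (zero-day) behavior: anything sufficiently unlike the benign profile is surfaced, with the false-positive rate controlled by the quantile.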

DOI: https://doi.org/10.54216/JCHCI.100201

Vol. 10 Issue. 2 PP. 01-11, (2025)

Explainable Eye-Tracking-Based Cognitive Workload Classification for Interactive Visual Tasks: A Reproducible Human-Computer Interaction Study Using the Public COLET Dataset

Mahmoud A. Zaher, Nabil M. Eldakhly

Cognitive workload directly influences attention allocation, interaction efficiency, and error formation during human-computer interaction (HCI). Eye tracking provides a feasible, non-invasive source of evidence for estimating workload, since gaze behavior is strongly correlated with visual search, task processing, and decision effort. This paper explores explainable cognitive workload classification on the public COLET dataset: eye-tracking recordings of 47 subjects completing interactive visual-search tasks, with workload labels derived from NASA-TLX. Five supervised learning models are tested on binary and four-class problems, and the most successful setup is analyzed via SHAP-based feature attribution. In both tasks, boosting-based ensembles deliver the best predictive performance, with XGBoost scoring highest overall and on binary low-vs-high discrimination, within the best performance range reported for the original COLET benchmark. Feature attribution analysis shows that the most significant variables are gaze entropy, fixation duration, pupil changes, and saccadic movements. The results support the application of explainable gaze-based models to adaptive interfaces that respond to rising mental load by simplifying content presentation, varying pacing, or highlighting important information.
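The SHAP analysis mentioned above rests on Shapley values: each feature's attribution is its average marginal contribution across all subsets of the other features. A self-contained sketch that computes exact Shapley values by subset enumeration for a toy model; the feature names and weights are illustrative only, not taken from the paper:

```python
from itertools import combinations
from math import factorial

def shapley_values(predict, x, baseline):
    """Exact Shapley attribution: weighted average marginal contribution of
    each feature, with 'absent' features replaced by a baseline value."""
    n = len(x)

    def value(subset):
        z = [x[i] if i in subset else baseline[i] for i in range(n)]
        return predict(z)

    phi = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        total = 0.0
        for k in range(n):
            for s in combinations(others, k):
                # Shapley kernel weight for a coalition of size k
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += w * (value(set(s) | {i}) - value(set(s)))
        phi.append(total)
    return phi

# Toy linear "workload score" over three gaze features
# (gaze entropy, fixation duration, pupil change) -- illustrative only:
weights = [0.5, 0.3, 0.2]
predict = lambda z: sum(w * f for w, f in zip(weights, z))
phi = shapley_values(predict, x=[2.0, 1.0, 3.0], baseline=[0.0, 0.0, 0.0])
# For a linear model, phi[i] equals weights[i] * (x[i] - baseline[i])
```

Exact enumeration is exponential in the number of features; practical SHAP implementations approximate these values, but the linear case above lets the result be checked by hand.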

DOI: https://doi.org/10.54216/JCHCI.100202

Vol. 10 Issue. 2 PP. 12-22, (2025)

Multimodal Cognitive Workload Recognition in Human-Computer Interaction Using Biosignals and Interaction Traces

Andino Maseleno, Kharchenko Raisa, Rahul Chauhan

Reliable recognition of cognitive workload requires methods that combine physiological indicators with interaction traces while coping with limited data and inconsistent feature sets. This paper develops a multimodal fusion system that uses weight-based reliability assessment to identify three workload levels from publicly accessible Cognitive Lab data. The workload-focused subset includes N-Back and mental subtraction tasks, together with electroencephalography (EEG), functional near-infrared spectroscopy (fNIRS), electrocardiography, electrodermal activity, respiration, accelerometry, gaze descriptors, and keyboard-mouse interaction indicators. The method trains each modality separately after multidimensional variable reduction, uses gradient-boosted learners whose validation log-loss scores estimate branch reliability, and combines posterior probabilities using normalized reliability weights. This design preserves distinct modality structures while controlling unpredictable branch effects. The study evaluates single-modality learners against three fusion strategies: direct early fusion, uniform late fusion, and the proposed fusion rule. The proposed model achieves its best performance with 0.842 accuracy and 0.836 macro F1-score on the three-class workload task, which includes the medium-load category that is hardest to differentiate. Class-wise and sensitivity assessments showed that interaction traces together with fNIRS features produced the smallest improvement to the system, and that moderate reliability temperatures yielded the most stable fusion performance. Feature attribution highlights cursor-velocity variability, fNIRS oxygenation slope, EEG theta-band power, fixation-duration statistics, and phasic electrodermal activity as the primary discriminative signals. The findings indicate that multimodal workload estimation benefits from branch-specific modeling built on reliability-based decision fusion, supporting adaptive learning systems that must respond to rising cognitive demands.
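The reliability-weighted late fusion described above can be sketched compactly: each modality's validation log-loss is mapped to a normalized weight (lower loss, higher reliability, with a temperature controlling sharpness), and per-modality class posteriors are averaged under those weights. A minimal sketch under assumed values; the losses, posteriors, and temperature below are illustrative, not the paper's:

```python
import math

def reliability_weights(val_log_losses, temperature=1.0):
    """Softmax over negative validation log-losses: lower loss -> higher
    weight; temperature controls how sharply reliability concentrates."""
    exps = [math.exp(-loss / temperature) for loss in val_log_losses]
    total = sum(exps)
    return [e / total for e in exps]

def fuse(posteriors, weights):
    """Reliability-weighted average of per-modality class posteriors."""
    n_classes = len(posteriors[0])
    fused = [sum(w * p[c] for w, p in zip(weights, posteriors))
             for c in range(n_classes)]
    s = sum(fused)
    return [f / s for f in fused]  # renormalize to a distribution

# Three branches (e.g. EEG, interaction traces, fNIRS) predicting
# low/medium/high workload; all numbers are stand-ins:
losses = [0.4, 0.9, 0.6]
w = reliability_weights(losses, temperature=0.5)
p = fuse([[0.7, 0.2, 0.1], [0.3, 0.4, 0.3], [0.5, 0.3, 0.2]], w)
label = max(range(3), key=lambda c: p[c])  # argmax fused class
```

A very low temperature collapses fusion onto the single most reliable branch, while a very high one approaches uniform late fusion; the abstract's observation that moderate temperatures are most stable sits between those extremes.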

DOI: https://doi.org/10.54216/JCHCI.100203

Vol. 10 Issue. 2 PP. 23-35, (2025)