Journal of Intelligent Systems and Internet of Things

Journal DOI

https://doi.org/10.54216/JISIoT


ISSN (Online): 2690-6791
ISSN (Print): 2769-786X

Blockchain-Enabled Beamforming Optimization in 6G-IoT Using ConvMarkov and Laplacian Eigenmaps

Saleh Ali Alomari

In the increasingly fast-paced world of 6G-IoT networks, optimized beamforming techniques are key to improving signal strength, latency, and quality of service. This work presents a new paradigm for beamforming optimization that specifically addresses the dynamic environments and high computational costs that limit existing approaches. The long training times of traditional methods, together with security threats, make them poorly suited to real-time applications. Data is collected from 6G-IoT networks; Laplacian Eigenmaps is then applied for feature extraction, temporal modelling, and dimensionality reduction; ConvMarkov is used for model development; RC4 encryption secures data exchange; and blockchain supports data logging and transparency. This combination of deep learning techniques and encryption methods yields a substantial boost in beamforming efficiency, flexibility, and security. The proposed beamforming optimization achieved 97% accuracy with significant gain improvements, as indicated by an ROC curve (AUC = 0.9970) and a precision-recall curve. The training loss stabilized below 0.01, while the validation loss fluctuated above 0.1, suggesting minor overfitting. The main contributions are improved optimization under real-time network conditions together with data integrity and privacy, making the framework a strong candidate for future 6G systems.
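As a purely illustrative sketch of the dimensionality-reduction step named in the abstract, the snippet below applies Laplacian Eigenmaps (scikit-learn's SpectralEmbedding) to a placeholder matrix of channel features; the feature dimensions and neighbourhood size are assumptions, not the paper's published configuration.

```python
# Illustrative sketch only: Laplacian Eigenmaps (spectral embedding) as a
# dimensionality-reduction step, NOT the authors' published pipeline.
import numpy as np
from sklearn.manifold import SpectralEmbedding

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 64))       # placeholder: 500 samples of 64-dim channel features

# Laplacian Eigenmaps: build a k-NN affinity graph and embed into a low-dim space
embedder = SpectralEmbedding(n_components=8, n_neighbors=10, affinity="nearest_neighbors")
X_low = embedder.fit_transform(X)    # reduced features fed to the downstream model

print(X_low.shape)                   # (500, 8)
```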


Doi: https://doi.org/10.54216/JISIoT.180201

Vol. 18 Issue. 2 PP. 01-19, (2026)

A New Descriptor for Improving Lightweight Blockchain Environment Using a Hybrid GWO-Levy-GRU Framework for Nonce Discovery

Rasha Hani Salman , Hala Bahjat Abdul Wahab

Blockchain technology has recently emerged as a fundamental pillar of decentralized and secure systems. However, many Proof-of-Work (PoW) algorithms suffer from challenges, including their inefficiency in discovering nonce values due to their reliance on random attempts, which consume significant resources, energy, and time, making them difficult to use in lightweight blockchain environments, especially resource-limited settings such as mobile devices. The main goal of this paper is to introduce a smart system that replaces random guessing with a more intelligent, predictive approach using deep learning models such as CNN2D, GRU, LSTM, and hybrid models. The Grey Wolf Optimizer (GWO), enhanced with random Lévy jumps, is also used, in addition to improved clustering via a genetic algorithm. Applying the system to health data across three difficulty levels (4, 6, and 8), the results showed that the intelligent neural model was the most stable and accurate, achieving the lowest error values and the highest generalization ability, with a maximum error of 0.0136 at the highest difficulty level (8). The hybrid GA–KMeans algorithm demonstrated high efficiency in improving clustering accuracy, achieving the highest similarity index (0.9980) and the lowest Davies-Bouldin index (0.0000), which plays a significant role in guiding the search efficiently and effectively. The CNN2D model also achieved ideal numerical results but is prone to overfitting, while the GRU model provided an efficient balance between stability and accuracy. Other hybrid models, such as GRU+CNN, showed excellent but variable performance. The proposed system proves to be an efficient, intelligent, and low-cost alternative to the random approach for nonce discovery in lightweight blockchain environments.
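For readers unfamiliar with the "random Lévy jumps" mentioned above, the sketch below shows one plausible way to add a Lévy-flight perturbation (Mantegna's algorithm) to a standard GWO position update; the step scale and update form are assumptions, not the authors' exact formulation.

```python
# Illustrative sketch: a Lévy-flight perturbation of the standard GWO position
# update, as one plausible reading of "GWO enhanced with random Lévy jumps".
import numpy as np
from math import gamma

def levy_step(dim, beta=1.5, rng=np.random.default_rng()):
    # Mantegna's algorithm for Lévy-distributed step lengths
    sigma = (gamma(1 + beta) * np.sin(np.pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0, sigma, dim)
    v = rng.normal(0, 1, dim)
    return u / np.abs(v) ** (1 / beta)

def gwo_levy_update(wolf, alpha, beta_w, delta, a, rng=np.random.default_rng()):
    # Standard GWO move toward the three best wolves ...
    moves = []
    for leader in (alpha, beta_w, delta):
        r1, r2 = rng.random(wolf.size), rng.random(wolf.size)
        A, C = 2 * a * r1 - a, 2 * r2
        moves.append(leader - A * np.abs(C * leader - wolf))
    new_pos = np.mean(moves, axis=0)
    # ... plus a small Lévy jump to help escape local optima
    return new_pos + 0.01 * levy_step(wolf.size, rng=rng) * (new_pos - wolf)
```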


Doi: https://doi.org/10.54216/JISIoT.180202

Vol. 18 Issue. 2 PP. 20-35, (2026)

A New Strategy for Exploration and Area Coverage Using Swarm Robots by Enhancing the Pelican Optimization Algorithm

Dena Kadhim Muhsen , Ahmed T. Sadiq , Firas Abdulrazzaq Raheem

Autonomous area coverage and exploration of unknown environments by swarm robots is one of the challenges in the robotics domain. This paper proposes a new strategy for area coverage in two parts: first, enhancing the Pelican Optimization Algorithm (POA) so that swarm robots can explore an unknown area; second, merging several algorithms with the proposed POA, including Timed Elastic Band (TEB) as a local planner for obstacle avoidance, Simultaneous Localization and Mapping (SLAM), and the You Only Look Once version 8 nano (YOLOv8n) model for person detection. The proposed POA successfully monitored a large area and achieved a high exploration ratio in minimal time. In this work, the new strategy is applied to a robot warehouse environment, using a swarm of robots to explore the area and find targets: employees suffocated by the effects of chemical pollution. Simulation and real-world tests of the new strategy were carried out in the Robot Operating System (ROS) using the TurtleBot3 robot. The total exploration and detection time is lower with POA, while the coverage ratio is the largest, compared with the original RRT exploration algorithm in empty, small, and large environments, respectively.


Doi: https://doi.org/10.54216/JISIoT.180203

Vol. 18 Issue. 2 PP. 36-59, (2026)

Power Consumption Prediction Using a CNN-LSTM-Attention Hybrid Deep Learning Model

Nebras Jalel Ibrahim , Samah Faris Kamil , Ghasaq Saad Jameel

Reducing energy losses and increasing power grid efficiency require accurate prediction of power consumption, and accurate prediction of future energy consumption requires time series data. To overcome the shortcomings of conventional techniques for forecasting energy consumption in India for the period from 2 January 2019 to 23 May 2020, we used an attention mechanism, which remains relatively underexplored in this setting. In this paper, we propose a new approach for predicting energy consumption that combines convolutional neural networks (CNNs) for local feature extraction, long short-term memory (LSTM) to capture long-term temporal dependencies, and attention mechanisms to counter the information loss caused by very long input time series. High-dimensional features are first extracted from the input data using a one-dimensional CNN layer, and temporal correlations within historical sequences are then captured using an LSTM layer. Finally, an attention mechanism is applied to optimize the weighting of the LSTM outputs, strengthen the influence of important information, and enhance the prediction model as a whole. This integration improves the model's ability to represent complex spatio-temporal patterns. The mean absolute error (MAE) and root mean square error (RMSE) are used to assess the performance of the proposed model. The results show that the CNN-LSTM-Attention model outperforms conventional hybrid CNN-LSTM and LSTM models, demonstrating superior performance across a range of prediction scenarios. By supporting more reliable grid management, proactive intervention methods, and predictive maintenance, these developments help reduce load imbalances and energy waste in India. Future developments could extend the proposed model to other time series prediction domains.
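A minimal sketch of the described CNN-LSTM-Attention arrangement, with a Conv1D front end, an LSTM returning full sequences, and an attention layer re-weighting its outputs, is given below; the layer sizes, window length, and optimizer are assumptions rather than the paper's exact configuration.

```python
# Illustrative sketch of a CNN-LSTM-Attention forecaster (layer sizes are
# assumptions; the paper's exact configuration is not reproduced here).
from tensorflow.keras import layers, Model

def build_model(window=24, n_features=1):
    inp = layers.Input(shape=(window, n_features))
    x = layers.Conv1D(64, kernel_size=3, padding="same", activation="relu")(inp)  # local features
    x = layers.LSTM(64, return_sequences=True)(x)     # long-term temporal dependencies
    attn = layers.Attention()([x, x])                 # re-weight the LSTM outputs
    x = layers.GlobalAveragePooling1D()(attn)
    out = layers.Dense(1)(x)                          # next-step consumption
    model = Model(inp, out)
    model.compile(optimizer="adam", loss="mse", metrics=["mae"])
    return model

model = build_model()
model.summary()
```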


Doi: https://doi.org/10.54216/JISIoT.180204

Vol. 18 Issue. 2 PP. 60-71, (2026)

Developing a Fast Hybrid Metaheuristic Algorithm to Enhance the Efficiency of Resource-Constrained Applications

Alaa Abdalqahar Jihad , Ahmed Subhi Abdalkafor , Sameeh Abdulghafour Jassim

The rapid development of intelligent computing has led to Internet of Things (IoT) applications and embedded devices suffering from severe constraints on energy, processing, and memory. This calls for fast and lightweight algorithms that maintain performance accuracy without draining resources or affecting response time. This paper presents a new hybrid metaheuristic algorithm that combines the advantages of four optimization algorithms to achieve efficient results and reduce computational complexity without compromising output quality. Experiments demonstrate significant improvements in performance and execution time compared to traditional algorithms, in addition to the algorithm's ability to scale and handle diverse workloads. The lowest improvement of the proposed algorithm compared to other algorithms was approximately 25.7%. This algorithm opens up prospects for effective applications in smart systems in urban and industrial areas.


Doi: https://doi.org/10.54216/JISIoT.180205

Vol. 18 Issue. 2 PP. 72-84, (2026)

Multi-Variable Markov Framework for Predicting Battery Depletion in Wireless Sensor Networks

Deden Ardiansyah , Moestafid , Teddy Mantoro

Wireless Sensor Networks (WSNs) support intelligent data acquisition systems across environmental monitoring, industrial automation, and smart cities. As a fundamental enabler of the Internet of Things (IoT), WSNs rely heavily on battery-powered sensor nodes for sustained operation in dynamic and often remote environments. However, predicting battery lifetime in WSNs remains a critical challenge due to the complex interplay between environmental conditions and operational behaviors. Conventional energy models often fail to consider the simultaneous influence of temperature, humidity, and data traffic intensity on battery depletion rates. This study proposes a battery lifetime prediction model based on a Markov framework integrated with an exponential energy consumption function to address this issue. The model incorporates three primary variables (ambient temperature, relative humidity, and data movement) to simulate energy usage dynamically. The framework calculates transition probabilities and energy load based on environmental states, enabling accurate forecasting. Additionally, the model evaluates the impact of different battery chemistries (Ni-MH, LiPo, Li-ion, and Alkaline) on lifespan performance across varying environmental scenarios. Simulation results reveal that temperature and humidity significantly influence energy depletion, while data transmission intensity plays a supporting role in high-traffic cases. LiPo and Li-ion batteries demonstrate superior performance and stability, especially under extreme environmental conditions. This study contributes a novel multi-variable model that bridges physical sensing environments with predictive battery analytics. The findings provide a foundation for strategic energy planning and adaptive deployment of WSNs in sustainability-critical applications.
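The sketch below illustrates the general idea of a Markov chain over environmental states driving an exponential energy-consumption function; the states, transition matrix, and drain coefficients are invented placeholders, not the calibrated values from the study.

```python
# Illustrative sketch: Markov chain over environmental states with an
# exponential energy-consumption function (all parameters are assumptions).
import numpy as np

states = ["cool_dry", "hot_dry", "hot_humid"]
P = np.array([[0.7, 0.2, 0.1],          # transition probabilities between states
              [0.2, 0.6, 0.2],
              [0.1, 0.3, 0.6]])
# Per-state (temperature C, humidity fraction, traffic fraction) placeholders
params = {"cool_dry": (20.0, 0.3, 0.2), "hot_dry": (35.0, 0.3, 0.4),
          "hot_humid": (35.0, 0.8, 0.5)}

def simulate_lifetime(capacity_mAh=2000.0, base_mA=1.0, hours=10000, seed=0):
    rng = np.random.default_rng(seed)
    state, remaining = 0, capacity_mAh
    for hour in range(hours):
        T, H, traffic = params[states[state]]
        # Exponential drain model: E = base * exp(k_T*T + k_H*H + k_D*traffic)
        drain = base_mA * np.exp(0.01 * T + 0.5 * H + 0.8 * traffic)
        remaining -= drain
        if remaining <= 0:
            return hour
        state = rng.choice(3, p=P[state])
    return hours

print("Predicted lifetime (hours):", simulate_lifetime())
```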


Doi: https://doi.org/10.54216/JISIoT.180206

Vol. 18 Issue. 2 PP. 85-98, (2026)

Exposing Image Tampering: A Deep Learning Approach to Copy-Move Forgery Detection for Secure Digital Image Forensics

Nadia Mahmood Ali , Sameer Abdulsttar Lafta , Amaal Ghazi Hamad Rafash

Nowadays, with the worldwide proliferation of mobile devices and internet access, and given their low prices relative to their high capabilities, images have become one of the most common ways of transmitting information between users. Advances in image processing and editing tools have simplified the process of editing and changing photographs, whether in magazines, newspapers, scientific journals, social media, or elsewhere on the Internet. As a result, manipulated photographs that misrepresent the truth are widespread, whether deliberate or inadvertent. We propose a method that uses a deep-learning-based convolutional neural network to detect instances of copy-move forgery in images, which can help ensure data authenticity in digital forensic investigations. Our method is intended to improve digital evidence integrity by detecting complicated changes quickly and precisely, and it can support cybersecurity applications such as anti-fraud systems, fake news detection, and social media forensics. The experimental findings demonstrate that the suggested approach is capable of detecting forgery even under multiple copies and post-processing operations. The MICC-F2000 dataset, composed of 2,000 images (700 tampered and 1,300 original), is used for both training and testing. The findings indicate a testing accuracy of 98.00% and a training accuracy of 99.17%.


Doi: https://doi.org/10.54216/JISIoT.180207

Vol. 18 Issue. 2 PP. 99-110, (2026)

Criminal Activity Classification in Surveillance Videos Using Deep Learning Models

Raed Majeed , Hiyam Hatem

Detecting and identifying crimes in real time is a vital aspect of public safety. Traditional systems rely on human monitoring of surveillance cameras, and such video surveillance workflows are ineffective, time-consuming, and prone to mistakes, so automated solutions are much needed. The main goal is to use convolutional neural networks (CNNs) to efficiently examine surveillance video footage. This work presents a crime detection system based on deep learning. The study utilizes the UCF-Crime dataset, and four deep learning models, ResNet50, EfficientNetB2, Xception, and a custom CNN, were upgraded, trained, and tested. To guarantee best model performance, the suggested approach required careful dataset preparation, pre-processing, and strategic data separation. Through fine-tuning, each model addressed the constraints of conventional techniques and enhanced feature extraction and classification accuracy. With performance measures of 99.53% accuracy, 99.07% precision, 98.43% recall, and a 98.69% F1-score, experimental findings show the superiority of the suggested system. These findings reveal the system's high dependability in detecting and classifying criminal events, far surpassing other CNN-based approaches. The model runs at an average inference speed of 30 ms per frame on CPU, with a lightweight model size of around 20 MB. These results demonstrate the system's scalability, efficiency, and strong potential for intelligent surveillance applications, and show how scalable and effective deep learning models can transform crime detection in surveillance systems to support public safety.


Doi: https://doi.org/10.54216/JISIoT.180208

Vol. 18 Issue. 2 PP. 111-121, (2026)

Fault Monitoring in Transmission Lines Using Modular Neural Networks in Simulated Smart Grids

Sánchez-Juárez J. R. , Aldana-Franco R. , Leyva-Retureta J. G. , Álvarez-Sánchez E. J. , López-Velázquez A. , Aldana-Franco F.

The transmission of energy is one of the main tasks of electrical engineering. Transmission lines are used for this purpose and are susceptible to various problems such as short circuits, overloads, open circuits, and complex faults. From the perspective of smart grids, one of the open challenges is to have autonomous systems that allow the detection, classification, and location of faults in transmission lines. Artificial Neural Networks (ANNs), in turn, are computational tools used in classification and control tasks across different plants and systems. There are several ways to solve problems using ANNs; one is modularity, a strategy that divides the problem into components that are easier to classify. Accordingly, a modular system is proposed that is composed of three ANNs: one for detection, one for classification, and one for the location of faults in transmission lines. A simulation model of a three-phase electrical power system was built in MATLAB/Simulink, employing a data transmission approach typical of smart grids. Supervised learning and the WEKA software were used for network training. Databases were created using the potential difference and line current, as well as the ground fault impedance. The database was developed through cases and mathematical models, and the performance of the networks was evaluated in the simulated model. The results show that the proposed model identifies all cases presented in the test stage (100%), which is better performance than a single neural network (81.25%) responsible for detecting, classifying, and locating faults.


Doi: https://doi.org/10.54216/JISIoT.180209

Vol. 18 Issue. 2 PP. 122-129, (2026)

Features Extraction Improvement for Facial Expression Recognition Using HOG and Machine Learning Techniques

Dhiaa M. Abed , Awab Qasim Karamanj , Thura J. Mohammed , Saja B. Attallah , Abusnina M. Mukhtar

Facial Expression Recognition (FER) is a vital aspect of human-computer interaction, with applications in healthcare, education, security, and affective computing. Even with the success of deep learning, the generalizability, interpretability, and efficiency of most systems remain problematic, especially in uncontrolled settings. In this study, we propose an enhanced feature extraction technique based on the Histogram of Oriented Gradients (HOG) in which the central difference operator, rather than the conventional forward difference, is used for gradient estimation. The modification improves gradient accuracy, reduces truncation error, and leads to more stable facial feature descriptors. The enhanced HOG is tested on five popular datasets (CK+, JAFFE, MMI, ExpW, and AffectNet) using three traditional Machine Learning (ML) classifiers: Support Vector Machine (SVM), K-Nearest Neighbors (KNN), and Random Forest (RF). Experimental results indicate consistent accuracy gains across all classifiers and datasets, with improvements reaching 7%–10%, and recall and F1-score also increasing markedly. RF registered the highest accuracy, 97.94% on CK+ and 95.48% on AffectNet, confirming its stability and dependability. This study shows how well mathematical refinement works with classical ML for FER. The proposed approach provides an interpretable, compact, and fast alternative to deep models, making it well suited to real-time and resource-limited applications.
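The central modification described above, replacing forward differences with central differences in the HOG gradient stage, can be illustrated as follows; the synthetic image and the [-1, 0, 1] kernel convention are assumptions for demonstration only.

```python
# Illustrative sketch: forward vs. central difference gradients, the
# modification described for the HOG descriptor (image is synthetic).
import numpy as np

img = np.random.default_rng(0).random((64, 64)).astype(np.float64)

# Forward difference: g[x] = I[x+1] - I[x]   (first-order accurate)
gx_fwd = np.zeros_like(img)
gx_fwd[:, :-1] = img[:, 1:] - img[:, :-1]

# Central difference: g[x] = (I[x+1] - I[x-1]) / 2   (second-order accurate,
# the [-1, 0, 1] kernel commonly used in HOG implementations)
gx_ctr = np.zeros_like(img)
gx_ctr[:, 1:-1] = (img[:, 2:] - img[:, :-2]) / 2.0

gy_ctr = np.zeros_like(img)
gy_ctr[1:-1, :] = (img[2:, :] - img[:-2, :]) / 2.0

magnitude = np.hypot(gx_ctr, gy_ctr)                           # per-pixel gradient magnitude
orientation = np.rad2deg(np.arctan2(gy_ctr, gx_ctr)) % 180     # unsigned orientations feed the HOG histogram
print(magnitude.shape, orientation.min(), orientation.max())
```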


Doi: https://doi.org/10.54216/JISIoT.180210

Vol. 18 Issue. 2 PP. 130-141, (2026)

A Hybrid Deep Learning and Fuzzy Logic Framework for PM10 Concentration Forecasting in Istanbul

Rusul Al-bayati , Ülkü Alver Şahin , Hüseyin Toros

Air pollution, especially atmospheric particulate matter with aerodynamic diameters smaller than 10 micrometers (PM10), is one of the persistent and serious environmental challenges in urban areas. Its consequences range from negative human health effects to broader ecological disruptions. Given the increasing need for accurate and trustworthy forecasting tools in air quality assessment, we propose a new hybrid modeling platform that merges the sequential pattern recognition ability of Long Short-Term Memory (LSTM) neural networks with fuzzy logic reasoning. The two approaches complement each other: the LSTM captures the complex temporal dynamics of air pollutant behavior, while fuzzy reasoning handles the inherent uncertainties in meteorological and environmental data. The model was trained on a well-structured, multi-variable dataset of hourly air quality and meteorological observations for five years (2019–2023) measured in Istanbul and then tested on January 2024 data. The hybrid approach outperformed the standalone models across all tested settings, reaching an accuracy of 98% at the Aksaray traffic station, whereas the standalone LSTM (97%) and fuzzy logic (94%) models performed lower. Importantly, it identified fine-grained periodicity and pollution peaks with high fidelity and demonstrated robustness across diverse settings such as traffic-dense, industrial, rural, and urban zones. These results establish the hybrid LSTM-Fuzzy Logic model as a trusted and robust forecasting tool for predicting PM10 concentrations, providing valuable assistance to environmental policy-makers, urban planners, and public health authorities in efforts to reduce air pollution and protect the health of the population.


Doi: https://doi.org/10.54216/JISIoT.180211

Vol. 18 Issue. 2 PP. 142-156, (2026)

Design and Optimization of Energy-Efficient Wireless Sensor Networks for Industrial Automation

Maha A. Hutaihit , Samir I. Badrawi , Haider Makki Alzaki , Riyadh Khlf Ahmed , Marwa Falah Hasan

To enhance the efficiency of edge-integrated Industrial IoT (IIoT) networks, this paper proposes a deep learning-based resource-scheduling framework for optimized asset booking in Wireless Sensor Networks (WSNs). The novelty of this work lies in the integration of a hybrid Convolutional Neural Network (CNN) and Gated Recurrent Unit (GRU) model, which enables intelligent allocation of computational resources based on real-time asset demand characteristics. The proposed model is evaluated using the Intel Berkeley WSN dataset and demonstrates superior performance in terms of latency reduction, execution time, and resource utilization compared to conventional approaches such as Genetic Algorithm (GA), Improved Particle Swarm Optimization (IPSO), Long Short-Term Memory (LSTM), and Bidirectional Recurrent Neural Network (BRNN). With a maximum efficiency of 99.48% and the lowest observed average delay, the model proves effective for real-time industrial automation scenarios. This research contributes to the development of scalable, energy-efficient, and responsive WSN architectures by leveraging deep learning for asset booking in edge-IoT environments.


Doi: https://doi.org/10.54216/JISIoT.180212

Vol. 18 Issue. 2 PP. 157-168, (2026)

Hybrid Neural Networks and Machine Learning for Detection of Diabetic Retinopathy

Waleed Khalid Al-zubaidi , Shokhan M. Al-Barzinji , Zaid Sami Mohsen , Omar Muthanna Khudhur

Diabetic retinopathy (DR) is one of the most common causes of blindness in the world, and early detection plays an important role in therapy. In this paper, we introduce a hybrid framework that merges sophisticated image processing techniques with deep learning models for automated DR detection from retinal fundus images. The pipeline starts with extensive preprocessing, which includes bilateral filtering for noise reduction, artifact removal, adaptive contrast enhancement, and precise segmentation with the U-Net architecture. To increase model robustness, random rotation augmentation was used to mimic different imaging positions. GLCM analysis is used to extract texture features capturing important lesion-related patterns, and deep features are extracted using a fine-tuned EfficientNet-B0 model. The hybrid feature set is then modelled by a Support Vector Machine (SVM) with a radial basis function kernel and optimized with cross-validation and hyperparameter tuning. Experiments show that our model handles image heterogeneity well and yields a high level of accuracy in diagnosing and grading DR severity stages.
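To make the hand-crafted half of the feature set concrete, the sketch below computes GLCM texture properties with scikit-image and fits an RBF-kernel SVM on placeholder patches; the EfficientNet-B0 deep features the paper concatenates are omitted, and all data and parameters are illustrative assumptions.

```python
# Illustrative sketch: GLCM texture features feeding an RBF-kernel SVM.
# In the paper these are concatenated with EfficientNet-B0 deep features;
# that part is omitted here. Requires scikit-image >= 0.19.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC

def glcm_features(gray_u8):
    glcm = graycomatrix(gray_u8, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

rng = np.random.default_rng(0)
images = rng.integers(0, 256, size=(40, 64, 64), dtype=np.uint8)  # placeholder fundus patches
labels = rng.integers(0, 2, size=40)                              # placeholder DR labels

X = np.vstack([glcm_features(im) for im in images])
clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X, labels)
print(clf.score(X, labels))
```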


Doi: https://doi.org/10.54216/JISIoT.180213

Vol. 18 Issue. 2 PP. 169-186, (2026)

Residual Graph Convolutional Networks for Improving Rumor Detection from Social Media Texts

Vanitha Siddheswaran , Prabahari Raju

The Internet and social media have become significant platforms for sharing real-time information, with rumors significantly affecting billions of people's perceptions. Rumor recognition is therefore one of the most challenging tasks on social media platforms. Numerous Deep Learning (DL) models have been developed to extract linguistic characteristics from short-text tweets for rumor prediction. However, these models struggle to capture the intricate spatiotemporal relationships present in tweet interactions. To address these issues, Bidirectional Encoder Representations from Transformers with Attention-based Balanced Spatial-Temporal Graph Convolutional Networks (BERT-ABSTGCN) was used. This model incorporates a Spatial-Temporal Attention Mechanism (STAM) and a Spatial-Temporal Convolution Module (STCM) to effectively model the spatiotemporal dependencies within tweet interactions and enhance rumor detection. However, it suffers from a degradation problem due to convergence issues. A popular solution is Residual Learning (RL), which introduces identity mappings to speed up training and enhance gradient propagation; yet conventional RL is restricted to layer-wise refinement within a single task, limiting its ability to capture broader dependencies. To address this, the proposed work adds Cross-Residual Learning (CRL) to BERT-ABSTGCN, yielding BERT with Attention-based Balanced Spatial-Temporal Residual Graph Convolutional Networks (BERT-ABSTRGCN) for efficient rumor detection and stance classification. The CRL of BERT-ABSTRGCN enables joint learning across multiple tasks, such as rumor detection and stance classification, through cross-connections. CRL establishes direct connections between shallow and deep feature representations, mitigating the vanishing gradient issue; the fitted residual mappings and shortcut connections supply the network with the needed information and lower the probability of model degradation. BERT-ABSTRGCN effectively identifies rumors with different stances toward specific social media posts, thereby helping to prevent their spread. Experimental evaluations show that BERT-ABSTRGCN achieves 95.62% accuracy on the PHEME dataset and 90.15% on Mendeley's COVID-19 rumor dataset, significantly surpassing traditional models.


Doi: https://doi.org/10.54216/JISIoT.180215

Vol. 18 Issue. 2 PP. 205-219, (2026)

Deep Fake Image Detection Using Ensemble Approach

Vijay Madaan , Raghad Tohmas Esfandiyar , Shahad Hussein Jasim , Oday Ali Hassen , Neha Sharma , Ansam A. Abdulhussein

This paper offers a comprehensive framework for real-or-fake image classification based on three classifiers: a standard Convolutional Neural Network (CNN), an EfficientNetV2 model based on transfer learning, and a re-trained GAN discriminator, addressing the challenges of deepfake detection. The CNN, with four convolutional blocks and dropout regularization, offers computational efficiency (87.2% accuracy, 15 ms/image inference), while EfficientNetV2 utilizes pre-trained ImageNet weights to achieve state-of-the-art performance (94.7% accuracy, AUC: 0.98) through hierarchical feature extraction. The fine-tuned, adversarially pre-trained GAN discriminator demonstrates niche strength in detecting synthetic artifacts (91% recall for GAN-generated fakes). Training used augmented sets (rotation, shifts, and shear) to boost generalization, with binary cross-entropy loss optimization and early stopping controlled through validation. Normalized test-set validation confirmed EfficientNetV2's capability at balancing recall (94%) with precision (95%), although the GAN discriminator led in adversarial resilience. Blending all three models, an ensemble achieved the highest accuracy (96.1%) by exploiting their complementary strengths. Computational baselines showed trade-offs: EfficientNetV2's accuracy versus its resource cost (2.5-hour training), the CNN's edge compatibility, and the GAN discriminator's artifact-sensitive specialization. The work encourages hybrid architectures and ensemble approaches to offset single-model vulnerabilities, offering a flexible toolkit for combating deepfakes while emphasizing the need for hardware-aware deployment techniques and ongoing adaptation to evolving synthesis approaches.
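A soft-voting fusion of the three classifiers' probability outputs, one simple way to realize the ensemble described above, might look like the sketch below; the weights and probabilities are placeholders, not the paper's reported configuration.

```python
# Illustrative sketch: soft-voting ensemble over three real/fake classifiers.
# The per-model probabilities below are placeholders, not the paper's outputs.
import numpy as np

def ensemble_predict(p_cnn, p_effnet, p_gan_disc, weights=(0.3, 0.4, 0.3)):
    """Each argument: array of P(fake) per image from one model."""
    probs = np.stack([p_cnn, p_effnet, p_gan_disc])        # shape (3, n_images)
    fused = np.average(probs, axis=0, weights=weights)      # weighted soft vote
    return (fused >= 0.5).astype(int), fused                # 1 = fake

rng = np.random.default_rng(1)
p1, p2, p3 = rng.random(5), rng.random(5), rng.random(5)
labels, scores = ensemble_predict(p1, p2, p3)
print(labels, np.round(scores, 2))
```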


Doi: https://doi.org/10.54216/JISIoT.180214

Vol. 18 Issue. 2 PP. 187-204, (2026)

Assessing Quality Attributes of Microservices in Hadoop and Spark Clusters: A Performance Benchmarking Approach in Dockerized and Non-Dockerized Architectures

Saad Hussein Abed Hamed , Mondher Frikha , Heni Bouhamed

The rapid expansion of big data has accelerated the adoption of distributed computing frameworks such as Apache Hadoop and Apache Spark, enabling efficient large-scale data processing. While Spark's in-memory computation model significantly enhances performance compared to Hadoop's traditional MapReduce, the deployment architecture, whether Dockerized or non-Dockerized, plays a crucial role in performance, scalability, and resource management. This study evaluates the impact of containerized and non-containerized multi-node cluster architectures on the performance of Hadoop and Spark, utilizing standardized workloads such as WordCount and TeraSort. Key performance metrics, including execution time, throughput, and resource utilization, are analyzed across various configurations with parameter tuning. Beyond pure performance benchmarking, the study also assesses the quality attributes of microservices in big data environments, focusing on scalability, maintainability, fault tolerance, and resource efficiency. The comparative analysis between monolithic and microservice-based architectures highlights the advantages of modularity and independent scaling inherent to microservices. Experimental findings indicate that Spark outperforms Hadoop on small to medium-scale workloads, while Hadoop exhibits superior robustness for processing extremely large datasets. Dockerized deployments offer better resource isolation and management flexibility, whereas non-Dockerized setups demonstrate reduced overhead under certain configurations. These insights contribute to optimizing deployment strategies and architectural decisions for microservices-based big data processing frameworks.


Doi: https://doi.org/10.54216/JISIoT.180216

Vol. 18 Issue. 2 PP. 220-238, (2026)

Exploring the Relationship between Social Network Structures and Emotional Contagion using NLP and Network Science

Prapti Pandey , Vivek Shukla , Rohit Miri , Praveen Chouksey , Parul Dubey , Rohit Raja

Natural Language Processing (NLP) and Network Science were combined to study emotional contagion dynamics in social media networks. We simulated the diffusion of emotions among users on a synthetic interaction network using sentiment-labeled Twitter data and a graph-based model. We explored the influence of graph metrics, including centrality and clustering coefficient, on emotion propagation and stability. The findings show that emotion intensity converges through the network and that both weak coupling of central nodes and moderate cluster structures dampen the spread of emotion. A community-level analysis reveals further nuances, such as differences in emotional polarity between communities. Our work improves the understanding of how emotional behavior in online environments can be characterized using semantic measures, which offer a means to assess the relevance of online information and the interconnections among emotional responses.


Doi: https://doi.org/10.54216/JISIoT.180217

Vol. 18 Issue. 2 PP. 239-257, (2026)

A Two-Stage Hybrid AI Framework for Robust and Real-Time Driver Drowsiness Detection

Gowrishankar Shiva Shankara Chari , Jyothi Arcot Prashant

Driver drowsiness detection is an important aspect of intelligent transportation systems that aims to reduce fatigue-related accidents. Existing schemes based on threshold methods or deep learning models often suffer from limited flexibility, computational efficiency, or real-time capability. This paper presents a two-stage hybrid framework for driver drowsiness detection. The first stage uses a fuzzy-logic-based approach applied to physiological measures, facial features, head position, blink duration, and eye movements to produce a lightweight, adaptive analysis of driver sleepiness. The second stage consists of a hybrid quantum-classical neural network (HQCNN), in which convolutional neural network (CNN) layers extract spatial features and quantum fully connected (QFC) components apply entanglement-based transformations to improve feature characterization and classification accuracy. Experimental results validate the effectiveness of the proposed hybrid method, which reaches 94% accuracy, outperforms traditional CNNs, and retains real-time capability. The framework is designed to balance computational efficiency with classification and decision quality, making it suitable for real-time driver monitoring applications.
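As a rough illustration of what a lightweight fuzzy first stage could look like, the sketch below scores drowsiness from two cues with triangular membership functions and a simple max-based rule aggregation; the cue set, breakpoints, and aggregation rule are assumptions, not the authors' rule base.

```python
# Illustrative sketch: a lightweight fuzzy scoring stage over two drowsiness
# cues (blink duration, head tilt). Membership breakpoints are assumptions.
import numpy as np

def trimf(x, a, b, c):
    """Triangular membership function on [a, c] with peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0)

def drowsiness_score(blink_ms, head_tilt_deg):
    long_blink = trimf(blink_ms, 300, 600, 1000)      # long eyelid closures
    nodding    = trimf(head_tilt_deg, 10, 25, 45)     # forward head droop
    alert      = trimf(blink_ms, 0, 100, 300)
    # Max-based rule aggregation: drowsy if long blinks OR nodding, tempered by alertness
    drowsy = np.maximum(long_blink, nodding)
    return float(np.clip(drowsy * (1.0 - 0.5 * alert), 0.0, 1.0))

print(drowsiness_score(blink_ms=550, head_tilt_deg=5))    # mostly long blinks
print(drowsiness_score(blink_ms=120, head_tilt_deg=30))   # mostly nodding
```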


Doi: https://doi.org/10.54216/JISIoT.180218

Vol. 18 Issue. 2 PP. 258-272, (2026)

Advanced Deep Learning Model for Image Captioning Using Customized Vision Transformer with Global Optimization Algorithm

Suleman Alnatheer , Mohammed Altaf Ahmed

In the image-captioning field, the quality of the produced captions is vital for the effective communication of visual content. Image captioning is a core task that unites computer vision (CV) and natural language processing (NLP), aiming to produce descriptive captions for images. It is a two-fold procedure that depends on precise image perception and on language understanding that is both semantically and syntactically sound. Keeping up with the latest studies and results in image captioning is increasingly challenging owing to the growing amount of knowledge available on the topic. This work draws on deep learning (DL) to tackle the difficulties faced by individuals with visual impairments, aiming to improve their visual perception through advanced technologies. Traditionally, the visually impaired have relied on physical assistance and adaptive aids for understanding and navigating visual content; with the advent of DL, there is a unique opportunity to change this landscape. In this paper, we offer an Advanced Deep Learning Model for Image Captioning Using a Customized Transformer with a Global Optimization Algorithm (ADLIC-CTGOA). The foremost aim of the ADLIC-CTGOA model is the generation of effective textual captions for an input image. Initially, the ADLIC-CTGOA method employs a preprocessing phase that enhances both image and text data: images undergo noise removal and contrast enhancement to improve quality, while text is processed by removing numbers, converting to lowercase, and applying text vectorization. Next, a customized Swin Transformer is employed for feature extraction to capture fine-grained visual features from images. In addition, the BERT Transformer model is deployed for the image captioning process. To enhance performance, the chaotic Aquila optimization (CAO) technique is applied for parameter tuning. A wide range of simulation studies is executed to confirm the improved performance of the ADLIC-CTGOA system. The comparative analysis reports the superiority of the ADLIC-CTGOA model over recent approaches in terms of different evaluation measures.


Doi: https://doi.org/10.54216/JISIoT.180219

Vol. 18 Issue. 2 PP. 273-289, (2026)

Improving Pedestrian Walkways for Individuals with Disabilities Using Heuristic Search Based Parameter Tuning with Deep Transfer Learning Models

Reem Alshenaifi

Blind and visually challenged people face a range of practical issues when undertaking outside travel as pedestrians. In the last decade, various assistive devices have been investigated and developed to help people with disabilities move independently and safely. Anomaly detection on pedestrian paths for visually impaired individuals, using remote sensing (RS), is crucial for improving pedestrian traffic flow and safety. Engineers and researchers can create efficient methods and tools, leveraging computer vision (CV) and machine learning (ML), to recognize anomalies and alleviate possible security hazards on pedestrian walkways. With recent progress in deep learning (DL) and ML, researchers have increasingly framed image recognition as a classification problem. This paper proposes a Coati Optimization Algorithm-Based Parameter Tuning for Pedestrian Walkways with Transfer Learning Model (COAPT-PWTLM) technique. The main goal of the COAPT-PWTLM technique is to provide automatic detection on pedestrian walkways for people with disabilities using advanced models. Initially, median filtering (MF) is employed in the image pre-processing stage to remove noise from the input image data. Furthermore, the SqueezeNet1.1 model is utilized for feature extraction. For the classification process, the multi-layer autoencoder (MLAE) model is implemented. Finally, the modified update coati optimization algorithm (MUCOA) adjusts the hyperparameter range of the MLAE method optimally, resulting in improved classification performance. The experimental validation of COAPT-PWTLM is carried out on a benchmark image dataset, and the outcomes are evaluated under different measures. The experimental outcomes underline the advantage of the COAPT-PWTLM model over existing models.


Doi: https://doi.org/10.54216/JISIoT.180220

Vol. 18 Issue. 2 PP. 290-303, (2026)

Integrating Artificial Intelligence Driven Computer Vision Framework for Enhanced Sign Language Recognition in Hearing and Speech-Impaired People

Inderjeet Kaur , P. Udayakumar , B. Arundhati , M. V. Rajesh , Naif Almakayeel , Elvir Akhmetshin

Sign language (SL) detection and classification for deaf persons is an essential application of machine learning (ML) and computer vision (CV) techniques. It covers emerging systems that capture SL performed by signers and convert it into auditory or textual output. It is important to recognize that building an accurate and robust SL detection approach is very challenging because of factors such as occlusions and varying lighting conditions across hand actions and shapes; consequently, careful training and testing of the CV and ML models is required. A hand gesture detection method benefits hearing- and speech-impaired individuals by using a convolutional neural network (CNN) and a human-computer interface (HCI) to classify continuous SL signals. In this article, an Improved Fennec Fox Algorithm for Deep Learning-Based Sign Language Recognition in Hearing and Speaking Impaired People (IFFADL-SLRHSIP) technique is proposed. The main intention of the presented IFFADL-SLRHSIP technique is to enable effective communication between hearing- and speech-impaired persons and others using CV and artificial intelligence techniques. In the IFFADL-SLRHSIP model, an enhanced SqueezeNet model is used to capture the intricate patterns and nuances of SL gestures. For the SL classification process, a recurrent neural network (RNN) is used. To optimize model performance, the improved fennec fox algorithm (IFFA) is applied for parameter tuning, enhancing the model's precision and efficiency. The experimental outputs of the IFFADL-SLRHSIP algorithm are validated on an SL dataset. The simulation outcomes demonstrate the superior performance of the IFFADL-SLRHSIP approach across diverse measures.


Doi: https://doi.org/10.54216/JISIoT.180221

Vol. 18 Issue. 2 PP. 304-314, (2026)

DNA Sequence Identification via Biologically Guided Feature Engineering and Hybrid ML–LSTM Networks

Marwa Mawfaq Mohamedsheet Al-Hatab , Maysaloon Abed Qasim , Sinan S. Mohammed Sheet

The promoter is the part of DNA responsible for initiating RNA polymerase transcription of a gene; it is located upstream of the transcription start site. According to research, genetic promoters contribute significantly to many human diseases such as cancer, diabetes, and Huntington's disease. Therefore, promoter detection is a crucial task. In this study, a hybrid detection system is proposed that integrates biologically guided feature extraction with traditional machine learning (ML) algorithms, in addition to a Long Short-Term Memory (LSTM) network as a deep learning approach. The dataset used includes 106 nucleotide sequences. Results show that perfect performance across all metrics (accuracy, sensitivity, specificity, precision, and F1-score) was achieved when Naive Bayes was used as the classifier, reaching 100% with AUC = 1. The confusion matrix analyses and ROC curves confirm that the LSTM model achieved 100% training accuracy and 84.38% test accuracy. The architecture and performance of the proposed model make it applicable to IoT-based intelligent genomic and healthcare systems, enabling real-time and remote promoter detection.
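One common way to pair biologically motivated sequence features with a classical classifier is a k-mer bag-of-words representation fed to Naive Bayes, sketched below; the k-mer encoding, toy sequences, and labels are assumptions and may differ from the paper's feature engineering.

```python
# Illustrative sketch: k-mer bag-of-words features + Naive Bayes for promoter
# vs. non-promoter sequences. The k-mer representation is an assumption; the
# paper's biologically guided features may differ.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

def to_kmers(seq, k=4):
    # Slide a window of length k over the sequence and join the k-mers as "words"
    return " ".join(seq[i:i + k] for i in range(len(seq) - k + 1))

sequences = ["TATAATGCGCTAGCTAGGCTAT", "GCGCGCATATTTAGCCGGTTAA",
             "TTGACAATGCATATAATCGGCA", "CCGGAACCTTGGAACCGGTTCC"]
labels = [1, 0, 1, 0]                      # 1 = promoter (placeholder labels)

model = make_pipeline(CountVectorizer(analyzer="word"), MultinomialNB())
model.fit([to_kmers(s) for s in sequences], labels)
print(model.predict([to_kmers("TATAATCCGGAACCTTGGAACC")]))
```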


Doi: https://doi.org/10.54216/JISIoT.180222

Vol. 18 Issue. 2 PP. 315-326, (2026)

Network-Aware Vehicle Detection and Tracking Using Hybrid Deep Learning and Simulated GPS in UAV Systems

Mohanad Ali Meteab Al-Obaidi , Shajan Mohammed Mahdi , Mustafa R. Al-Saadi , Yasmin Makki Mohialden , Saba Abdulbaqi Salman

This study analyses a hybrid deep learning method for vehicle monitoring with drones, augmented with simulated GPS data to increase situational awareness and localization accuracy. The system combines the high detection speed of real-time YOLOv5 with the high recognition accuracy of a task-driven Faster R-CNN, which gives it a balanced performance profile well suited to aerial surveillance enforcement. Synthetic aerial scenes with randomly distributed vehicle density and simulated geolocation data were produced to mimic realistic monitoring conditions. Both models were applied to each scene, and their detections were combined by a voting scheme. The hybrid system achieved an accuracy of 1.00, a recall of 0.90, and an F1-score of 0.95, outperforming Faster R-CNN alone (F1-score: 0.89) across varied conditions. The novelty of the proposed research lies in combining dual-modality object detection (visual + spatial) with a GPS basis, which allows not only visual object detection but also object positioning. In contrast to previous approaches based on single-modality models that ignore geolocation data, the framework integrates object recognition with practical mapping. The suggested system is lightweight, economically feasible, and easily deployable, offering scalable real-time traffic tracking, smart city planning, and autonomous aerial surveillance.
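A box-level voting scheme between two detectors could be realized as in the sketch below, which keeps only boxes the two models agree on by IoU and averages them; the IoU threshold and example boxes are assumptions, not the paper's fusion rule.

```python
# Illustrative sketch: box-level voting between two detectors (e.g., YOLOv5
# and Faster R-CNN outputs). Keeps boxes the two models agree on by IoU;
# the 0.5 threshold and the box lists are assumptions.
import numpy as np

def iou(a, b):
    """Boxes as [x1, y1, x2, y2]."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def vote(boxes_a, boxes_b, thr=0.5):
    fused = []
    for a in boxes_a:
        for b in boxes_b:
            if iou(a, b) >= thr:
                fused.append(np.mean([a, b], axis=0))   # average the agreeing boxes
                break
    return np.array(fused)

yolo_boxes  = np.array([[10, 10, 50, 40], [80, 80, 120, 110]], dtype=float)
frcnn_boxes = np.array([[12, 12, 52, 42], [200, 200, 240, 230]], dtype=float)
print(vote(yolo_boxes, frcnn_boxes))        # only the first vehicle survives the vote
```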


Doi: https://doi.org/10.54216/JISIoT.180223

Vol. 18 Issue. 2 PP. 327-340, (2026)

A Systematic Review on Classification Techniques of Microorganisms: Challenges and Recommendations – Towards Medical Intelligent Systems

Marwa T. Albayati , Mohd Ezanee Bin Rusli , Moamin A. Mahmoud , Aws A. Abdulsahib , Mohammed F. Alomari , Sallar S. Murad

Microorganisms are commonly found in our daily living environments and play a crucial role in environmental pollution control, disease prevention and treatment, and food and drug production. To fully utilize the diverse functions of microorganisms, their analysis using intelligent systems is essential, since traditional analysis methods can be labor-intensive and time-consuming. As a result, image analysis using intelligent systems, i.e. machine learning or deep learning, has been introduced to improve efficiency. Deep learning network algorithms such as CNNs contain a stack of layers: the first and last are the input and output layers, and between them are hidden layers that extract and learn many image features. Recurrent neural network (RNN) algorithms combined with convolutional neural networks (CNNs) allow a series of images to be processed to extract crucial information, and these algorithms also help reduce image size and redundancy in microorganism images. According to previous studies, these algorithms are the most used to classify images of microorganisms. However, the classification of microorganism images presents several challenges; these include the need for robust algorithms due to varying application contexts and the presence of insignificant features, along with various analysis tasks that need to be addressed. The research summarizes significant advancements that tackle these challenges through deep learning and machine learning methods. Current obstacles, gaps in knowledge, unresolved issues, limitations, and difficulties in classification techniques are also discussed.


Doi: https://doi.org/10.54216/JISIoT.180224

Vol. 18 Issue. 2 PP. 341-360, (2026)

Deep Neural Network Graph with Reinforcement Learning for Test Case Prioritization

Shankar Ramakrishnan , E. K. Girisan

Recently, deep learning (DL) models have increasingly been used in Test Case Prioritization (TCP) tasks, combining partial and imperfect test case (TC) information into accurate prediction models. Various DL algorithms have been created to improve TC failure prediction and prioritization in continuous integration (CI) settings. Among them, the Deep Reinforcement Prioritizer (DeepRP) model is built on Deep Reinforcement Learning (DRL) and a Deep Neural Network (DNN) for efficient TCP on huge test suites. However, its labelling task is interrupted early, making it difficult to learn TC features for unlabeled training TCs under limited resources. To solve this, the Deep Graph Reinforcement Prioritizer (DeepGRP) is proposed in this paper to learn TC features from unlabeled training data for efficient TCP in Regression Testing (RT). In this method, graph neuron stimulation attributes for TCs are created to retrieve the activation graph across the DNN layers of DeepRP, with the activation graph defined by the neuron connectivity links. The proposed deep graph (DG) treats the DNN neurons as nodes and the connectivity links among them as the adjacency matrix. A message passing mechanism is then applied to aggregate the structural information of the adjacency matrix with neighbouring node features to enhance TCP. Through this mechanism, DeepGRP captures high-order dependencies among neurons and produces activation features that overcome the limits of traditional activation models, improving TCP in large-scale RT. The DG model prioritizes TCs using Learning-to-Rank (L2R), which learns node attributes from TCs. This enables better DNN testing efficiency by detecting vulnerabilities early and lowering development time, while tackling the difficulty of learning TC characteristics. Finally, the testing findings suggest that DeepGRP can improve TCP for large test suites when compared to other common algorithms.
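A simplified reading of the activation-graph idea, neurons as nodes, absolute weights as the adjacency matrix, and one mean-aggregation message-passing step, is sketched below; the network size and feature dimensions are placeholders and this is not the DeepGRP implementation.

```python
# Illustrative sketch: treating the neurons of a small dense layer as graph
# nodes, using |weights| as the adjacency matrix, and running one round of
# message passing (mean aggregation). A simplified reading, not DeepGRP itself.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden = 6, 4
W = rng.normal(size=(n_in, n_hidden))            # dense layer weights

# Build a bipartite adjacency over the n_in + n_hidden neurons
n = n_in + n_hidden
A = np.zeros((n, n))
A[:n_in, n_in:] = np.abs(W)
A[n_in:, :n_in] = np.abs(W).T

h = rng.normal(size=(n, 3))                      # per-neuron activation features

# One message-passing step: h' = D^{-1} A h  (mean of neighbour features)
deg = A.sum(axis=1, keepdims=True) + 1e-9
h_next = (A @ h) / deg
print(h_next.shape)                              # (10, 3) aggregated node features
```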


Doi: https://doi.org/10.54216/JISIoT.180225

Vol. 18 Issue. 2 PP. 361-374, (2026)

Emotion Recognition Using Deep Learning via Facial Expression

Santosh B. Dhekale , S. S. Nikam , D. K. Shedge

Human-computer interaction (HCI), artificial intelligence (AI), and HI are in high demand these days. In fields such as marketing, customer feedback analysis, security, and healthcare, facial-expression-based emotion recognition has become a pivotal tool for understanding human emotions. Facial expressions such as fear, disgust, surprise, anger, sadness, and happiness are crucial indicators of emotional states. Businesses can improve customer experiences by identifying these indicators and measuring customer satisfaction with goods or services. Human emotion detection has previously been achieved with machine learning algorithms such as support vector machines and random forests. The effectiveness of deep learning models for emotion detection has been validated by earlier studies that employed Convolutional Neural Networks (CNNs) to reliably classify emotions based on facial expressions. Likewise, recent developments in deep learning, particularly the application of CNNs, have significantly increased the accuracy of facial emotion recognition and interpretation from images and live camera streams. To process face images with CNN models for real-time emotion recognition, our research develops an emotion recognition system using Python and OpenCV. The current study describes how to monitor live video streams for facial expressions and identify which of the seven defined emotions is most likely to occur. The system reports emotional behavior in real time when needed.
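A minimal capture-detect-classify loop of the kind the abstract describes, using OpenCV's Haar cascade for face detection and a CNN for emotion classification, is sketched below; the model file name, 48x48 grayscale input, and emotion label order are assumptions, not the study's released artifacts.

```python
# Illustrative sketch: a webcam loop that detects faces with a Haar cascade
# and passes each crop to an emotion classifier. "emotion_model.h5" and the
# 48x48 grayscale input are assumptions, not the paper's released model.
import cv2
import numpy as np
from tensorflow.keras.models import load_model

EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]
face_det = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
model = load_model("emotion_model.h5")           # hypothetical trained CNN

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_det.detectMultiScale(gray, 1.3, 5):
        roi = cv2.resize(gray[y:y + h, x:x + w], (48, 48)).astype("float32") / 255.0
        probs = model.predict(roi[None, ..., None], verbose=0)[0]
        label = EMOTIONS[int(np.argmax(probs))]
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.putText(frame, label, (x, y - 8), cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 255, 0), 2)
    cv2.imshow("emotion", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```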


Doi: https://doi.org/10.54216/JISIoT.180226

Vol. 18 Issue. 2 PP. 375-385, (2026)