Journal of Intelligent Systems and Internet of Things

Journal DOI

https://doi.org/10.54216/JISIoT

ISSN (Online): 2690-6791, ISSN (Print): 2769-786X

Quantifying the Impact of AI Integration in Software Development: An Empirical Analysis of Efficiency, Ethics, and Organizational Readiness

Sonia Ayachi Ghannouchi , Zaman Fahad Badday

This study empirically examines how artificial intelligence (AI) is changing the software development ecosystem. Data from 30 software professionals in various roles are used to examine opportunities, challenges, ethical considerations, and trends in AI-enhanced software development, as well as technological innovation and research methods. Major findings show substantial gains in the efficiency of development processes (a 39.3% decrease in development time) and in code quality (53.3% fewer defects/KLOC). However, organizations also face major challenges, including a significant skill gap to bridge (severity rating 4.2/5) and high implementation costs. This study provides a fact-based guide for organizations interested in integrating AI technologies into their software development processes, and outlines practical recommendations for software practitioners.

Doi: https://doi.org/10.54216/JISIoT.180101

Vol. 18 Issue. 1 PP. 01-11, (2026)

ChainGuard 6G+: A Secure and Private Architecture for Wireless Communication Using Federated Learning and Blockchain in IoT Networks

Saleh Ali Alomari

The advent of 6G wireless communication systems and the widespread proliferation of Internet of Things devices have necessitated advanced frameworks for secure, private, and intelligent data management. This paper introduces ChainGuard 6G+, a novel privacy-preserving architecture that integrates Federated Learning (FL) with Blockchain to provide data security, integrity, and anomaly detection for IoT-enabled 6G networks. FL facilitates decentralized model training across distributed edge nodes, keeping local data on-device while only model updates are shared. This ensures user privacy, which is particularly valuable in sensitive applications such as healthcare, financial services, and industrial IoT networks. To further strengthen privacy, Differential Privacy is applied by introducing statistical noise into model updates, masking individual contributions without degrading learning accuracy. Blockchain is incorporated as an immutable ledger to record model parameters and training securely, enabling traceability and tamper-evident model provenance. The architecture also includes role-based access control for secure data and model access, end-to-end encryption, and secure transmission protocols. Experimental results on a 6G Network Slice Security Attack Detection Dataset, with synthetic and real attacks on various network slices, demonstrate the efficacy of the system. Performance evaluation reveals that ChainGuard 6G+ not only ensures data privacy but also achieves excellent detection rates against DoS, DDoS, and spoofing attacks. The proposed framework, implemented and evaluated in Python, attains an overall attack detection accuracy of 99.1%, revealing its promise as a secure, scalable solution for future secure wireless communication networks.
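As a rough illustration of the differential-privacy step described above, the sketch below clips a client's model update and adds Gaussian noise before server-side averaging; the clipping norm, noise multiplier, and FedAvg aggregation are generic assumptions, not details taken from the paper.

```python
import numpy as np

def privatize_update(update, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """Clip one client's update and add Gaussian noise before sharing it.

    This masks individual contributions (the differential-privacy idea in the
    abstract); the parameter values here are illustrative, not the paper's.
    """
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))        # bound sensitivity
    noise = rng.normal(0.0, noise_multiplier * clip_norm, update.shape)
    return clipped + noise

def federated_average(client_updates):
    """Server-side FedAvg over already-privatized client updates."""
    return np.mean(np.stack(client_updates), axis=0)

# Example: three edge nodes share noised updates; the server only sees the average.
updates = [np.random.randn(10) * 0.1 for _ in range(3)]
global_delta = federated_average([privatize_update(u) for u in updates])
```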

Doi: https://doi.org/10.54216/JISIoT.180102

Vol. 18 Issue. 1 PP. 12-33, (2026)

Enhancing Intrusion Detection System Transparency Using SHAP-Driven Support Vector Machine Tuned by Harris Hawks Optimization

Noor Flayyih Hasan

Due to the increasing prevalence of network attacks, maintaining network security has become significantly more challenging. An Intrusion Detection System (IDS) is a critical tool for addressing security vulnerabilities. IDSs play a vital role in monitoring network traffic and identifying malicious activities. However, two major challenges hinder IDS performance: data imbalance, which weakens the detection of minority class attacks, and overfitting in traditional classifiers such as Support Vector Machines (SVM). This study proposes a novel and transparent IDS framework that integrates several advanced techniques: Variational Autoencoder (VAE) for data augmentation, Mutual Information-based feature selection, Harris Hawks Optimization (HHO) for hyperparameter tuning of the SVM, and SHAP (SHapley Additive exPlanations) for interpretability. VAE is utilized to generate synthetic instances for minority classes, effectively addressing class imbalance. Feature selection is employed to reduce dimensionality and enhance generalization performance. The HHO algorithm is used to adaptively tune the hyperparameters of the SVM, thereby optimizing classification accuracy while mitigating overfitting. Finally, SHAP values are employed to interpret the SVM’s decisions, enhancing the transparency and trustworthiness of the system. Experimental evaluations conducted on two benchmark IDS datasets, UNSW-NB15 and NSL-KDD, demonstrate that the proposed VAE-HHO-SVM framework outperforms existing models in terms of accuracy, robustness, and interpretability. The results confirm the effectiveness of combining optimization, explainable AI, and data balancing strategies in modern IDS development. Specifically, the proposed method achieves an accuracy of 98.42% on the NSL-KDD dataset and 97.45% on the UNSW-NB15 dataset—an improvement of 3.17% over other methods.
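For readers unfamiliar with the interpretability step, the sketch below shows how SHAP's model-agnostic KernelExplainer can attribute an SVM's predictions to individual features; the synthetic dataset and the fixed C and gamma values stand in for the paper's benchmark data and HHO-tuned hyperparameters.

```python
import shap
from sklearn.svm import SVC
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# Toy stand-in data for an IDS feature matrix.
X, y = make_classification(n_samples=300, n_features=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# In the paper, C and gamma would come from Harris Hawks Optimization;
# fixed placeholder values are used here instead.
svm = SVC(C=10.0, gamma=0.1, probability=True).fit(X_tr, y_tr)

# Model-agnostic SHAP explanation of the SVM's class probabilities.
explainer = shap.KernelExplainer(svm.predict_proba, shap.sample(X_tr, 50))
shap_values = explainer.shap_values(X_te[:5])   # per-feature attributions
```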

Doi: https://doi.org/10.54216/JISIoT.180103

Vol. 18 Issue. 1 PP. 34-47, (2026)

Blade Server Attack Detection and Mitigation Framework in Cloud Computing Using SSGRU and GGSSO

Waleed Kh. Hussein , Ghaith J. Mohammed , Ahmed Salih Al-Obaidi , Massila Kamalrudin , Mustafa Musa

Malicious activities that seek to disrupt cloud communication constitute serious cybersecurity threats. However, existing works have not focused on detecting attacks that occur on the Blade Server (BS) in the cloud. Therefore, this paper proposes an efficient Intrusion Detection System (IDS) framework for BS in the cloud by utilizing Kerberos-based Exponential Mestre-Brainstrass Curve Cryptography (KEMBCC) and the Sechsoftwave and Sparsele-centric Gated Recurrent Unit (SSGRU). First, the cloud users are registered in the network and the incoming data are encrypted. To balance the incoming loads, a BS is used, and an IDS is implemented to detect attacks on the BS. The data are first preprocessed, and the resulting big data are handled within the IDS. Features are then extracted and optimal features are selected from them. Thereafter, the SSGRU classifier is used to distinguish attacked from normal BS traffic. A Sankey diagram is then generated to differentiate the attacked and non-attacked blades in the BS. Finally, the attacked blades are isolated, whereas the non-attacked blades continue to be used for load balancing in the cloud. According to the analysis results, this model performed better than the other models, attaining an accuracy of 99.43%.

Doi: https://doi.org/10.54216/JISIoT.180104

Vol. 18 Issue. 1 PP. 48-63, (2026)

Optimized Deep Learning Models for Forecasting Evaporation in Almaty Using Gray Wolf Optimization

Ruaa Azzah Suhail , Osama Salim Hameed , El-Sayed M. El-Kenawy , Marwa M. Eid

The reliable estimation of evaporation is essential for proper water resource planning, particularly under climatic variability. This work applies advanced deep learning methods—namely Long Short-Term Memory (LSTM), Bidirectional LSTM (BiLSTM), and Gated Recurrent Unit (GRU)—optimized by the Gray Wolf Optimization (GWO) algorithm to predict monthly evaporation values for Almaty, Kazakhstan. The models were tuned for best performance through the adjustment of key hyperparameters such as the number of hidden units, dropout rates, and learning rates. Among the candidate models, LSTM-GWO achieved the smallest MSE (0.6162) and the highest R-squared (0.9335), indicating strong agreement with actual values. Performance measures such as RMSE, MAE, and MAPE confirmed the better generalization of LSTM-GWO compared to BiLSTM and GRU. Forecasts for 2023 showed persistent seasonal patterns, with evaporation peaking in the summer months. The results demonstrate the potential of tuned deep learning algorithms to improve the precision of hydrological forecasting, particularly for semi-arid areas.
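The hyperparameter search the abstract describes amounts to minimizing a validation error over (hidden units, dropout, learning rate); the sketch below shows that fitness function for a Keras LSTM on synthetic data. The GWO loop itself is omitted, and the values shown are placeholders rather than the study's settings.

```python
import numpy as np
import tensorflow as tf

def build_lstm(units, dropout, lr, n_steps=12, n_features=1):
    """One candidate LSTM model defined by the three tuned hyperparameters."""
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(n_steps, n_features)),
        tf.keras.layers.LSTM(int(units)),
        tf.keras.layers.Dropout(dropout),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(lr), loss="mse")
    return model

def fitness(candidate, X_tr, y_tr, X_val, y_val):
    """Validation MSE for one candidate; in the paper this is what GWO minimizes."""
    units, dropout, lr = candidate
    model = build_lstm(units, dropout, lr)
    model.fit(X_tr, y_tr, epochs=5, verbose=0)
    return model.evaluate(X_val, y_val, verbose=0)

# Synthetic monthly series shaped as (samples, timesteps, features).
X = np.random.rand(200, 12, 1); y = np.random.rand(200, 1)
score = fitness((64, 0.2, 1e-3), X[:150], y[:150], X[150:], y[150:])
```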

Doi: https://doi.org/10.54216/JISIoT.180105

Vol. 18 Issue. 1 PP. 64-79, (2026)

Keystroke Dynamics System for User Authentication Using SVM Classifier

Rasha Khalid Ibrahim , Mays M. Hoobi

As people increasingly rely on computers to store sensitive information and interact with various technologies, the need for low-cost, effective security measures has become more critical than ever. One such method is keystroke dynamics, which analyzes a person’s typing rhythm on digital devices. This behavioral biometric approach enhances the security and reliability of user authentication systems and contributes to improved cybersecurity. This study aims to reduce authentication risks by encouraging the adoption of keystroke-based verification methods. The research uses a fixed-text password dataset (.tie5Roanl), collected from 51 users who typed the password over eight sessions conducted on alternating days, capturing variations in mood and typing behavior. Seven models were developed, each following a structured seven-phase process. The first phase involved loading the CMU Keystroke Dynamics Benchmark dataset. The second focused on data preprocessing. In the third phase, new keystroke features were engineered from the original dataset. The fourth phase involved feature selection across various types: unigraph (Hold), digraph (Down-Down, Down-Up, Up-Down, Up-Up), trigraph (Hold-Tri), and their combinations. Training and testing were conducted in the fifth and sixth phases using a Support Vector Machine (SVM) classifier, leveraging keystroke patterns for behavioral biometric identification. The final phase focused on evaluating the models. Each model was tested under two scenarios: one where only the first user is treated as the authorized user, and another where the first three users are considered authorized. Each scenario was further divided into two cases based on preprocessing conditions. The models were assessed using multiple performance metrics, including Accuracy, F1-Score, Recall, Precision, ROC-AUC, and Equal Error Rate (EER). The highest achieved results were Accuracy of 99.35%, F1-Score of 94.2%, Recall of 91.8%, Precision of 98.8%, ROC-AUC of 99.56%, and a minimum EER of 0.02. These outcomes demonstrate the effectiveness of the proposed approach in enhancing authentication reliability using keystroke dynamics.
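To make the feature vocabulary concrete, the sketch below derives unigraph hold times and digraph down-down latencies from raw press/release timestamps and fits an SVM; the timestamps and labels are invented, whereas the study uses the CMU Keystroke Dynamics Benchmark features directly.

```python
import numpy as np
from sklearn.svm import SVC

def keystroke_features(press, release):
    """press/release: timestamp arrays (seconds) for one typed password."""
    hold = release - press                      # unigraph: key hold duration
    down_down = press[1:] - press[:-1]          # digraph: latency between key presses
    return np.concatenate([hold, down_down])

# Two fake typing samples of a 5-key password (authorized user vs. impostor).
p1 = np.array([0.00, 0.21, 0.43, 0.66, 0.90]); r1 = p1 + 0.08
p2 = np.array([0.00, 0.35, 0.80, 1.10, 1.55]); r2 = p2 + 0.15
X = np.vstack([keystroke_features(p1, r1), keystroke_features(p2, r2)])
y = np.array([1, 0])                            # 1 = authorized user, 0 = other

clf = SVC(kernel="rbf", gamma="scale").fit(X, y)
```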

Doi: https://doi.org/10.54216/JISIoT.180106

Vol. 18 Issue. 1 PP. 80-102, (2026)

Deep Learning Techniques For Image Splicing Detection: A Systematic Review

Mohammed S. Khazaal , Mohamed Elleuch , Monji kherallah , Faiza Charfi

Images are currently a highly common form of communication, whether through teleconferencing, mobile communication, or social media. Identifying counterfeit images is essential because it is crucial that the images used for communication be genuine and original. Images are easily fabricated, and it is challenging to distinguish a tampered image from the real one. This has notable technological, ethical, and legal implications, given the availability of advanced image editing software. Traditional approaches to detecting image forgery rely mostly on handcrafted features. The problem with many of the image tampering detection methods now in use is that they are confined to identifying particular types of alteration by looking for specific features in the images. Image tampering is currently recognized through deep learning techniques. These methods have proved promising and worthwhile, as they perform better than traditional ones by extracting complex features from images. In this paper, we provide a thorough review of deep learning-based methods for detecting image splicing, along with the findings and analysis of our survey.

Doi: https://doi.org/10.54216/JISIoT.180107

Vol. 18 Issue. 1 PP. 103-113, (2026)

Optimizing Neural Network Architectures with TensorFlow and Keras for Scalable Deep Learning

Muna Al-Saadi , Bushra Al-Saadi , Dheyauldeen Ahmed Farhan , Oday Ali Hassen

Deep learning architectures face fundamental challenges in balancing performance optimization, computational scalability, and operational interpretability. Current approaches exhibit an essential fragmentation: neural architecture search (NAS) techniques operate independently of interpretability requirements, while scalability solutions remain detached from architecture optimization pipelines. This disconnect hinders the development of a unified workflow from architecture design to interpretable deployment. We propose DeepOptiFrame, a TensorFlow/Keras-based Python framework that combines three core capabilities: (1) advanced optimization algorithms (BOHB, Hyperband) with resource-constrained multi-objective search, (2) distributed training acceleration across GPU clusters via Horovod integration and mixed-precision strategies, and (3) GPU-accelerated interpretability tools (SHAP, LIME) integrated directly into the training pipeline. Our framework demonstrates substantial experimental improvements: a 15-20% accuracy increase on the CIFAR-100 and ImageNet benchmarks compared to state-of-the-art baselines, a 65% training speedup when scaled to eight GPUs with near-linear efficiency, and a 30% improvement in interpretability reliability, as measured by the Mean Confidence Decrease metric. The implementation also reduces memory consumption by 40% through gradient checkpointing while maintaining numerical stability. These advances establish a new paradigm for coherent deep learning development, simultaneously improving performance, scalability, and transparency within a unified workflow environment.
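Two of the ingredients named above, Hyperband search and mixed-precision training, are available through the public TensorFlow/Keras and KerasTuner APIs; the sketch below shows them in minimal form. It is not the DeepOptiFrame framework itself, and the model shape and search bounds are illustrative.

```python
import tensorflow as tf
import keras_tuner as kt

# Mixed-precision training: compute in float16, keep variables in float32.
tf.keras.mixed_precision.set_global_policy("mixed_float16")

def build_model(hp):
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(32, 32, 3)),
        tf.keras.layers.Conv2D(hp.Int("filters", 32, 128, step=32), 3, activation="relu"),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(100, activation="softmax", dtype="float32"),  # keep outputs in fp32
    ])
    model.compile(
        optimizer=tf.keras.optimizers.Adam(hp.Float("lr", 1e-4, 1e-2, sampling="log")),
        loss="sparse_categorical_crossentropy",
        metrics=["accuracy"],
    )
    return model

# Hyperband tuner over the filter count and learning rate.
tuner = kt.Hyperband(build_model, objective="val_accuracy", max_epochs=9, factor=3)
# tuner.search(x_train, y_train, validation_split=0.1)  # data loading omitted here
```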

Doi: https://doi.org/10.54216/JISIoT.180108

Vol. 18 Issue. 1 PP. 114-125, (2026)

IoT-Enabled Reversible Watermarking of Medical Images Using PCA and Hash-Based Signatures for Secure Smart Healthcare

Pradeep Kumar Tripathi , Manoj Varshney , Aditi Sharma

The rise of IoT in smart healthcare systems necessitates secure and efficient methods to protect sensitive medical imaging data transmitted across interconnected devices. This research introduces a novel IoT-enabled reversible watermarking technique using Principal Component Analysis (PCA) and Hash-Based Signatures (HBS) to ensure both data integrity and diagnostic quality. The method supports secure embedding of watermarks into medical images captured and transmitted by IoT devices such as wearable scanners, remote diagnostic units, and edge sensors. By leveraging PCA for minimal distortion and reversible embedding, and HBS for robust tamper detection, the system ensures full restoration of original images post-verification. Discrete Wavelet Transform (DWT) further optimizes the compression and transformation for real-time IoT environments. The proposed approach demonstrates high imperceptibility (high PSNR), robust tamper detection (using SHA-256 and SHA-512), and full reversibility, making it ideal for real-time transmission of medical data over IoT-based healthcare networks.
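A minimal sketch of the hash-based tamper check is shown below: the digest of the received image is compared with the digest computed at the sender. The PCA/DWT embedding itself is omitted, and the random array stands in for an actual medical scan.

```python
import hashlib
import numpy as np

def image_signature(pixels: np.ndarray, algo: str = "sha256") -> str:
    """Digest of the raw pixel buffer (SHA-256 or SHA-512, as named in the paper)."""
    return hashlib.new(algo, pixels.tobytes()).hexdigest()

original = np.random.randint(0, 256, (128, 128), dtype=np.uint8)   # stand-in for a scan
sig_at_sender = image_signature(original)

received = original.copy()
received[0, 0] ^= 1                         # single-pixel tamper
tampered = image_signature(received) != sig_at_sender
print("tamper detected:", tampered)         # True
```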

Doi: https://doi.org/10.54216/JISIoT.180109

Vol. 18 Issue. 1 PP. 126-139, (2026)

A Distributed Intrusion Detection Using Long Short-Term Memory-Gradient Repeating Unit and Enhanced Density Peak Clustering for Real-Time Cyber Threat Detection

Wisam Ali Hussein Salman

Due to the huge number of devices connected to Internet of Things (IoT) networks, these networks have become the main nerve of the organizations that use them, owing to the large range of services they provide to companies. In recent years, the number of attacks targeting IoT networks to shut them down or violate data privacy has increased, so system developers must build strong protection systems to keep those networks secure. Intrusion detection systems (IDS) and intrusion prevention systems (IPS) are among the most promising protection systems for securing these networks, but they suffer from several challenges, including high false positive alarms (FPA) and false negative alarms (FNA), in addition to the difficulty of handling the long temporal sequences of incoming and outgoing traffic in IoT networks. This paper presents a distributed intrusion detection system (DIDS) based on deep learning algorithms, specifically an enhanced long short-term memory (LSTM) algorithm combined with the gradient repeating unit (GRU) algorithm, together with a modern dataset collected from real network data called CICIOT2023. To adjust the threshold and achieve a balanced approach to anomaly detection, a hybrid model of the enhanced Density Peak Clustering (DPC) algorithm with ROC curve analysis was used. The main innovation of the proposed work is the combination of top-k feature selection with a hybrid LSTM-GRU architecture optimized for imbalanced datasets using focal loss, SMOTE, and dynamic class weighting, yielding a strong and effective intrusion detection pipeline. To evaluate the system, standard performance metrics such as AUC-ROC, accuracy, F1-score, and recall were used. The proposed system proved to be a powerful solution for preventing complex attacks targeting IoT networks, as well as detecting rare and modern attacks. The proposed model achieved promising results, with accuracy reaching 96.0%, a false negative rate (FNR) of 0.049%, and a false positive rate (FPR) of 0.014%.
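As a simplified view of the classifier and imbalance handling described above, the sketch below oversamples the minority class with SMOTE and stacks LSTM and GRU layers in Keras; the layer sizes, synthetic data, and the omission of focal loss and top-k feature selection are deliberate simplifications, not the paper's configuration.

```python
import numpy as np
import tensorflow as tf
from imblearn.over_sampling import SMOTE

# Toy imbalanced data: 450 benign flows, 50 attack flows, 20 features each.
X = np.random.rand(500, 20)
y = np.r_[np.zeros(450), np.ones(50)].astype(int)

X_res, y_res = SMOTE(random_state=0).fit_resample(X, y)   # balance the classes
X_seq = X_res.reshape(-1, 20, 1)                          # (samples, steps, features)

# Hybrid recurrent classifier: LSTM feeding a GRU, then a sigmoid output.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(20, 1)),
    tf.keras.layers.LSTM(32, return_sequences=True),
    tf.keras.layers.GRU(16),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC()])
model.fit(X_seq, y_res, epochs=3, batch_size=64, verbose=0)
```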

Doi: https://doi.org/10.54216/JISIoT.180110

Vol. 18 Issue. 1 PP. 140-149, (2026)

Arabic Fake News Detection Techniques: A Review

Maysoon Ahmed Abbas , Dhafar Hamed Abd , Mondher Frikha , Adel M. Alimi

As their popularity has grown, people increasingly rely on websites and social media platforms for news and updates. Even official media outlets use social media networks to publish news. However, due to the massive volume of user-generated material, verifying the veracity of the presented information is necessary. To handle the large volume of posts being made, this procedure should be implemented automatically and effectively. Fake news detection (FND) estimates the likelihood that a certain news story (news report, editorial, exposé, and the like) is purposefully misleading. Over the past ten years, interest in Arabic FND has increased, and several detection techniques have shown promise in identifying fake news across various datasets. This paper provides an overview of the definition of fake news, its consequences, detection strategies, and the datasets used for detecting Arabic fake news. The design of Arabic FND systems is mainly based on two approaches. The first uses machine learning (ML) methods that rely on manually produced statistical features extracted from the text and used to distinguish between real and fake news. In the second, "end-to-end" detection systems are created using deep learning (DL) approaches. The investigation conducted in this paper may help researchers understand the advantages and uses of Arabic FND systems and develop more efficient algorithms in this field.
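To illustrate the first, feature-based strategy, the sketch below builds a TF-IDF plus logistic regression pipeline on two invented Arabic headlines; real Arabic FND work would train on one of the datasets surveyed in the paper.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy examples only: labels and texts are invented for illustration.
texts = ["خبر عاجل وموثوق من وكالة رسمية", "خبر مضلل يفتقر إلى أي مصدر"]
labels = [0, 1]                      # 0 = real, 1 = fake

# Character n-gram TF-IDF features handle Arabic morphology reasonably well.
clf = make_pipeline(TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
                    LogisticRegression())
clf.fit(texts, labels)
print(clf.predict(["خبر من مصدر مجهول"]))
```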

Doi: https://doi.org/10.54216/JISIoT.180111

Vol. 18 Issue. 1 PP. 150-168, (2026)

Trustworthy and Interpretable AI in IoT-Based Medical Systems: A Review and Framework for CoT-XAI Integration

Faisal Binsar , Sasmoko

The use of Artificial Intelligence (AI) in medical diagnosis has rapidly evolved with the adoption of large language models and explainability techniques. This study investigates the intersection of Chain-of-Thought (CoT) reasoning and Explainable AI (XAI) in the development of trustworthy diagnostic systems, particularly within Internet of Things (IoT)-enabled healthcare environments. A systematic review of 106 Scopus-indexed publications (2016–2025) was conducted, supported by topic modeling (LDA) and keyword co-occurrence network analysis to identify dominant research themes and gaps. Findings reveal that while CoT and XAI are actively studied, their integration within real-time, distributed, and resource-constrained medical systems remains limited. Most research emphasizes either performance or interpretability in isolation, with minimal efforts to embed step-wise reasoning into deployable clinical AI pipelines. Moreover, few studies address how CoT can function effectively in edge computing or federated learning scenarios common to IoT infrastructures. To address this gap, we propose a multi-layered conceptual framework that integrates CoT reasoning, machine learning predictors, XAI methods, and IoT deployment models. This framework reflects the shift toward user-centric, transparent, and adaptive AI solutions in smart healthcare. It provides a structured path from multimodal data ingestion to clinically interpretable and real-time decision support. This study contributes a novel perspective on reasoning-driven explainability and offers design guidance for future development of interpretable, scalable, and deployable AI systems in medical applications.
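As a pointer to how the review's topic-modeling step can be reproduced, the sketch below runs LDA over a handful of toy abstracts with scikit-learn; the three texts and the topic count are placeholders for the 106-paper Scopus corpus and the settings used in the study.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Placeholder abstracts standing in for the Scopus-indexed corpus.
abstracts = [
    "chain of thought reasoning for clinical diagnosis support",
    "explainable ai with shap for medical imaging models",
    "federated learning on iot edge devices in healthcare",
]
counts = CountVectorizer(stop_words="english").fit_transform(abstracts)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)
doc_topics = lda.transform(counts)     # per-document topic mixture
```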

Doi: https://doi.org/10.54216/JISIoT.180112

Vol. 18 Issue. 1 PP. 169-184, (2026)

Urban Planning Based Sustainable Public Healthcare System using Machine Learning Algorithms

V. Rajathi , Pritee Parwekar , V. Anantha Lakshmi , M. Syed Rabiya , M. Banu Priya , V. Devi

The growing use of a wide range of Internet of Medical Things (IoMT) devices and apps makes smart health an increasingly vulnerable area. IoMT is a popular means of creating smart city solutions that benefit vital infrastructures over time, such as smart healthcare. Because Bluetooth technology is flexible and uses few resources, it is used for short-range communication by many IoMT devices in smart cities. This research proposes a novel technique for urban planning in a smart public healthcare system utilizing machine learning (ML) algorithms. The smart healthcare system is developed based on a secure honeynet cloud IoT model. The input smart-healthcare health monitoring data are collected and processed for missing value removal and noise removal. The data are then classified and optimized using a recurrent Bi-LSTM temporal Gaussian model with whale swarm particle colony optimization. Experimental analysis is carried out in terms of detection accuracy, precision, data integrity, throughput, recall, and latency. The proposed technique obtained 96% detection accuracy, 97% precision, 95% throughput, 88% recall, and 94% latency.

Doi: https://doi.org/10.54216/JISIoT.180113

Vol. 18 Issue. 1 PP. 185-193, (2026)

An Adaptive Mutation-Aware Test Case Ordering Framework Using Deep Learning and Quantum-Behaved Multi-Objective PSO

S. Sowmyadevi , Anna Alphy

In regression testing, rapidly identifying defects is crucial for maintaining software quality amid frequent code changes. Traditional test case ordering methods, despite extensive research, often overlook the subtle but important relationship between test executions and mutations introduced during code modifications. This paper presents an adaptive mutation-aware test case ordering framework that integrates predictive modeling with swarm-based multi-objective optimization to address this gap. The approach begins by transforming test cases into enriched feature vectors, incorporating mutation coverage, historical performance, execution cost, and statement-level weighting. A supervised deep learning model is employed to predict the likelihood of each test case uncovering seeded defects. These predictions are subsequently fed into a Quantum-Behaved Particle Swarm Optimization (QPSO) engine, which generates an optimal execution sequence by jointly optimizing fault detection, execution cost, reuse potential, and coverage diversity. The proposed framework is demonstrated using a simple Java program and rigorously validated on real-world projects from the Defects4J benchmark. Experimental results consistently show improvements in APFD, mutation scores, and execution efficiency, confirming the feasibility and scalability of the proposed system.
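The APFD metric used in the evaluation has a standard closed form, APFD = 1 - (TF_1 + ... + TF_m)/(n*m) + 1/(2n), where n is the number of tests, m the number of faults, and TF_i the position of the first test revealing fault i; the helper below computes it from an ordering and a fault-to-revealing-tests map, with invented example data.

```python
def apfd(ordering, fault_matrix):
    """ordering: list of test ids in execution order.
    fault_matrix: dict mapping fault id -> set of test ids that reveal it."""
    n, m = len(ordering), len(fault_matrix)
    position = {t: i + 1 for i, t in enumerate(ordering)}            # 1-based ranks
    first_reveal = [min(position[t] for t in tests) for tests in fault_matrix.values()]
    return 1 - sum(first_reveal) / (n * m) + 1 / (2 * n)

# Invented example: fault f1 is caught by t3, fault f2 by t1 or t4.
faults = {"f1": {"t3"}, "f2": {"t1", "t4"}}
print(apfd(["t1", "t2", "t3", "t4"], faults))   # 0.625
```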

Doi: https://doi.org/10.54216/JISIoT.180114

Vol. 18 Issue. 1 PP. 194-206, (2026)

Satellite Imaging Based Risk Management in Cloud IoT Network Using Machine Learning Techniques

Jyotsnarani Tripathy , T. Krishna Murthy , S. Manjula , Sukanya Ledalla , Alla Rajendra , P. Lakshmi Harika , K Boopathy

The consistent improvement of remote sensing (RS) technology has resulted in easy access to a large volume of satellite imagery. Effective and scalable solutions are needed to widen the application of RS to different fields and to make it work efficiently in practical situations. This research proposes a novel technique for satellite image gathering and cloud IoT network risk management using a machine learning model. Here, the cloud IoT network is used for satellite image collection, and network security analysis is carried out using a secure trust-based cryptographic blockchain model. The collected images are then classified using a convolutional Bayes fuzzy Markov perceptron basis function model. Experimental analysis is carried out in terms of accuracy, QoS, recall, latency, and scalability. The proposed model attained an accuracy of 97%, QoS of 94%, latency of 96%, scalability of 95%, and recall of 93%. These results can assist decision-makers, planners, and remote sensing scientists in selecting an appropriate image classification system for tracking a dynamic, fragmented, and varied landscape.

Doi: https://doi.org/10.54216/JISIoT.180115

Vol. 18 Issue. 1 PP. 207-217, (2026)

Diverse Geographical Region Analysis Based on Deforestation Rate Using Remote Sensing Image and Machine Learning Techniques

Abhilash S. Nath , Manu Gupta , J. Sirisha Devi , A Babisha , D. Venkata Ravi Kumar , B. Rama Subba Reddy

With direct implications for regional climate, biogeochemistry, hydrology, and biodiversity, land cover change has been identified as one of the top priorities for the development of sustainable management plans. Among the primary causes of global warming are deforestation and forest fragmentation, which have profound effects on biodiversity preservation and ecosystem functioning. Machine learning techniques, such as those employed in computer vision, have become widely used, making it possible to segment satellite images semantically to distinguish between areas that are forested and those that are not. This study presents a novel method for segmenting and classifying UAV images to detect deforestation using machine learning models. Noise reduction and normalisation are applied to the input, which consists of UAV-based forest region photos. A semantic U-convolutional regressive neural network combined with a deep radial quantile temporal neural network is then used to segment and classify the images. The proposed model's simulation analysis is assessed on several metrics, including F1-score, normalized coefficient ratio, average precision, AUC, and detection accuracy. The proposed method yielded 97% detection accuracy, a 93% normalized coefficient ratio, 91% AUC, an F1-score of 94%, and 95% average precision.

Doi: https://doi.org/10.54216/JISIoT.180116

Vol. 18 Issue. 1 PP. 218-226, (2026)

Edge Cloud IoT Model Based Marine Life Analysis Using Machine Learning Algorithms

Gagan Kumar Koduru , S. Kalaimagal , M. Srilakshmi Preethi , G. L. Narasamba Vanguri , Shivanadhuni Spandana , M. Syed Rabiya , M. Rajesh

The amount of marine data is such that it is pointless, and at times infeasible, to attempt training deep learning models on personal workstations. In this work, we present the advantages of cloud-based distributed learning for training deep learning (DL) models and managing big data. Moreover, large volumes of marine big data are classically transferred over wired networks, which are costly to maintain, if deployable at all. This research proposes a novel technique for marine life analysis based on remote sensing images using an edge cloud IoT model and machine learning algorithms. Here, the edge cloud IoT model is used to collect remote sensing images for marine life analysis. The remote sensing images are processed for noise removal and normalization, and then features are extracted and classified utilizing a principal Gaussian convolutional fuzzy encoder with a Bayesian reinforcement Markov algorithm. Experimental analysis is carried out in terms of classification accuracy, average precision, recall, F1-score, and AUC for various marine life datasets. The proposed technique obtained 97% classification accuracy, 95% average precision, 93% recall, 88% AUC, and a 94% F1-score.

Doi: https://doi.org/10.54216/JISIoT.180117

Vol. 18 Issue. 1 PP. 227-237, (2026)

Cloud IoT with Remote Sensing Data Segmentation and Classification Using Deep Learning Model for Sustainable Agriculture

T. Shanmugapriya , RM. Rani , Gaddam Ravindra Babu , T. Srinivasulu , S. Saranya , S. Gopinath , M. Rajesh

The United Nations Sustainable Development Goals (SDGs) focus on enhancing agricultural production, which has the potential to be transformational at both the local and the global level. Available Internet of Things (IoT)-based agriculture management technologies encourage sustainable production of more food by farmers, which contributes significantly to the achievement of these SDGs. The aim of this research is to propose a novel technique for sustainable agriculture field analysis based on a cloud IoT model with remote sensing and a deep learning model. Here, the cloud IoT model is used for remote sensing data analysis of agricultural fields. The images are segmented using a watershed K-means temporal neural network (WKMTNN), and classification is carried out using a deep quantile regressive Boltzmann machine (DQRBM). Experimental analysis is carried out in terms of random accuracy, average precision, sensitivity, and specificity for various agriculture field datasets. The proposed model attained an average precision of 96%, sensitivity of 93%, random accuracy of 98%, and specificity of 95%. These results highlight the superiority of the moisture estimation framework over its regression-based counterparts.

Doi: https://doi.org/10.54216/JISIoT.180118

Vol. 18 Issue. 1 PP. 238-249, (2026)

Climate Change Prediction in Urban Environment Using UAV Imaging Based on Cloud IoT and Deep Learning Techniques

M. Prema Kumar , P. Chinnasamy , B. Bala Abirami , Juvvala Sailaja , S. Bhuvana , Sai Krishna Vunnam

Advancements in Unmanned Aerial Vehicles (UAVs), popularly known as drones, offer unprecedented opportunities to improve various applications of the extensive Internet of Things (IoT). In this framework, Deep Learning (DL) techniques are considered a practical alternative for improving the real-time obstacle detection and avoidance performance of fully autonomous UAVs. This research proposes a novel technique for climate change detection in urban environments utilizing UAV images based on cloud IoT with a deep learning model. Here, the UAV images are collected through a cloud IoT module and prepared as a dataset. The UAV images are processed for filtering and contour reduction by normalization. Features of the processed images are then extracted utilizing a graph cut fuzzy convolutional ResNet attention neural network with a moth firefly sparrow colony optimization model. The simulation results are analyzed for various UAV datasets in terms of training accuracy, average precision, recall, QoS, and scalability. The proposed technique achieved an average precision of 97%, QoS of 92%, scalability of 96%, training accuracy of 98%, and recall of 95%.

Doi: https://doi.org/10.54216/JISIoT.180119

Vol. 18 Issue. 1 PP. 250-259, (2026)

Design and Construction of the Word Embedding Model for Automated Bug Detection Using Deep Learning Techniques

Khasimbee Shaik , K. V. Satyanarayana , Tirimula Rao Benala

Software quality assurance teams can increase productivity and efficiency by expediting the issue-fixing process through automatic localization of bug files. Although source code and bug reports provide valuable semantic information, current bug localization techniques typically underuse it. Numerous deep learning and word embedding models have been developed over time, and the effectiveness of those methods is determined by the word embedding model used to represent bug reports and the deep learning model used for categorization. The aim of this research is to construct a word embedding model for automated bug detection using deep learning techniques. The input data are collected as software-design-based monitored data and processed. The data are then analyzed using a Bi-LSTM voting vector word embedding model, and feature classification is carried out using a convolutional naïve Bayes attention perceptron neural network in the bug detection model. The experimental analysis is carried out in terms of training accuracy, precision, mean square error, F1-score, and recall. Furthermore, cross-training datasets from the same and distinct domains are used to gauge how effective the suggested approach is. For datasets in the same domain, the suggested system obtains a high accuracy rate; for datasets in separate domains, it achieves a poorer accuracy rate.
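As a minimal picture of the kind of pipeline the abstract outlines, the sketch below embeds tokenized bug-report text, runs it through a bidirectional LSTM, and classifies it as buggy or clean; the vocabulary size, sequence length, and random data are placeholders, and the paper's voting and attention components are omitted.

```python
import numpy as np
import tensorflow as tf

vocab, seq_len = 5000, 100

# Embedding + bidirectional LSTM over token ids, binary buggy/clean output.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(seq_len,)),
    tf.keras.layers.Embedding(vocab, 64),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

X = np.random.randint(0, vocab, size=(64, seq_len))   # toy token ids
y = np.random.randint(0, 2, size=(64,))               # toy buggy/clean labels
model.fit(X, y, epochs=1, verbose=0)
```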

Doi: https://doi.org/10.54216/JISIoT.180120

Vol. 18 Issue. 1 PP. 260-273, (2026)

Feature Weight-Based Optimization in Software Development Model Using Meta Heuristic Machine-Learning Algorithms

N. Durga Devi , Tirimula Rao Benala

System users are increasingly interested in checks of software correctness and efficiency prior to usage. Programmers in the twenty-first century are therefore making a conscious effort to create software that is more accurate, more efficient, and less prone to bugs. A software development model utilizing metaheuristic machine learning algorithms uses metaheuristic optimization techniques to enhance various aspects of the software development lifecycle, such as optimizing machine learning models, hyperparameters, and even software architecture. This research proposes a novel technique for feature-weight-based optimization in software development utilizing metaheuristic machine learning methods. Here, feature weighting and feature selection are carried out for the software model using a support additive regression Laplacian score perceptron neural network. The software model parameters are then optimized using an ant binary swarm component encoder optimization method. Simulation analysis is carried out in terms of training accuracy, mean absolute residual (MAR), mean balanced relative error (MBRE), and F-measure.

Doi: https://doi.org/10.54216/JISIoT.180121

Vol. 18 Issue. 1 PP. 274-287, (2026)

Machine Learning Model Based Urban Temperature Analysis with Fuzzy Reinforcement Neural Network

L. Pallavi , Gattu Shravani , J. Sirisha Devi , Bandaru Satya Lakshmi , M. Pushpalatha , S. Gopinath , M. Rajesh

Temperature increases in metropolitan areas are referred to as the urban heat island (UHI) effect. In recent decades, urbanization and the dramatic increase in city populations have exacerbated the impact of UHI. Uneven development and growth of a metropolis lead to an uneven rate of temperature growth in the corresponding area. This work proposes a new machine learning approach based on temperature pattern analysis to determine the rate of deforestation, representing the diversity of geographical regions. The proposed model collects temperature-pattern-based deforestation data, which are processed for noise removal and normalization. Features are then extracted and classified utilizing a kernel principal fuzzy reinforcement neural network with a variational Gaussian encoder Markov model. Experimental analysis is carried out in terms of random accuracy, mean precision, AUC, normalized coefficient, and F1-score. The proposed method achieved a mean precision of 94%, normalized coefficient of 97%, AUC of 95%, random accuracy of 98%, and F1-score of 93%. The most important land use categories causing LST increases were determined by analyzing the landscape composition at the class level.

Doi: https://doi.org/10.54216/JISIoT.180122

Vol. 18 Issue. 1 PP. 288-297, (2026)

Distributed Ledger Technology-Enhanced 6G Wireless Communication: Overcoming Trust, Privacy, And Scalability Challenges

R. Sivasankari , S. Amsavalli , Kamarunnisha H. , Vetripriya M. , Tamilselvi S.

The transition from 5G to 6G wireless communication systems introduces new challenges, including scalability, privacy, and security. Distributed Ledger Technology (DLT), with its decentralized and secure framework, offers a promising solution to address these issues in a 6G context. In a 6G environment, DLT can facilitate decentralized management, secure authentication, and trusted data exchanges. By leveraging DLT's distributed ledger system, it can support device identity verification, spectrum allocation, and secure data sharing across nodes, creating a trustworthy communication ecosystem. DLT and 6G integration enables efficient spectrum management, where smart contracts automate resource allocation, reducing bottlenecks and improving resource efficiency. Moreover, the decentralized nature of DLT enhances privacy and security by providing an authentication mechanism that works without a central authority. This is crucial, as 6G will involve a vast number of connected devices. This research aims to explore the role of DLT in improving the security and scalability of 6G networks, investigate spectrum management techniques, and evaluate decentralized device authentication and trust mechanisms. Additionally, challenges such as latency, scalability, and DLT integration in 6G are examined. DLT's decentralized nature aids network security and robustness, mitigating vulnerabilities by distributing control across nodes. It also streamlines resource allocation and device authentication, improving privacy. DLT enables users to manage access rights through decentralized mechanisms, fostering trust and compliance with privacy regulations. However, issues such as latency due to transaction validation and the need for advanced techniques like sharding must be addressed to optimize DLT for 6G applications.

Doi: https://doi.org/10.54216/JISIoT.180123

Vol. 18 Issue. 1 PP. 298-308, (2026)