This study empirically examines how artificial intelligence (AI) is changing the online software development ecosystem. Data from 30 software professionals in various roles are used to examine opportunities, challenges, ethical considerations, and trends in AI-enhanced software development, as well as research methods for technological innovation. Major findings show substantial gains in development efficiency (a 39.3% decrease in development time) and code quality (53.3% fewer defects/KLOC). However, organizations also face major challenges, including a significant skill gap (severity rating 4.2/5) and high implementation costs. This study provides an evidence-based guide for organizations interested in integrating AI technologies into their software development processes, and outlines practical steps for software practitioners.
DOI: https://doi.org/10.54216/JISIoT.180101
Vol. 18 Issue. 1 PP. 01-11, (2026)
The advent of 6G wireless communication systems and the widespread proliferation of Internet of Things (IoT) devices have necessitated advanced frameworks for secure, private, and intelligent data management. This paper introduces ChainGuard 6G+, a novel privacy-preserving architecture that integrates Federated Learning (FL) with Blockchain to provide data security, integrity, and anomaly detection for IoT-enabled 6G networks. FL facilitates decentralized model training across distributed edge nodes, keeping local data on-device while sharing only model updates. This preserves user privacy, which is particularly valuable in sensitive applications such as healthcare, financial services, and industrial IoT networks. To further strengthen privacy, Differential Privacy is applied by introducing statistical noise into model updates, masking individual contributions without degrading learning accuracy. Blockchain is incorporated as an immutable ledger to securely record model parameters and training events, enabling traceability and tamper-evident model provenance. The architecture also includes role-based access control for secure data and model access, end-to-end encryption, and secure transmission protocols. Experimental results demonstrate the efficacy of the proposed system on a 6G Network Slice Security Attack Detection Dataset containing synthetic and real attacks on various network slices. Performance evaluation reveals that ChainGuard 6G+ not only ensures data privacy but also achieves excellent detection rates against DoS, DDoS, and spoofing attacks. Implemented and evaluated in Python, the proposed framework achieves an overall attack detection accuracy of 99.1%, revealing its promise as a secure, scalable solution for future secure wireless communication networks.
DOI: https://doi.org/10.54216/JISIoT.180102
Vol. 18 Issue. 1 PP. 12-33, (2026)
Due to the increasing prevalence of network attacks, maintaining network security has become significantly more challenging. An Intrusion Detection System (IDS) is a critical tool for addressing security vulnerabilities. IDSs play a vital role in monitoring network traffic and identifying malicious activities. However, two major challenges hinder IDS performance: data imbalance, which weakens the detection of minority class attacks, and overfitting in traditional classifiers such as Support Vector Machines (SVM). This study proposes a novel and transparent IDS framework that integrates several advanced techniques: Variational Autoencoder (VAE) for data augmentation, Mutual Information-based feature selection, Harris Hawks Optimization (HHO) for hyperparameter tuning of the SVM, and SHAP (SHapley Additive exPlanations) for interpretability. VAE is utilized to generate synthetic instances for minority classes, effectively addressing class imbalance. Feature selection is employed to reduce dimensionality and enhance generalization performance. The HHO algorithm is used to adaptively tune the hyperparameters of the SVM, thereby optimizing classification accuracy while mitigating overfitting. Finally, SHAP values are employed to interpret the SVM’s decisions, enhancing the transparency and trustworthiness of the system. Experimental evaluations conducted on two benchmark IDS datasets, UNSW-NB15 and NSL-KDD, demonstrate that the proposed VAE-HHO-SVM framework outperforms existing models in terms of accuracy, robustness, and interpretability. The results confirm the effectiveness of combining optimization, explainable AI, and data balancing strategies in modern IDS development. Specifically, the proposed method achieves an accuracy of 98.42% on the NSL-KDD dataset and 97.45% on the UNSW-NB15 dataset—an improvement of 3.17% over other methods.
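The mutual-information-based feature selection used above scores each feature by how much information it carries about the attack label. A minimal sketch of the scoring idea on discrete features follows (in practice one would use a library routine such as scikit-learn's `mutual_info_classif`; the toy data here are hypothetical):

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Empirical mutual information (in nats) between two discrete sequences."""
    n = len(xs)
    px, py = Counter(xs), Counter(ys)
    pxy = Counter(zip(xs, ys))
    mi = 0.0
    for (x, y), c in pxy.items():
        p_joint = c / n
        # p_joint / (p_x * p_y) == c * n / (count_x * count_y)
        mi += p_joint * math.log(p_joint * n * n / (px[x] * py[y]))
    return mi

# Toy traffic records: feature 1 tracks the label, feature 2 is noise.
labels = [0, 0, 0, 0, 1, 1, 1, 1]
feat1  = [0, 0, 0, 0, 1, 1, 1, 1]   # perfectly informative
feat2  = [0, 1, 0, 1, 0, 1, 0, 1]   # independent of the label
scores = {"feat1": mutual_information(feat1, labels),
          "feat2": mutual_information(feat2, labels)}
```

Keeping only the top-scoring features reduces dimensionality, which is the generalization benefit the abstract refers to.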
DOI: https://doi.org/10.54216/JISIoT.180103
Vol. 18 Issue. 1 PP. 34-47, (2026)
Malicious activities that seek to disrupt cloud communication pose serious cybersecurity threats. Nevertheless, none of the existing works has focused on detecting attacks on the Blade Server (BS) in the cloud. Therefore, this paper proposes an efficient Intrusion Detection System (IDS) framework for the BS in the cloud by utilizing Kerberos-based Exponential Mestre-Brainstrass Curve Cryptography (KEMBCC) and the Sechsoftwave and Sparsele-centric Gated Recurrent Unit (SSGRU). Primarily, cloud users are registered into the network, and the incoming data are encrypted. To balance the incoming loads, the BS is used, and an IDS is implemented to detect attacks on it. Initially, the data are preprocessed; then, big data are handled within the IDS. Afterward, features are extracted and the optimal features are chosen from them. Thereafter, the SSGRU classifier is used to distinguish attacked from normal BS traffic. A Sankey diagram is then generated to differentiate the attacked from the non-attacked blades in the BS. Next, the attacked blades are isolated, whereas the non-attacked blades continue to be used for load balancing in the cloud. According to the analysis results, the proposed model outperformed the other models, attaining an accuracy of 99.43%.
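The isolation-then-balancing step at the end of the pipeline can be sketched as a toy example. The per-blade labels stand in for the SSGRU classifier's output; blade names and the round-robin policy are illustrative assumptions, not details from the paper:

```python
def isolate_and_balance(blade_status, requests):
    """Drop blades flagged as attacked; round-robin requests over the rest.

    blade_status maps blade id -> 'attack' or 'normal' (a stand-in for
    the per-blade classification described in the paper).
    """
    clean = sorted(b for b, s in blade_status.items() if s == "normal")
    if not clean:
        raise RuntimeError("no healthy blades available")
    assignment = {b: [] for b in clean}
    for i, req in enumerate(requests):
        assignment[clean[i % len(clean)]].append(req)
    return assignment

status = {"blade-1": "normal", "blade-2": "attack", "blade-3": "normal"}
plan = isolate_and_balance(status, ["r1", "r2", "r3", "r4"])
```

Attacked blades never receive traffic, while the remaining blades keep serving the cloud workload.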
DOI: https://doi.org/10.54216/JISIoT.180104
Vol. 18 Issue. 1 PP. 48-63, (2026)
The reliable estimation of evaporation is essential for proper water resource planning, particularly under climatic variability. This work applies advanced deep learning methods, namely Long Short-Term Memory (LSTM), Bidirectional LSTM (BiLSTM), and Gated Recurrent Unit (GRU), optimized with the Gray Wolf Optimization (GWO) algorithm, to predict monthly evaporation values over Almaty, Kazakhstan. The models were tuned for best performance by adjusting key hyperparameters such as the number of hidden units, dropout rates, and learning rates. Among the candidate models, LSTM-GWO achieved the smallest MSE (0.6162) and the highest R-squared (0.9335), indicating strong agreement with observed values. Performance measures such as RMSE, MAE, and MAPE further confirmed the stronger generalization of LSTM-GWO compared to BiLSTM and GRU. Forecasts for 2023 showed persistent seasonal patterns, with maximum evaporation during the summer months. The results highlight the potential of optimized deep learning algorithms to improve the precision of hydrological forecasting, particularly for semi-arid areas.
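The GWO search used for hyperparameter tuning can be sketched in miniature. This is a simplified variant (the three leaders are kept fixed within an iteration as a form of elitism; canonical GWO updates every wolf), and the sphere function stands in for the validation MSE of a trained model:

```python
import random

def gwo_minimize(f, dim, bounds, wolves=12, iters=60, seed=1):
    """Minimal Gray Wolf Optimization: the pack follows the three best
    wolves (alpha, beta, delta) while the exploration factor a decays 2 -> 0."""
    rng = random.Random(seed)
    lo, hi = bounds
    pack = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(wolves)]
    for t in range(iters):
        pack.sort(key=f)
        alpha, beta, delta = pack[0], pack[1], pack[2]
        a = 2.0 * (1 - t / iters)
        for i in range(3, wolves):
            new = []
            for d in range(dim):
                pulls = []
                for leader in (alpha, beta, delta):
                    r1, r2 = rng.random(), rng.random()
                    A, C = a * (2 * r1 - 1), 2 * r2
                    pulls.append(leader[d] - A * abs(C * leader[d] - pack[i][d]))
                new.append(min(hi, max(lo, sum(pulls) / 3)))
            pack[i] = new
    pack.sort(key=f)
    return pack[0], f(pack[0])

# Toy stand-in for "validation MSE as a function of hyperparameters".
sphere = lambda x: sum(v * v for v in x)
best, best_val = gwo_minimize(sphere, dim=3, bounds=(-5.0, 5.0))
```

In the paper's setting, each wolf position would encode hidden units, dropout rate, and learning rate, and `f` would train and validate the corresponding LSTM.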
DOI: https://doi.org/10.54216/JISIoT.180105
Vol. 18 Issue. 1 PP. 64-79, (2026)
As people increasingly rely on computers to store sensitive information and interact with various technologies, the need for low-cost, effective security measures has become more critical than ever. One such method is keystroke dynamics, which analyzes a person’s typing rhythm on digital devices. This behavioral biometric approach enhances the security and reliability of user authentication systems and contributes to improved cybersecurity. This study aims to reduce authentication risks by encouraging the adoption of keystroke-based verification methods. The research uses a fixed-text password dataset (.tie5Roanl), collected from 51 users who typed the password over eight sessions conducted on alternating days, capturing variations in mood and typing behavior. Seven models were developed, each following a structured seven-phase process. The first phase involved loading the CMU Keystroke Dynamics Benchmark dataset. The second focused on data preprocessing. In the third phase, new keystroke features were engineered from the original dataset. The fourth phase involved feature selection across various types: unigraph (Hold), digraph (Down-Down, Down-Up, Up-Down, Up-Up), trigraph (Hold-Tri), and their combinations. Training and testing were conducted in the fifth and sixth phases using a Support Vector Machine (SVM) classifier, leveraging keystroke patterns for behavioral biometric identification. The final phase focused on evaluating the models. Each model was tested under two scenarios: one where only the first user is treated as the authorized user, and another where the first three users are considered authorized. Each scenario was further divided into two cases based on preprocessing conditions. The models were assessed using multiple performance metrics, including Accuracy, F1-Score, Recall, Precision, ROC-AUC, and Equal Error Rate (EER). 
The highest achieved results were Accuracy of 99.35%, F1-Score of 94.2%, Recall of 91.8%, Precision of 98.8%, ROC-AUC of 99.56%, and a minimum EER of 0.02. These outcomes demonstrate the effectiveness of the proposed approach in enhancing authentication reliability using keystroke dynamics.
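The unigraph (Hold) and digraph (Down-Down) features described in the fourth phase can be derived from raw key events as follows; the timings are hypothetical, and the feature naming simply mirrors the CMU dataset convention:

```python
def keystroke_features(events):
    """Derive unigraph hold times and digraph down-down latencies from
    (key, press_time, release_time) tuples, times in seconds."""
    holds, dd = {}, {}
    for i, (key, down, up) in enumerate(events):
        holds[f"H.{key}"] = up - down              # Hold: press-to-release
        if i + 1 < len(events):
            nxt_key, nxt_down, _ = events[i + 1]
            dd[f"DD.{key}.{nxt_key}"] = nxt_down - down  # Down-Down latency
    return {**holds, **dd}

# Hypothetical timings for typing "tie".
events = [("t", 0.00, 0.10), ("i", 0.25, 0.33), ("e", 0.50, 0.61)]
feats = keystroke_features(events)
```

The remaining digraph variants (Down-Up, Up-Down, Up-Up) follow the same pattern with different pairs of timestamps.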
DOI: https://doi.org/10.54216/JISIoT.180106
Vol. 18 Issue. 1 PP. 80-102, (2026)
Currently, images are a highly common form of communication, whether through teleconferencing, mobile communication, or social media. Detecting counterfeit images is essential because images used for communication must be genuine and original. Image fabrication is a serious problem because it is difficult to distinguish a tampered image from a real one, and advanced image editing software raises myriad technological, ethical, and legal implications. Traditional approaches to detecting image counterfeiting rely mostly on handcrafted features. The problem with many image tampering detection methods now in use is that they are confined to identifying particular types of alteration by searching for specific features in the images. Image tampering is now increasingly detected through deep learning techniques. These methods have proved promising and worthwhile, performing better than traditional ones because they can extract complex features from images. In this paper, we provide a thorough review of deep learning-based methods for detecting image splicing, along with the pertinent results of our survey in the form of findings and analysis.
DOI: https://doi.org/10.54216/JISIoT.180107
Vol. 18 Issue. 1 PP. 103-113, (2026)
Deep learning architectures face fundamental challenges in balancing performance optimization, computational scalability, and operational interpretability. Current strategies exhibit an essential fragmentation: neural architecture search (NAS) techniques operate independently of interpretability requirements, while scalability solutions remain detached from architecture optimization pipelines. This disconnect hinders the development of a unified workflow from architecture design to interpretable deployment. We propose DeepOptiFrame, a TensorFlow/Keras-based Python framework that combines three core capabilities: (1) advanced optimization algorithms (BOHB, Hyperband) with resource-constrained multi-objective search, (2) distributed training acceleration across GPU clusters via Horovod integration and mixed-precision strategies, and (3) GPU-accelerated interpretability tools (SHAP, LIME) integrated directly into the training pipeline. Our framework demonstrates substantial experimental improvements: a 15-20% accuracy increase on the CIFAR-100 and ImageNet benchmarks compared to state-of-the-art baselines, a 65% training speedup when scaled to eight GPUs with near-linear efficiency, and a 30% improvement in interpretability reliability, as measured by the Mean Confidence Decrease metric. The implementation also reduces memory consumption by 40% through gradient checkpointing while maintaining numerical stability. These advances establish a new paradigm for coherent deep learning development, simultaneously improving performance, scalability, and transparency within a unified workflow environment.
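The successive-halving principle at the core of Hyperband (and BOHB) can be sketched independently of any framework: evaluate many configurations on a small budget, keep the best fraction, and repeat with a larger budget. The objective below is a toy stand-in for validation loss, not DeepOptiFrame's actual search code:

```python
import random

def successive_halving(configs, evaluate, min_budget=1, eta=3):
    """Keep the best 1/eta configs at each rung, multiplying the budget by eta.

    evaluate(config, budget) returns a loss; budget stands in for
    the number of training epochs.
    """
    budget = min_budget
    while len(configs) > 1:
        scored = sorted(configs, key=lambda c: evaluate(c, budget))
        configs = scored[: max(1, len(configs) // eta)]
        budget *= eta
    return configs[0]

# Toy objective: loss depends on the learning rate, shrinking with budget.
def evaluate(cfg, budget):
    return abs(cfg["lr"] - 0.1) + 1.0 / budget

rng = random.Random(0)
pool = [{"lr": rng.uniform(1e-4, 1.0)} for _ in range(27)]
best = successive_halving(pool, evaluate)
```

Hyperband runs several such brackets with different trade-offs between the number of configurations and the starting budget; BOHB additionally replaces the random sampling with a model-based proposal.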
DOI: https://doi.org/10.54216/JISIoT.180108
Vol. 18 Issue. 1 PP. 114-125, (2026)
The rise of IoT in smart healthcare systems necessitates secure and efficient methods to protect sensitive medical imaging data transmitted across interconnected devices. This research introduces a novel IoT-enabled reversible watermarking technique using Principal Component Analysis (PCA) and Hash-Based Signatures (HBS) to ensure both data integrity and diagnostic quality. The method supports secure embedding of watermarks into medical images captured and transmitted by IoT devices such as wearable scanners, remote diagnostic units, and edge sensors. By leveraging PCA for minimal distortion and reversible embedding, and HBS for robust tamper detection, the system ensures full restoration of original images post-verification. Discrete Wavelet Transform (DWT) further optimizes the compression and transformation for real-time IoT environments. The proposed approach demonstrates high imperceptibility (high PSNR), robust tamper detection (using SHA-256 and SHA-512), and full reversibility, making it ideal for real-time transmission of medical data over IoT-based healthcare networks.
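The tamper-detection side of the scheme hinges on a keyed hash over the image data. The sketch below uses HMAC-SHA-256 as a simplified stand-in for the paper's hash-based signature construction; the key and the toy "image" are placeholders:

```python
import hashlib
import hmac

def sign(image_bytes, key):
    """Keyed SHA-256 digest of the pixel data (HMAC stands in for the
    paper's hash-based signature scheme)."""
    return hmac.new(key, image_bytes, hashlib.sha256).hexdigest()

def verify(image_bytes, key, signature):
    return hmac.compare_digest(sign(image_bytes, key), signature)

key = b"shared-secret"            # placeholder key, not from the paper
original = bytes(range(16)) * 4   # toy 8x8 grayscale "image"
tag = sign(original, key)

tampered = bytearray(original)
tampered[0] ^= 0xFF               # flip one pixel in transit
ok_original = verify(original, key, tag)
ok_tampered = verify(bytes(tampered), key, tag)
```

A single flipped pixel changes the digest, so verification fails; in the reversible scheme, a successful check additionally allows the embedded watermark to be removed and the exact original image restored.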
DOI: https://doi.org/10.54216/JISIoT.180109
Vol. 18 Issue. 1 PP. 126-139, (2026)
Due to the huge number of devices connected to Internet of Things (IoT) networks, these networks have become the backbone of the organizations that use them, owing to the wide range of services they provide to companies. In recent years, the number of attacks targeting IoT networks to shut them down or violate data privacy has increased, so system developers must build strong protection systems to keep these networks secure. Intrusion detection systems (IDS) and intrusion prevention systems (IPS) are among the most promising protection systems for securing these networks, but they suffer from several challenges, including high rates of false positive alarms (FPA) and false negative alarms (FNA), in addition to the difficulty of handling the long temporal sequences of incoming and outgoing traffic in IoT networks. This paper presents a distributed intrusion detection system (DIDS) based on deep learning algorithms, specifically an enhanced long short-term memory (LSTM) algorithm combined with the gated recurrent unit (GRU) algorithm, evaluated on a modern dataset collected from real network data called CICIoT2023. To adjust the threshold and achieve a balanced approach to anomaly detection, a hybrid model combining an enhanced Density Peak Clustering (DPC) algorithm with ROC curve analysis was used. The main innovation of the proposed work is the combination of top-k feature selection with a hybrid LSTM-GRU architecture optimized for imbalanced datasets using focal loss, SMOTE, and dynamic class weighting, yielding a strong and effective intrusion detection pipeline. To evaluate the system, standard performance metrics such as AUC-ROC, accuracy, F1-score, and recall were used; the proposed system proved to be a powerful solution for preventing complex attacks targeting IoT networks and for detecting rare and modern attacks.
The proposed model achieved promising results, with accuracy reaching 96.0%, a false negative rate (FNR) of 0.049%, and a false positive rate (FPR) of 0.014%.
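The focal loss used above for class imbalance can be sketched for the binary case; the defaults gamma=2 and alpha=0.25 are the standard values from the focal loss literature, not necessarily the paper's settings:

```python
import math

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Binary focal loss: down-weights well-classified examples by (1-p_t)^gamma."""
    p_t = p if y == 1 else 1.0 - p
    a_t = alpha if y == 1 else 1.0 - alpha
    return -a_t * (1.0 - p_t) ** gamma * math.log(p_t)

def cross_entropy(p, y):
    p_t = p if y == 1 else 1.0 - p
    return -math.log(p_t)

easy = focal_loss(0.95, 1)   # confident and correct: heavily down-weighted
hard = focal_loss(0.10, 1)   # confident and wrong: nearly full weight kept
```

Because easy majority-class examples contribute almost nothing to the gradient, training effort concentrates on the rare attack classes, which is exactly the imbalance behavior the abstract describes.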
DOI: https://doi.org/10.54216/JISIoT.180110
Vol. 18 Issue. 1 PP. 140-149, (2026)