Journal of Intelligent Systems and Internet of Things

Journal DOI

https://doi.org/10.54216/JISIoT


ISSN (Online): 2690-6791 · ISSN (Print): 2769-786X

Resource Management Considering Environmental Conditions in Educational Institutions Based on IoT

M. E. ElAlami , M. M. Ghoniem , Asmaa E. El-Maghraby

One of the most significant issues facing most countries today is resource conservation. Water is the most vital component for all life, so protecting it is crucial. Optimal water use maintains its sustainability and leads to energy savings. Educational institutions are among the largest consumers of water because of the large numbers of students and employees they host. This research addresses resource management in educational institutions, taking environmental conditions into account, based on the Internet of Things (IoT). The results illustrate that the designed moisture-content monitoring system can enhance water sustainability by maintaining optimal water content, and that the proposed system controls the water level with high efficiency. The maximum error between the monitoring system reading and the actual reading was 2% for moisture content and 2.44% for water level. The results also showed the sensor's high sensitivity to rainfall and the system's ability to save water that exceeds the needs of the soil.
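The 2% and 2.44% figures quoted above are relative errors between the monitoring system's reading and a reference measurement. As a minimal sketch (the readings below are hypothetical, not from the paper), that computation is:

```python
def percent_error(measured, actual):
    """Relative error (%) between a sensor reading and a reference value."""
    return abs(measured - actual) / abs(actual) * 100.0

# Hypothetical moisture readings: (monitoring system, reference instrument)
readings = [(29.4, 30.0), (24.5, 25.0)]
worst_case = max(percent_error(m, a) for m, a in readings)  # 2.0
```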


Doi: https://doi.org/10.54216/JISIoT.160201

Vol. 16 Issue. 2 PP. 01-12, (2025)

An Effective IoT based Vein Recognition Using Convolutional Neural Networks and Soft Computing Techniques for Dorsal Vein Pattern Analysis

Krishna Bhimaavarapu , Bylapudi Rama Devi , Chandra Bhushan Mahato , Lakshmi Chandrakanth Kasireddy , M. Vadivukarassi , P. Sivaraman

In this research, we present a CNN-based system that can reliably identify the dorsal veins of the hand. To obtain better results across datasets of different image quality, the proposed model uses fine-tuned variants of the pre-trained VGG Net-16 and VGG Net-19 architectures. We use the BOSPHORUS dataset, which provides medium-quality images, in addition to two self-constructed datasets containing good- and low-quality images. By using state-of-the-art image augmentation methods, streamlined pre-processing procedures, and carefully designed CNN architectures, the fine-tuned VGG Net-16 model achieves superior performance compared to all other models. Using ROI images with a resolution of 224×224 pixels, a multi-class technique is employed to organize the vein patterns. Improving data quality during training makes the approach more generalizable, which helps prevent overfitting. On every dataset, the proposed method outperforms standard ML models such as K-NN and SVM, and the experimental outcomes demonstrate significant improvements in accuracy. The fine-tuning process led to a considerable decrease in the equal error rate (EER) compared to benchmark methods. The framework improves computational efficiency through GPU-accelerated learning and was built with Python libraries such as OpenCV, Keras, and TensorFlow. Results from extensive testing of the proposed method show an accuracy of 99.98%, a precision of 98.98%, and a recall of 98.8%. The technique is both adaptable and dependable, making it well suited for practical biometric vein recognition applications.
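The equal error rate (EER) mentioned above is the operating point at which the false acceptance rate (FAR) equals the false rejection rate (FRR). A small, self-contained sketch of how EER is typically estimated (the match scores below are made up; the paper's scoring pipeline is not reproduced here):

```python
def equal_error_rate(genuine, impostor):
    """Approximate the EER by sweeping the decision threshold over all scores.

    Higher scores mean a stronger match. FAR is the fraction of impostor
    scores accepted; FRR is the fraction of genuine scores rejected.
    """
    best_gap, eer = 2.0, None
    for t in sorted(set(genuine) | set(impostor)):
        far = sum(s >= t for s in impostor) / len(impostor)
        frr = sum(s < t for s in genuine) / len(genuine)
        if abs(far - frr) < best_gap:
            best_gap, eer = abs(far - frr), (far + frr) / 2
    return eer
```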


Doi: https://doi.org/10.54216/JISIoT.160203

Vol. 16 Issue. 2 PP. 26-41, (2025)

Transforming Public Health with AI and Big Data: Deep Learning for COVID-19 Detection in Medical Imaging

Md Jabed Hussain , Awakash Mishra

The COVID-19 pandemic has presented unprecedented challenges for public health systems worldwide. Rapid and accurate virus detection is essential for successful treatment and containment. This paper explores the transformative potential of Artificial Intelligence (AI) and Big Data in public health, focusing on applying deep learning techniques for COVID-19 detection in medical imaging. We discuss the integration of AI-driven solutions in healthcare, the role of big data in enhancing diagnostic accuracy, and the implications for future public health strategies. The COVID-19 pandemic started in December 2019 and has caused millions of deaths worldwide. With the continual emergence of new variants of this infection, early detection remains essential. Although the PCR test is the most accurate and widely used approach for identification, non-contact procedures such as chest radiography and CT scans have also been recommended. In this context, artificial intelligence is integral to the early and precise diagnosis of COVID-19 via lung image processing. The primary aim of this study is to evaluate and compare multiple improved deep learning strategies for detecting COVID-19 in CT and X-ray medical images. We employed four strong CNN methods for the binary classification of COVID-19 images: ResNet152, VGG16, ResNet50, and DenseNet121. The proposed attention-based ResNet framework is designed to choose the appropriate architecture and training settings for models automatically. In the diagnosis of COVID-19 using CT-scan images, the accuracy and F1-score are over 96 percent. In addition, transfer-learning methods were used to address the lack of data and shorten the training time. An enhanced VGG16 deep transfer learning design was used to accomplish multi-class classification of X-ray imaging tasks. The enhanced VGG16 achieved 99 percent accuracy in classifying X-ray images into three classes: Normal, COVID-19, and Pneumonia. The algorithms' accuracy and validity were tested on well-known public datasets of X-ray and CT scans. For COVID-19 diagnosis, the presented approaches outperform previous methods in the literature. We believe our research will aid virologists and radiologists in making better and faster diagnoses in the fight against COVID-19.


Doi: https://doi.org/10.54216/JISIoT.160204

Vol. 16 Issue. 2 PP. 42-59, (2025)

A Memory Efficient Adversarial Attention Tree-Structured Deep Learning Model for Classification

Nirmala Veluswamy , Jayanthi Boopathy

The representational and learning power of tree-based deep learning (DL) classification models makes them a popular choice for dimensional sentiment analysis (DSA). One variant, the Tree-structured Convolutional neural network with Long short-term memory (TCL), stands out for its ability to handle uncertainties and unexpected changes in input data while still producing promising Valence-Arousal (VA) predictions for text or image classes. However, the high memory complexity of this model becomes a challenge when dealing with large image/text datasets. To address this issue, this manuscript introduces a Lightweight Adversarial Attention TCL (LAATCL) model for DSA. The proposed model adds a clustering layer to the adversarial attention TCL to decrease memory complexity and enhance performance through reliable sample selection. The model comprises multi-convolution with a clustering layer that utilizes Group-Sparse Non-negative Matrix Factorization (GSNMF) for clustering highly correlated samples. By learning informative and discriminative latent variables across labels, GSNMF helps identify and select the samples closest to each cluster centroid for input to the LSTM network, resulting in reduced memory complexity and improved accuracy. The LAATCL model outperformed traditional models in experiments conducted on the SST and CIFAR-10 datasets, with accuracies of 93.57% and 95.25%, respectively, demonstrating its usefulness.
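The reliable-sample-selection step, choosing the samples nearest a cluster centroid before they are fed to the LSTM, can be sketched as below (plain Euclidean distance on a single cluster; the GSNMF factorization itself is omitted):

```python
import math

def nearest_to_centroid(samples, k):
    """Return the k samples closest (Euclidean) to the cluster centroid."""
    dim = len(samples[0])
    centroid = [sum(s[i] for s in samples) / len(samples) for i in range(dim)]

    def dist(s):
        return math.sqrt(sum((s[i] - centroid[i]) ** 2 for i in range(dim)))

    return sorted(samples, key=dist)[:k]
```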


Doi: https://doi.org/10.54216/JISIoT.160205

Vol. 16 Issue. 2 PP. 60-67, (2025)

Leveraging Quantum Neural Networks with Deep Learning Based Edge Detection Model for Breast Cancer Screening using Digital Mammograms

S. Abdel-Khalek

Breast cancer (BC) is one of the most common invasive cancers and causes thousands of women's deaths globally. Prompt detection is therefore key to reducing the death rate, and screening for BC in its initial phase is of utmost importance. Manually segmenting breast lesion images is a time-consuming and expensive task for radiologists. Hence, the adoption of automatic analytic techniques becomes vital for accurately segmenting breast lesions and mitigating the associated workload. The segmentation of malignant areas is an essential procedure in the complete inspection of breast image data. To achieve the segmentation and recognition of BC, numerous computer-aided diagnosis (CAD) techniques have been presented for the investigation of mammogram imaging. CAD models are employed mainly to analyze the disorder and support the best treatment. Currently, deep learning (DL) techniques provide promising results in the early recognition of BC. In this paper, we design a model Leveraging Quantum Algorithms for Edge Detection in Mammograms to Improve Breast Cancer Screening (LQAEDM-IBCS). The main intention of LQAEDM-IBCS is to provide an accurate and effective technique for the detection and segmentation of breast cancer using advanced algorithms. Initially, the image pre-processing stage applies the adaptive bilateral filtering (ABF) method to eliminate unwanted noise in the input image data. Next, segmentation is implemented via the Otsu threshold method for edge detection. To improve segmentation performance, parameter tuning is performed through the quantum spotted hyena optimizer (QSHO) algorithm. Besides, the proposed LQAEDM-IBCS technique uses the DenseNet-121 method for the feature extraction procedure. Eventually, a quantum neural network (QNN) method is deployed for the BC classification process. The simulation validation of the LQAEDM-IBCS system is verified on a benchmark image database, and the outcomes are measured under numerous metrics. The experimental outcomes highlight the improvement of the LQAEDM-IBCS approach in the BC diagnosis process.
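Otsu's threshold, used above for segmentation, picks the gray level that maximizes the between-class variance of the image histogram. A minimal sketch on a histogram (the 8-bin histogram in the test is a toy example):

```python
def otsu_threshold(hist):
    """Otsu's method: return the histogram bin that maximizes the
    between-class variance of background vs. foreground."""
    total = sum(hist)
    sum_all = sum(i * h for i, h in enumerate(hist))
    w_bg = sum_bg = 0.0
    best_t, best_var = 0, -1.0
    for t, h in enumerate(hist):
        w_bg += h                      # background weight (bins 0..t)
        if w_bg == 0:
            continue
        w_fg = total - w_bg            # foreground weight (bins t+1..end)
        if w_fg == 0:
            break
        sum_bg += t * h
        m_bg = sum_bg / w_bg           # background mean
        m_fg = (sum_all - sum_bg) / w_fg  # foreground mean
        var_between = w_bg * w_fg * (m_bg - m_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```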


Doi: https://doi.org/10.54216/JISIoT.160206

Vol. 16 Issue. 2 PP. 68-81, (2025)

Real-Time Classroom Emotion Analysis Using Machine and Deep Learning for Enhanced Student Learning

Deepa Devasenapathy , Krishna Bhimaavarapu , Prem Kumar Sholapurapu , S. Sarupriya

This research creates an innovative EfficientNet-B7-based Facial Expression Recognition (FER) model that delivers maximum accuracy for detecting emotions. Classification performance benefits substantially from EfficientNet-B7's compound scaling technique, which balances the network's depth, width, and resolution. What distinguishes EfficientNet-B7 from standard architectures is its ability to perform accurate computations at reduced complexity. The model is evaluated on the high-resolution KDEF dataset and the low-resolution FER2013 dataset using the SGD, Adam, and RMSprop optimizers. Experimental tests confirmed that EfficientNet-B7 with the RMSprop optimizer recognizes emotions on KDEF with 91.78% accuracy, superior to ResNet152's highest recorded accuracy of 88.77%. Performance declined to 57.56% on FER2013 because low-resolution images pose a great challenge to the model. Internal Batch Normalization (IBN) is introduced to mitigate gradient problems, resulting in more stable training and better accuracy-loss behavior. The research demonstrates that FER performance benefits greatly when EfficientNet-B7 is combined with IBN for high-resolution image processing, and that EfficientNet-B7 is a reliable FER solution with potential uses in affective computing and human-computer interaction.
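Compound scaling ties depth, width, and input resolution to a single coefficient φ. Using the commonly cited base factors from the EfficientNet paper (α=1.2, β=1.1, γ=1.15, chosen so that α·β²·γ² ≈ 2), a sketch of the scaling rule is:

```python
def compound_scale(phi, alpha=1.2, beta=1.1, gamma=1.15):
    """EfficientNet-style compound scaling: scale depth, width, and input
    resolution together by one coefficient phi. Because
    alpha * beta**2 * gamma**2 is close to 2, each +1 in phi roughly
    doubles the FLOPs."""
    return {
        "depth": alpha ** phi,
        "width": beta ** phi,
        "resolution": gamma ** phi,
    }
```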


Doi: https://doi.org/10.54216/JISIoT.160207

Vol. 16 Issue. 2 PP. 82-101, (2025)

Research on the Evaluation Method of Energy Sustainable Development Indicator System Based on Genetic Algorithm and Local Support Vector Regression

Qian Chen

With the acceleration of modern urbanisation, the demand for energy by the state and society is increasing. To maintain the sustainable availability of energy, it is necessary to establish an energy sustainability indicator system. To address this issue, this paper proposes an innovative evaluation method for an energy sustainability indicator system, which aims to provide a multi-scale and more comprehensive assessment of energy sustainability indicators while ensuring the accuracy and reliability of the evaluation results. The paper uses a genetic algorithm and local support vector regression (SGA-LSVM) to optimise a projection-pursuit fuzzy clustering model (PPFCM), establishing a new evaluation method for the energy sustainability indicator system. Based on this method, energy sustainability in different regions is analyzed according to three indicators: the energy supply side, the demand side, and affordability, and the validity of the evaluation method is tested. The study found the following. By region: the eastern region leads on the energy demand side, the energy supply side, and energy affordability, while the western region has shown a rising trend in recent years. By population density: densely populated areas score much higher on all three indices than the other areas; between sparsely and moderately populated areas, the differences in the demand-side and affordability indices are not significant, while the supply-side index of sparsely populated areas is slightly higher than that of moderately populated areas. On economy and carbon emissions: owing to China's focus on environmental protection, carbon emissions have been kept within a stable range while the economy develops rapidly. With PC≥0.80, PE≥0.45, and XB≤0.1, the results show that evaluating the energy sustainable development indicator system with the fuzzy projection-pursuit clustering evaluation model optimized by the genetic algorithm and local support vector regression is reliable.
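The PC (partition coefficient) and PE (partition entropy) validity indices quoted above are computed directly from the fuzzy membership matrix; a minimal sketch follows (the XB index additionally requires the cluster centers and data, so it is omitted here):

```python
import math

def partition_coefficient(U):
    """PC for a fuzzy membership matrix U (rows = samples, cols = clusters).
    PC lies in (1/c, 1]; values near 1 indicate crisp, well-separated clusters."""
    n = len(U)
    return sum(u ** 2 for row in U for u in row) / n

def partition_entropy(U):
    """PE lies in [0, log c]; lower values indicate less fuzzy overlap."""
    n = len(U)
    return -sum(u * math.log(u) for row in U for u in row if u > 0) / n
```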


Doi: https://doi.org/10.54216/JISIoT.160208

Vol. 16 Issue. 2 PP. 102-116, (2025)

Research on Image Generation Style Transfer and Reconstruction Loss Reduction Based on Deep Learning Framework

Wei Zou , Mohd Alif Ikrami Bin Mutti

Nixi black pottery has a unique place in Chinese black pottery art. In this article, we develop a style transfer model based on deep learning that automatically transforms images of Nixi black pottery into other styles, which is of great value for the dissemination of this art. We propose a method called DualTrans that utilizes a pure Transformer architecture to enable context-aware image processing, effectively addressing the issue of a limited receptive field. Additionally, we introduce a Location Information Encoding Module (LIM) and a Style Transfer Control Module (STCM) to tackle the problem of long-range dependencies while ensuring that the generated target image remains structurally and stylistically consistent throughout the style transfer process, without being unduly influenced by the content and style images. During the mapping process, the LIM encodes the original image-block information and concatenates it with the projected image-block information. To alter the final style of the produced picture, the STCM leverages a set of learnable style-controllable factors. Extensive trials have shown that DualTrans exceeds previous approaches in terms of stability.


Doi: https://doi.org/10.54216/JISIoT.160209

Vol. 16 Issue. 2 PP. 117-122, (2025)

Comment Feedback Optimization Algorithm (CFOA): A Feedback-Driven Framework for Robust and Adaptive Optimization

El-Sayed M. El-kenawy , Amel Ali Alhussan , Doaa Sami Khafaga , Amal H. Alharbi , Sarah A. Alzakari , Abdelaziz A. Abdelhamid , Abdelhameed Ibrahim , Marwa M. Eid

The Comment Feedback Optimization Algorithm (CFOA) presents a novel feedback-driven model for solving optimization problems, incorporating ideas based on positive and negative feedback loops. Unlike other optimization algorithms, CFOA includes feedback adjustments for better tuning of the exploration-exploitation trade-off, making CFOA less sensitive to problem dimensionality and nonlinearity. Proposed features include feedback dynamics for adaptive search, parameter control via a decay function, and mechanisms for escaping local optima. CFOA's performance has been benchmarked on the CEC 2005 test cases over many evaluation runs. The results demonstrate better convergence speed, solution quality, and computational complexity compared with the Sine Cosine Algorithm (SCA), the Gravitational Search Algorithm (GSA), and the Tunicate Swarm Algorithm (TSA). The efficiency of CFOA's approach makes it a valuable tool for solving real-world optimization problems across application domains such as machine learning, engineering, and logistics.
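Parameter control by a decay function, as mentioned above, typically shrinks an exploration parameter over the run so the search shifts from exploration to exploitation. A hypothetical sketch of such a schedule (the decay constant 3.0 and the exponential form are illustrative assumptions, not CFOA's published schedule):

```python
import math

def decayed_step(iteration, max_iter, a0=2.0):
    """Illustrative decay schedule for an exploration parameter: starts at
    a0 and decays exponentially toward 0 as the iteration count grows."""
    return a0 * math.exp(-3.0 * iteration / max_iter)
```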


Doi: https://doi.org/10.54216/JISIoT.160210

Vol. 16 Issue. 2 PP. 123-141, (2025)

Greylag Goose Optimization for Diabetes Prediction: Feature Selection Meets Advanced Machine Learning

Gomaa Mohamed Ismail , El-Sayed M. El-kenawy , Shady Y. El-Mashad

Diabetes mellitus remains a global health concern, necessitating both accurate and effective diagnostic methodologies. This condition presents a significant challenge due to the high dimensionality of clinical datasets and the inherent complexity of diabetes classification. To address this problem, this study integrates feature selection and machine learning architectures to enhance diabetes prediction accuracy. A novel framework based on the Binary Greylag Goose Optimization (bGGO) algorithm is proposed to optimize feature selection, thereby improving classification performance. A comprehensive evaluation uses multiple classifiers, including Decision Trees, k-nearest Neighbors, Support Vector Machines, Random Forests, and Multilayer Perceptron (MLP). The experimental results demonstrate that bGGO significantly enhances feature selection quality, improving classification metrics, particularly for MLP, which achieves the highest classification accuracy of 95.98%. These findings underscore the efficacy of combining metaheuristic optimization with machine learning for diabetes diagnosis, offering a scalable and interpretable approach for real-world healthcare applications. The proposed methodology contributes to more precise risk estimation and the development of individualized intervention strategies, facilitating early diagnosis and effective disease management.
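Wrapper-style feature selection of this kind usually optimizes a fitness that trades classification error against the fraction of features kept, with the optimizer's continuous positions thresholded into a binary mask. A sketch under that common formulation (the weight alpha and the 0.5 threshold are illustrative assumptions, not necessarily the paper's exact objective):

```python
def fs_fitness(error_rate, n_selected, n_total, alpha=0.99):
    """Weighted feature-selection fitness: lower is better.
    Combines classification error with the ratio of selected features."""
    return alpha * error_rate + (1 - alpha) * n_selected / n_total

def binarize(position, threshold=0.5):
    """Map a continuous optimizer position vector to a binary feature mask."""
    return [1 if x > threshold else 0 for x in position]
```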


Doi: https://doi.org/10.54216/JISIoT.160202

Vol. 16 Issue. 2 PP. 13-25, (2025)

Deep Learning-based sensitive data detection with optimization-enabled secure encryption model for data privacy preservation in IoT

Mathias Agbeko , Disha Handa

The rapid expansion of the Internet of Things (IoT) has led to an exponential increase in the data generated and transmitted by connected devices. This poses significant challenges for data privacy and security, as unauthorized access to such sensitive information can have severe consequences such as identity theft or financial fraud. This research proposes a model for sensitive data detection and protection in IoT based on deep learning and optimization-enabled secure encryption. By combining deep learning-based sensitive data detection with optimization-enabled secure encryption, the model offers a comprehensive solution for preserving data privacy in IoT. The proposed model uses a novel, secure encryption algorithm to ensure the privacy of the data. An Improved Skill Optimization Algorithm (ISOA), which enhances the performance of existing optimization algorithms by incorporating the concept of Double Exponential Smoothing, is proposed for secure key generation for the data encryption. The encryption itself uses the Data Encryption Standard (DES), a block cipher that encrypts and decrypts data using a 56-bit key and 64-bit blocks. The proposed model provides a robust solution for data privacy preservation in IoT networks, which is crucial for protecting sensitive information from unauthorized access and data breaches. The proposed algorithm's performance is evaluated using metrics such as computation time, memory, and fitness value. Results indicate that the proposed ISOA-based encryption model achieved superior performance, with a memory footprint of 0.5170 MB, a computational time of 1126.47 sec, and a fitness value of 1.3630.
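The Double Exponential Smoothing component referenced above (distinct from the Data Encryption Standard, with which it shares the DES acronym) tracks both a level and a trend in a series. A minimal Holt-style sketch:

```python
def double_exponential_smoothing(series, alpha=0.5, beta=0.5):
    """Holt's double exponential smoothing: maintain a level and a trend,
    updating both at each step. Returns the smoothed series."""
    level, trend = series[0], series[1] - series[0]
    out = [series[0]]
    for x in series[1:]:
        prev_level = level
        level = alpha * x + (1 - alpha) * (level + trend)  # update level
        trend = beta * (level - prev_level) + (1 - beta) * trend  # update trend
        out.append(level)
    return out
```

On a perfectly linear series the level tracks the data exactly, which makes the method well suited to trending sequences.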


Doi: https://doi.org/10.54216/JISIoT.160211

Vol. 16 Issue. 2 PP. 142-157, (2025)

Integrating Tent Chaotic Dung Beetle Optimization with Deep Ensemble Learning for Diabetic Retinopathy Recognition on Fundus Imaging

Arwa Darwish Alzughaibi , Ashrf Althbiti , Sultan Ahmed Almalki , Mohammed Al-Jabbar , Mohammed Alshahrani

Diabetic Retinopathy (DR) is a common complication of diabetes mellitus that damages the retina and affects vision. If left undetected, it can cause blindness. Regrettably, DR is irreversible, and only treatment can maintain vision; early analysis and treatment of DR can considerably decrease the potential for visual impairment. Compared with computer-aided diagnosis (CAD) systems, the manual diagnosis of DR retinal images by ophthalmologists is effort-, cost-, and time-consuming and liable to misdiagnosis. At present, deep learning (DL) has become the standard approach, with remarkable performance in many fields, mainly in medical image classification and analysis. Convolutional neural networks (CNNs) are the DL systems most commonly deployed in medical image analysis, and they are very efficient. In this manuscript, we offer the design of a Tent Chaotic Dung Beetle Optimization with Deep Ensemble Learning for Diabetic Retinopathy Recognition (TCDBO-DELDR) approach on fundus imaging. The foremost intention of the TCDBO-DELDR technique is to automate the DR detection process on fundus images via an ensemble DL model. To eradicate noise, the TCDBO-DELDR technique initially applies median filtering (MF). The Inception v3 (IV3) model is employed as the feature extractor, and the TCDBO technique is used for the hyperparameter tuning of the IV3 model. Finally, DR detection is carried out using an ensemble of three classifiers: a Deep Feedforward Neural Network (DeepFFNN), a Convolutional FFNN (ConvFFNN), and a Convolutional bi-directional long short-term memory network (ConvBLSTM). To verify the enhanced efficiency of the TCDBO-DELDR system in the DR detection procedure, a widespread experimental study is conducted on a benchmark DR database. The results illustrate the superior efficiency of the TCDBO-DELDR technique over other recent DL approaches.
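Median filtering, the denoising step above, replaces each sample with the median of its neighborhood, suppressing impulse noise while preserving edges. A 1-D sketch (the 2-D image version slides a k×k window instead):

```python
def median_filter_1d(signal, k=3):
    """Median filter with odd window size k; windows are clipped at the
    signal boundaries."""
    r = k // 2
    out = []
    for i in range(len(signal)):
        window = signal[max(0, i - r): i + r + 1]
        out.append(sorted(window)[len(window) // 2])
    return out
```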


Doi: https://doi.org/10.54216/JISIoT.160212

Vol. 16 Issue. 2 PP. 158-173, (2025)

Blockchain with IoT Integrated Framework for Tourism Service Customization and Management

Samer Yaghmour

The Internet of Things (IoT) has extensively transformed the tourism industry, reshaping travel design, supply, and experiences. Blockchain (BC) technology signifies a paradigm shift with the potential to transform many industries, much as spreadsheets altered office efficiency. BC technology offers numerous potential advantages to the tourism industry, including enhanced transparency, security, and efficiency in areas such as payments, bookings, and identity verification, potentially leading to a more seamless and reliable travel experience. In the tourism sector, BC with IoT is especially attractive owing to the benefits it provides in improving the tourism experience, enhancing operational efficiency, and guaranteeing the security of data and transactions. Recently, numerous scholars globally have employed deep learning (DL) technology in the tourism industry to combine physical and social influences for improved travel recommendation services. This study presents a Blockchain for Tourism Service Customization and Management using Whale-Goshawk Optimization Algorithm (BCTSCM-WOA) technique. The main goal of the BCTSCM-WOA method is to improve tourism service customization. Initially, blockchain technology is applied to provide secure, transparent, and decentralized solutions for handling traveler data, payments, and service personalization. Then, data pre-processing employs min-max scaling to transform the input data into a suitable format. Feature selection is then executed using the crayfish optimization algorithm (COA) to select the most relevant features from the data. For the classification process, the proposed BCTSCM-WOA method applies a multi-dimensional attention spiking neural network (MASNN) technique. Finally, parameter tuning is performed through the whale-goshawk optimization (WGO) algorithm to refine the classification performance of the MASNN model. The experimental evaluation of the BCTSCM-WOA algorithm is examined on a benchmark dataset, and the extensive outcomes highlight the significant advantage of the BCTSCM-WOA approach in the classification process compared with existing techniques.
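The min-max scaling pre-processing step mentioned above maps each feature linearly into a fixed range, typically [0, 1]. A minimal sketch:

```python
def min_max_scale(values, new_min=0.0, new_max=1.0):
    """Min-max scaling: map values linearly into [new_min, new_max].
    Constant inputs map to new_min to avoid division by zero."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [new_min for _ in values]
    return [new_min + (v - lo) * (new_max - new_min) / (hi - lo) for v in values]
```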


Doi: https://doi.org/10.54216/JISIoT.160213

Vol. 16 Issue. 2 PP. 174-186, (2025)

SCNN-UNet: A Novel Deep Learning Approach for Pulmonary Embolism Detection in COVID-19 Patients Using Super Pixel Segmentation

Sukhider Bir , Vijay Dhir

Inventory management is crucial for optimizing consumer demand and supply chains in e-commerce companies. This is where precise inventory forecasting becomes necessary for predicting future demand patterns and stock levels. Traditional forecasting methods often struggle with e-commerce data due to seasonality, sudden changes in customer behavior, and nonlinearity. Machine learning (ML) and deep learning (DL) techniques have become powerful tools for inventory prediction because they can analyze huge amounts of high-dimensional data, letting e-commerce firms improve their resource allocation, inventory management, and customer experience in highly competitive markets. This paper examines different types of inventory forecasting models and mainly evaluates the applicability of sophisticated machine learning algorithms. While established methods such as Random Forest, ARIMA, and MLPs are commonly used, they often lack the robustness needed to handle nonlinearity in inventory data. To address these problems, we introduce a novel method that combines convolutional neural networks (CNN) and XGBoost, called CNN-XGBoost, which provides better feature extraction and regression performance than conventional prediction models. We then compared CNN-XGBoost's performance to traditional forecasting methods on a 52-week simulated dataset that mimics data growing over time, using key performance metrics such as R2, mean squared error (MSE), and mean absolute percentage error (MAPE) to assess each model's accuracy. The CNN-XGBoost model performed much better than the others, with an R2 of 0.78, meaning the proposed model explains more variation than its competitors, and the best MSE of 0.15, indicating better predictive performance. Despite its slightly higher MAPE value (0.69), suggesting some vulnerability to outlier data points, the CNN-XGBoost model demonstrated promising prospects as a robust inventory forecasting tool for e-commerce. This study demonstrates the potential of combining a convolutional neural network with gradient boosting techniques to tackle complex stock management problems, outperforming baseline methods by a large margin.
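The three reported metrics can be computed as follows (toy vectors for illustration; MAPE here is returned as a fraction, so multiply by 100 for a percentage):

```python
def regression_metrics(y_true, y_pred):
    """MSE, MAPE (fraction), and R^2 for a forecasting model."""
    n = len(y_true)
    mse = sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n
    mape = sum(abs((t - p) / t) for t, p in zip(y_true, y_pred)) / n
    mean = sum(y_true) / n
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    r2 = 1 - ss_res / ss_tot  # fraction of variance explained
    return {"mse": mse, "mape": mape, "r2": r2}
```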


Doi: https://doi.org/10.54216/JISIoT.160214

Vol. 16 Issue. 2 PP. 187-201, (2025)

Alzheimer Detection Using Deep Learning Methods

Raghad K. Mohammed , Mohammed Q. Jawad , Othman Mohammed Jasim

This study proposes a deep learning-based framework to detect and classify Alzheimer's disease (AD) in its early stages using medical imaging, specifically Magnetic Resonance Imaging (MRI). We propose a Convolutional Neural Network (CNN) based model and apply transfer learning with a pre-trained MobileNet adapted to the task domain to improve performance on binary AD classification. By minimizing computational complexity and memory costs, the model, which reaches a 99.86% accuracy rate, mitigates overfitting and is well suited for real-time, resource-efficient monitoring of AD progression. The findings suggest that the model could help clinicians diagnose AD from MRI images, with great potential as a scalable and efficient solution for early-stage diagnosis and classification of the disease. Future work will include additional pre-trained models, an increased dataset size via data augmentation, and the application of MRI segmentation to better isolate key features of Alzheimer's disease.


Doi: https://doi.org/10.54216/JISIoT.160215

Vol. 16 Issue. 2 PP. 202-213, (2025)

Enhancing Osteoporosis Detection with Fuzzy Logic Preprocessing and Pre-Trained Deep Convolutional Neural Networks

Woud Majid Abed , Murtadha M. Hamad , Azmi Tawfeq Hussein Alrawi

This study investigates combining fuzzy logic with deep learning methodologies to classify X-ray images for osteoporosis detection. Osteoporosis, characterized by compromised bone integrity and heightened fracture susceptibility, requires prompt and precise diagnosis for effective treatment. We devised a hybrid approach that combines transfer learning from Convolutional Neural Network (CNN) architectures, including MobileNetV2, AlexNet, ResNet50V2, and Xception, with fuzzy logic in the preprocessing phase to address uncertainty and imprecision in X-ray images, thereby enhancing the quality of the input to the pre-trained models. The research examined a large dataset of X-ray images and applied the proposed methodology to categorize images as osteoporotic or non-osteoporotic, attaining a remarkable accuracy of 99.68% and an area under the receiver operating characteristic (ROC) curve of 100% through the integration of fuzzy logic preprocessing with ResNet50V2. This method may substantially decrease diagnostic inaccuracies and enhance patient outcomes, motivating further research and development in applying deep learning techniques in healthcare.


Doi: https://doi.org/10.54216/JISIoT.160216

Vol. 16 Issue. 2 PP. 214-235, (2025)

A Hybrid Encryption Model with Blockchain Integration for Secure Cloud Data Storage and Retrieval

Firas Mohammed Khalaf , Ali Makki Sagheer

Data security, privacy, sensitivity, and integrity are major concerns when using cloud-based storage solutions. In this paper, we propose a hybrid encryption model integrated with blockchain technology to secure data storage in the cloud. The proposed model encrypts data with a symmetric cryptography algorithm for efficient encryption of large data, while the encryption key is exchanged only via asymmetric cryptography. The model leverages the blockchain to manage metadata and the associated encryption keys securely, ensuring that records are tamper-proof and removing the need for trusted third parties. The security, key management, and data integrity of the proposed model are better than those of traditional cloud storage and existing blockchain-based approaches. The performance evaluation suggests that the model balances security and cost efficiency, with moderate transaction speed owing to blockchain operations. Our work aims to provide a scalable, fast, reliable, decentralized solution to the challenges of cloud data security.

Read More

Doi: https://doi.org/10.54216/JISIoT.160217

Vol. 16 Issue. 2 PP. 236-245, (2025)

Efficient Plant Disease Detection Using Lightweight Deep Learning Model

Abdalrahman Fatikhan Ataalla , Karam Hatem Alkhater , Qusay Hatem Alsultan , Zaid Sami Mohsen , Munther Naif Thiyab , Mohammed Waheeb Hamad , Ahmed Jumaah Yas

Early detection of plant diseases is critical to minimizing their adverse effects on agricultural productivity. Machine vision and deep learning approaches, particularly convolutional neural networks (CNNs), have been increasingly applied to automatic plant disease identification. Although existing deep learning models achieve satisfactory classification accuracy, they often consist of millions of parameters, leading to lengthy training times, prohibitive computational costs, and deployment obstacles on resource-constrained edge devices. To overcome these constraints, we introduce a new deep learning architecture that uses adaptations of Inception layers and residual connections to aid both feature extraction and efficiency. In addition, depthwise separable convolutions are used to drastically reduce the number of trainable parameters with minimal loss of representational power. We train and evaluate the proposed model on two benchmark plant disease datasets: the PlantVillage dataset and the Rice Disease dataset. Experimental results show that our model achieves state-of-the-art classification accuracy of 99.39% on the PlantVillage dataset and 98.66% on the Rice Disease dataset. In contrast to state-of-the-art deep learning models, our method obtains higher accuracy with fewer parameters, making it better suited for real-time applications on mobile and embedded systems. By applying deep learning with optimized architectures, we demonstrate the viability of this technique in precision agriculture for faster and more accurate diagnosis of plant diseases at lower computational cost.
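The parameter saving from depthwise separable convolutions can be checked with simple arithmetic: a standard k×k convolution needs k·k·C_in·C_out weights, while the depthwise-plus-pointwise factorization needs k·k·C_in + C_in·C_out. The layer sizes below are illustrative, not taken from the paper:

```python
def conv_params(k, c_in, c_out):
    """Trainable weights in a standard k x k convolution (biases ignored)."""
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    """One k x k depthwise filter per input channel, then a 1x1 pointwise mix."""
    return k * k * c_in + c_in * c_out

# Example layer: 3x3 kernel, 128 input channels, 256 output channels
std = conv_params(3, 128, 256)                 # 294,912 weights
sep = depthwise_separable_params(3, 128, 256)  # 33,920 weights
print(std, sep, round(std / sep, 1))           # 294912 33920 8.7
```

Roughly an 8.7x reduction for this layer, which is why such models fit on mobile and embedded hardware.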

Read More

Doi: https://doi.org/10.54216/JISIoT.160218

Vol. 16 Issue. 2 PP. 246-256, (2025)

Effective Signal Transmission from Underwater to Air Utilizing Hybrid Communication Systems

Satea H. Alnajjar , Amjed Razzaq Alabbas , Mahmood J. Ahmad

Underwater optical communication (UOC) and above-surface wireless communication are rapidly growing fields, especially with the emergence of new technologies such as autonomous underwater vehicles and above-water drones. The challenge lies in the absence of a water-surface platform to transfer the signal from underwater to above the surface. This research investigates the design and implementation of a hybrid communication system that successfully transmits signals from underwater environments to above-water receivers. The study utilizes OFDM to generate the transmitted data, integrating an underwater wireless optical communication (UWOC) link at 532 nm with a line-of-sight (LOS) optical channel. After adjusting the line of sight through the angle of refraction and overcoming the challenges of underwater and above-water conditions as well as ambient lighting, promising results were obtained: 100 meters above clear water and 40 meters in hazy weather, with transmission from a depth of 10 meters. The research mitigates these challenges and enhances the effectiveness of underwater-to-air communication systems.
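The abstract does not give the OFDM parameters; as a minimal sketch under assumed values (64 subcarriers, QPSK, 16-sample cyclic prefix), the modulator maps one symbol per subcarrier through an IFFT and prepends a cyclic prefix, and over an ideal channel the receiver recovers the symbols exactly with an FFT:

```python
import numpy as np

rng = np.random.default_rng(0)
n_sub = 64    # assumed number of OFDM subcarriers
cp_len = 16   # assumed cyclic prefix length

# QPSK: two random bits per subcarrier mapped to {-1, +1} on I and Q
bits = rng.integers(0, 2, size=(n_sub, 2))
symbols = (2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)

# Modulation: IFFT across subcarriers, then prepend the cyclic prefix
time_signal = np.fft.ifft(symbols)
tx = np.concatenate([time_signal[-cp_len:], time_signal])

# Ideal channel: the receiver strips the prefix and applies the FFT
rx = np.fft.fft(tx[cp_len:])

print(np.allclose(rx, symbols))  # True: subcarrier symbols recovered exactly
```

In the actual optical link, the refraction at the air-water interface and ambient light would distort `rx`, which is what the paper's angle adjustment and channel mitigation address.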

Read More

Doi: https://doi.org/10.54216/JISIoT.160219

Vol. 16 Issue. 2 PP. 257-270, (2025)

Improving the Reliability of Wireless Sensor Network Assisted IoT Network with a Cluster-Based Chain-Tree Routing Protocol

R. Lalitha , A. V. Senthil Kumar

The primary objective of designing routing protocols for Wireless Sensor Networks (WSNs) is to extend the network lifetime by optimizing the use of the limited battery energy of the sensor nodes. To improve energy conservation and network longevity in WSNs, this study proposes a Cluster-based Chain-Tree Routing Protocol (CCTRP) that integrates tree-based chain and cluster routing methods. CCTRP adopts a sector-based vertical network-partitioning scheme that divides the network into sectors and then vertically partitions the nodes to form clusters of various sizes. A Minimum Spanning Tree (MST) is then created using Kruskal's algorithm, with a Chain Leader (CL) node serving as the receiver, and a chain is formed from the CLs of the last-level clusters to the Base Station (BS) in each sector. Using the distance to the BS, the remaining energy, and the distance to the next CL node, CCTRP determines the CL or chain-CL node in each cluster, and it also selects the shortest paths for data transport. When a node's remaining energy is nearly exhausted, the protocol executes a leader transition, which significantly improves the average network lifespan. Finally, the simulation results show that the CCTRP protocol outperforms current protocols in terms of network performance.
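The MST step named above follows the standard Kruskal procedure: sort all candidate links by cost and add each link that joins two previously separate components, tracked with a union-find structure. The toy four-node layout and weights below are illustrative, not the paper's topology:

```python
def kruskal_mst(n, edges):
    """Kruskal's algorithm with union-find: scan edges in order of
    increasing weight, keeping each edge that merges two components."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    mst, total = [], 0
    for w, u, v in sorted(edges):          # sort by weight
        ru, rv = find(u), find(v)
        if ru != rv:                       # edge joins two components
            parent[ru] = rv
            mst.append((u, v, w))
            total += w
    return mst, total

# Toy sensor layout: 4 nodes, edge weights standing in for link costs
edges = [(1, 0, 1), (4, 0, 2), (2, 1, 2), (5, 1, 3), (3, 2, 3)]
mst, total = kruskal_mst(4, edges)
print(total)  # 6  (tree uses edges 0-1, 1-2, 2-3)
```

In CCTRP the edge weights would reflect inter-node distances within a cluster, and the tree root would be the elected CL node.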

Read More

Doi: https://doi.org/10.54216/JISIoT.160220

Vol. 16 Issue. 2 PP. 271-285, (2025)

Sensor-Based Spatio-Temporal Human Activity Recognition: A Systematic Review of Advancements, Challenges, and Future Directions

Asmaa Badran , Ahmad Salah , A. A. Soliman , Dina A. Elmanakhly , Ahmed Fathalla

Spatio-temporal human activity recognition (HAR) is an emerging field that uses spatial and temporal data to identify and classify human activities accurately. It has been effectively applied in areas such as healthcare for monitoring daily activities, detecting anomalies, and aiding rehabilitation with real-time feedback. However, there is a gap in research specifically focused on integrating spatio-temporal data with advanced machine and deep learning techniques for sensor-based HAR. Existing reviews do not comprehensively cover spatio-temporal HAR based on sensor data, resulting in a lack of summaries of recent models, datasets, sensor technologies, applications, and machine/deep learning techniques used in this field. This systematic review provides a comprehensive overview of sensor-based spatio-temporal HAR, tracing its development from the origin of the field to the present, and highlights the main challenges in spatio-temporal HAR. The review also examines model trends over the years, including the distribution of models used in HAR and the identification of those frequently combined to form hybrid models. Additionally, it analyzes accuracy trends on the commonly used datasets and identifies the datasets that are widely used in spatio-temporal HAR research. Furthermore, various application domains and sensor technologies used in spatio-temporal HAR are identified.

Read More

Doi: https://doi.org/10.54216/JISIoT.160221

Vol. 16 Issue. 2 PP. 286-307, (2025)

AI and Machine Learning for Breast Cancer Diagnosis Using Histopathology and Clinical Decision Systems

Swati R. Nitnaware , Bindu Madhavi Tummala , Naga Siva Jyothi Kompalli , Lakshmi Ramani Burra , Nelli Sreevidya , Gunavardini V.

The diagnosis of breast cancer depends on histopathology for precise and trusted discrimination between malignant and benign cells, but the analysis demands significant time and creates room for human error. This study establishes deep learning techniques for computer-aided diagnosis (CAD) to enhance classification performance. The proposed methods utilize one-hot encoding with VGG-16 for feature extraction, achieving 98% accuracy on the BreakHis data, while a Deep Belief Network (DBN) used for feature learning reaches 98% accuracy on BreakHis and 96% on the Kaggle dataset. A semi-supervised GAN (SSGAN) handles unannotated images effectively, with up to 89% accuracy. These applications show that deep learning can enhance breast cancer identification while decreasing the workload on medical pathologists. One-hot encoding remains computationally efficient, yet the DBN extraction method produces superior features. The SSGAN model increases labeling accuracy by using both the available labeled and unlabeled data to lower annotation expenses. Deep learning technologies, especially semi-supervised GAN systems, demonstrate their ability to transform breast cancer histopathological diagnosis through precise and efficient examination methods.
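The abstract pairs one-hot encoding with VGG-16 without detailing its role; the standard use is to turn class labels into binary indicator vectors for the classifier head. The class names below are placeholders for illustration, not the datasets' label sets:

```python
def one_hot(labels, classes):
    """Map each label to a binary indicator vector over the class list."""
    index = {c: i for i, c in enumerate(classes)}
    return [[1 if index[lab] == i else 0 for i in range(len(classes))]
            for lab in labels]

classes = ["benign", "malignant"]
encoded = one_hot(["malignant", "benign", "malignant"], classes)
print(encoded)  # [[0, 1], [1, 0], [0, 1]]
```

Each vector then matches the softmax output layer trained on top of the extracted VGG-16 features.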

Read More

Doi: https://doi.org/10.54216/JISIoT.160222

Vol. 16 Issue. 2 PP. 308-324, (2025)