Journal of Intelligent Systems and Internet of Things

Journal DOI: https://doi.org/10.54216/JISIoT

ISSN (Online): 2690-6791 | ISSN (Print): 2769-786X

Capsule Networks for Rice Leaf Disease Classification

Eman Turki Mahdi, Wijdan Jaber AL-kubaisy, Maha Mahmood

Deep learning (DL) is a high-performance machine learning approach that combines supervised learning and feature learning. It builds sophisticated models with numerous hidden layers and neurons to create advanced image processing systems, and it has proven its effectiveness and resilience in many fields, including big data, computer vision, and image processing. In agriculture, rice leaf infections are a frequent and pervasive problem that lowers crop quality and yield. This research proposes a reduced form of the Capsule Network (CapsNet), a variant of the convolutional neural network, for the classification of rice leaf disease. The goal of the proposed CapsNet model is to assess the suitability of various feature learning models and to enhance the capacity of deep learning models to learn rice leaf disease classification. The CapsNet was fed images of both healthy and infected leaves. High classification performance was obtained with the best configuration (FC1 (960), FC2 (768), and FC3 (4096)), which achieved 96.66% accuracy, 97.25% sensitivity, and 97.49% specificity.
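
As a rough illustration of the reported FC1 (960) / FC2 (768) / FC3 (4096) configuration, the sketch below builds such a fully connected head in PyTorch; the flattened capsule-feature dimension (1152) and the four-class output are assumptions, not details taken from the paper.

```python
# Hypothetical sketch of the reported FC1(960)-FC2(768)-FC3(4096) head; the
# capsule feature dimension (1152) and the four-class output are assumptions.
import torch
import torch.nn as nn

class RiceLeafHead(nn.Module):
    def __init__(self, capsule_dim: int = 1152, num_classes: int = 4):
        super().__init__()
        self.classifier = nn.Sequential(
            nn.Linear(capsule_dim, 960), nn.ReLU(),   # FC1
            nn.Linear(960, 768), nn.ReLU(),           # FC2
            nn.Linear(768, 4096), nn.ReLU(),          # FC3
            nn.Linear(4096, num_classes),             # class scores
        )

    def forward(self, capsule_features: torch.Tensor) -> torch.Tensor:
        return self.classifier(capsule_features)

if __name__ == "__main__":
    head = RiceLeafHead()
    logits = head(torch.randn(8, 1152))   # batch of 8 flattened capsule vectors
    print(logits.shape)                   # torch.Size([8, 4])
```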

Doi: https://doi.org/10.54216/JISIoT.140201

Vol. 14 Issue. 2 PP. 01-07, (2025)

New Adaptive-Clustered Routing Protocol for Indoor Fire Emergencies Using Hybrid CNN-BiLSTM Model: Development and Validation

Ola Khudhair Abbas, Fairuz Abdullah, Nurul Asyikin Mohamed Radzi, Aymen Dawood Salman

This study presents a new adaptive routing protocol for fire emergencies, leveraging a newly created dataset and a hybrid deep learning approach to optimize decision-making and data routing strategies. The developed protocol integrates a hybrid of Convolutional Neural Networks (CNNs) with Bi-Directional Long Short-Term Memory (BiLSTMs) deep learning models to predict fires at early stages, effectively managing the dynamic and unpredictable nature of fire emergencies to prevent data loss and ensure packet delivery to the base station. Exhaustive validation was conducted utilizing the standard protocol to ensure the reliability and effectiveness of the proposed approach. Experimental results demonstrate the superior performance of the proposed hybrid-deep learning model and the significant enhancements in routing efficiency and monitored data preservation for the developed protocol compared to the standard protocol. The findings are useful in providing a reliable solution for adaptive routing during emergencies.
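
The abstract does not give the exact architecture, so the following is only a minimal sketch of how a 1-D CNN can feed a BiLSTM for early fire prediction; the four sensor channels, layer sizes, and two-class output are illustrative assumptions.

```python
# Minimal sketch of a CNN + BiLSTM fire-prediction classifier; the number of
# sensor channels (4: temperature, smoke, CO, humidity) and all layer sizes
# are illustrative assumptions, not the authors' exact architecture.
import torch
import torch.nn as nn

class FireCNNBiLSTM(nn.Module):
    def __init__(self, n_sensors: int = 4, hidden: int = 64, n_classes: int = 2):
        super().__init__()
        self.cnn = nn.Sequential(              # local temporal patterns
            nn.Conv1d(n_sensors, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.bilstm = nn.LSTM(32, hidden, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden, n_classes)   # fire / no-fire

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, sensors, time)
        feats = self.cnn(x).transpose(1, 2)          # (batch, time, 32)
        out, _ = self.bilstm(feats)
        return self.fc(out[:, -1, :])                # last time step

if __name__ == "__main__":
    model = FireCNNBiLSTM()
    print(model(torch.randn(8, 4, 60)).shape)        # torch.Size([8, 2])
```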

Doi: https://doi.org/10.54216/JISIoT.140202

Vol. 14 Issue. 2 PP. 08-24, (2025)

Intrusion Detection System in Wireless Sensor Networks Using Machine Learning

Zainab S. Idan, Ahmed Al-Fatlawi, Hussein Akeel Hussein Alaasam, Sajjad H. Hasan, Ahmed Ali Talib Al Khazaali

Current industrial control systems are increasingly integrating with corporate Internet technology networks in order to fully utilize the abundant resources available on the Internet. This growing connection between industrial control systems and the Internet has made them an attractive target, and they require significant protection because they are subject to a wide range of cyber-attacks. The use of the Internet of Things is also increasing across industries because of its efficiency, yet it faces serious security challenges. This paper gives an overview of intrusion detection systems and their methods, examines the available intrusion detection techniques, and identifies the best-performing method reported in the literature. Experimental results show that combining machine learning methods in the proposed system yields high detection performance.
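
Since the abstract only states that a combination of machine learning methods is used, the snippet below is a hypothetical soft-voting ensemble on synthetic traffic features; the chosen estimators and the data are assumptions, not the authors' system.

```python
# Illustrative combination of classifiers for intrusion detection in WSN-style
# traffic; the synthetic data and the particular estimators are assumptions,
# since the abstract does not name the methods used.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

ids = VotingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
        ("svm", SVC(probability=True, random_state=0)),
        ("lr", LogisticRegression(max_iter=1000)),
    ],
    voting="soft",   # average predicted probabilities across models
)
ids.fit(X_tr, y_tr)
print("detection accuracy:", ids.score(X_te, y_te))
```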

Doi: https://doi.org/10.54216/JISIoT.140203

Vol. 14 Issue. 2 PP. 25-35, (2025)

Reduce Energy Consumption and Increase Lifetime via Genetic Algorithm over Wireless Communication Networks

Mohammed Arif Nadhom Obaid Al-agar, Zaynab Saeed Hameed, Israa Ali Al-Neami, Sergey Drominko, Erina Kovachiskaya

Wireless sensor networks (WSNs) have been identified as one of the most important technologies, and a vast amount of research and development has been devoted to this area in the past decade. They are now applied in various fields, including environment monitoring, smart buildings, and medical care. With advances in electronics, wireless communications, and sensor technology, more and more new research opportunities have been created in wireless sensor networks. However, the successful implementation of a WSN faces many challenges, such as limited power, limited memory, and limited computing capability. Among them, limited power is the most critical restriction because it is usually impossible to recharge battery-powered sensor nodes. Therefore, one of the main areas of interest in wireless sensor network research is how to reduce power consumption. The proposed system classifies sensor nodes into two operational modes, optimizes node deployment, and finds optimal node placements using a genetic algorithm (GA) to minimize the energy consumption of the WSN. The system's successful testing on a simulated WSN intended for radiation site identification revealed its potential for practical real-world applications.
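
As a hedged illustration of using a genetic algorithm for node placement, the toy sketch below evolves relay positions that minimize a squared-distance energy proxy; the two-tier relay/sensor model, field size, and GA settings are assumptions rather than the paper's formulation.

```python
# Toy genetic algorithm for relay-node placement: fixed sensors send data to the
# nearest relay, relays forward to the sink, and energy is approximated by
# squared distances.  Field size, counts, and GA settings are all assumptions.
import numpy as np

rng = np.random.default_rng(0)
SENSORS = rng.uniform(0, 100, size=(60, 2))       # fixed sensor positions
SINK = np.array([50.0, 50.0])
N_RELAYS, POP, GENS = 5, 40, 300

def energy(genome):                                # lower is better
    relays = genome.reshape(N_RELAYS, 2)
    d2 = ((SENSORS[:, None, :] - relays[None, :, :]) ** 2).sum(-1)
    sensor_cost = d2.min(axis=1).sum()             # each sensor -> nearest relay
    relay_cost = ((relays - SINK) ** 2).sum()      # relays -> sink
    return sensor_cost + relay_cost

pop = rng.uniform(0, 100, size=(POP, N_RELAYS * 2))
for _ in range(GENS):
    order = np.argsort([energy(g) for g in pop])
    parents = pop[order[: POP // 2]]               # elitist selection
    kids = parents.copy()
    for i in range(len(kids)):                     # one-point crossover + mutation
        c = rng.integers(1, N_RELAYS * 2)
        kids[i, c:] = parents[(i + 1) % len(parents), c:]
    kids = np.clip(kids + rng.normal(0, 2.0, kids.shape), 0, 100)
    pop = np.vstack([parents, kids])

best = min(pop, key=energy)
print("best relay layout:\n", best.reshape(N_RELAYS, 2).round(1))
print("energy proxy:", round(energy(best), 1))
```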

Doi: https://doi.org/10.54216/JISIoT.140204

Vol. 14 Issue. 2 PP. 36-43, (2025)

Smart Home Cloud Monitoring Design and Investigation Using Artificial Intelligence Strategies

Hiba A. Tarish

Artificial intelligence (AI) is advancing rapidly in all areas and applications of life. The use of modern technologies has become a necessity, and smart systems have entered daily life, especially in the design of smart homes. AI-enabled smart homes mimic the way residents live and facilitate many activities and services. Although some studies have shown how smart homes use AI, few applications have been reported that integrate smart technologies into the installation and use of the Internet of Things. This research addresses the basic problems in adaptive smart home systems, such as the development of the smart home and its synchronization with the Internet of Things, and takes the question "what is the relationship between analysis and adaptation in smart homes when simulating intelligent algorithms?" as its focal point. Moreover, this study aims to describe the capabilities and elements of artificial intelligence in improving the performance of smart homes. To understand how artificial intelligence can be used to build smart homes, the precise situations in which AI is applied to smart home elements, and the way these applications are used in homes, were determined. We simulated a multi-service smart home environment by designing an efficient, multi-purpose artificial intelligence algorithm to improve the level of control and enhance the performance of smart home services.

Doi: https://doi.org/10.54216/JISIoT.140205

Vol. 14 Issue. 2 PP. 44-61, (2025)

Security Inspection for Data Computing Networks Using Deep Learning Techniques

Alaa Q. Raheema

Deep learning offers practical answers for cloud computing security when neural network models are applied to it: through automation it identifies threats, reduces manual monitoring, and improves overall security effectiveness. Deep learning models play a pivotal role in security tasks such as intrusion detection, malware detection, anomaly detection, and log analysis. Integrating deep learning into cloud security requires careful assessment of existing frameworks, definition of goals, selection and preparation of a dataset, model tuning, and final adjustments for compliance. Moreover, applying deep learning methods in cloud security requires consideration of factors such as computational resources, data collection, deployment costs, model development, integration effort, and continuous monitoring and maintenance. This study proposes an artificial neural network (ANN) model deployed in the cloud to identify cloud security components and simulate security techniques, and it investigates the essential steps needed to integrate such models into the cloud. The effectiveness of the ANN scheme depends on cloud parameters such as the quality of the training data, the network architecture, and the weight-update algorithms. The study employs a dataset from Kaggle.com to validate the simulation and outlines the steps involved in training and evaluating the ANN structure.
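
A minimal sketch of the described train-and-evaluate workflow is given below, assuming a generic tabular security dataset in place of the (unnamed) Kaggle data; the ANN architecture and preprocessing choices are illustrative only.

```python
# Sketch of the ANN training/evaluation loop described above; the synthetic
# feature matrix stands in for the (unnamed) Kaggle security dataset, and the
# hidden-layer sizes are assumptions.
from sklearn.datasets import make_classification
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=3000, n_features=30, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=1)

scaler = StandardScaler().fit(X_tr)                     # training-data quality matters
ann = MLPClassifier(hidden_layer_sizes=(64, 32),        # network architecture
                    max_iter=500, random_state=1)       # weight-update settings
ann.fit(scaler.transform(X_tr), y_tr)
print(classification_report(y_te, ann.predict(scaler.transform(X_te))))
```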

Doi: https://doi.org/10.54216/JISIoT.140206

Vol. 14 Issue. 2 PP. 62-77, (2025)

Prediction and Classification of Fatty Liver Disease Using Probabilistic Neural Networks

Appanaboyina Sindhuja, Seetharam Khetavath

Fatty liver disease, encompassing conditions like NAFLD (Non-Alcoholic Fatty Liver Disease) and NASH (Non-Alcoholic Steatohepatitis), is a significant global health issue linked to metabolic syndrome and increasing incidences of liver-related complications. Accurate and early detection of fatty liver disease is critical for effective intervention and management. This paper proposes a novel method for the prediction and classification of fatty liver disease using Probabilistic Neural Networks (PNNs), leveraging advanced machine learning techniques to enhance diagnostic accuracy and reliability. We developed a PNN-based model to classify liver conditions from a dataset comprising clinical and imaging features, including liver fat content, texture metrics, and demographic information. The PNN was chosen for its capability to handle complex, high-dimensional data and provide probabilistic outputs, which are crucial for assessing the likelihood of different disease stages and improving interpretability. The proposed methodology includes preprocessing steps to normalize and augment the data, followed by feature extraction using advanced techniques to capture relevant patterns. The PNN architecture was designed with multiple layers to process features and deliver class probabilities. The method's performance was evaluated using standard metrics such as accuracy, precision, recall, and F1-score, demonstrating its efficacy in distinguishing between different stages of fatty liver disease. Experimental results indicate that the PNN model achieves high classification accuracy and outperforms traditional machine learning methods in detecting fatty liver disease. This study highlights the potential of PNNs in enhancing diagnostic processes and providing a robust tool for clinicians. Future work will concentrate on expanding the dataset, refining the model, and integrating it into clinical workflows to support better patient outcomes in liver disease management.
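
For readers unfamiliar with PNNs, the snippet below is a minimal Parzen-window formulation of the classifier on two synthetic clinical features; the feature choice, smoothing parameter, and data are assumptions, not the study's dataset.

```python
# Minimal probabilistic neural network (Parzen-window) classifier; the two
# clinical features and the smoothing parameter sigma are assumptions.
import numpy as np

def pnn_predict(X_train, y_train, X_test, sigma=0.5):
    """Return predicted labels and class probabilities via per-class Gaussian kernels."""
    classes = np.unique(y_train)
    probs = np.zeros((len(X_test), len(classes)))
    for j, c in enumerate(classes):
        Xc = X_train[y_train == c]
        # squared distances from every test point to every training point of class c
        d2 = ((X_test[:, None, :] - Xc[None, :, :]) ** 2).sum(-1)
        probs[:, j] = np.exp(-d2 / (2 * sigma ** 2)).mean(axis=1)
    probs /= probs.sum(axis=1, keepdims=True)             # normalise to probabilities
    return classes[np.argmax(probs, axis=1)], probs

rng = np.random.default_rng(0)
healthy = rng.normal([5.0, 1.0], 0.8, size=(100, 2))       # e.g. low fat %, fine texture
fatty = rng.normal([20.0, 3.0], 1.5, size=(100, 2))        # e.g. high fat %, coarse texture
X = np.vstack([healthy, fatty])
y = np.array([0] * 100 + [1] * 100)
labels, p = pnn_predict(X, y, np.array([[6.0, 1.2], [18.0, 2.8]]))
print(labels, p.round(3))
```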

Doi: https://doi.org/10.54216/JISIoT.140207

Vol. 14 Issue. 2 PP. 78-90, (2025)

Automated Detection and Classification of Pneumonia using Deep Learning and Convolutional Neural Networks

Gurijala Anita, Sunil Singarapu

Lung diseases constitute a considerable health burden. They include chronic obstructive pulmonary disease, asthma, lung fibrosis, lung parenchyma illnesses, and tuberculosis, among others. Early detection is highly critical, since lung illnesses are most treatable in their early phases. Many detection systems have been built for this purpose by applying machine learning and image processing, and many types of DL methods, including CNN, VNN, VGG networks, and capsule networks, are used in the lung illness prediction process. Since the outbreak of the COVID-19 pandemic, many projects have been carried out at the international level to study the feasibility of such work for predicting future cases. Pneumonia is a lung infection that starts early in the disease course and is closely associated with the virus, which was responsible for considerable chest infection in some COVID-positive individuals. While doctors are no strangers to lung diseases and their complicated nature, many find it difficult in some cases to distinguish between common pneumonia and COVID-19. Chest X-ray imaging provides a high degree of accuracy in assessing lung diseases. In this work, a novel approach for the detection of lung illnesses such as pneumonia and COVID-19 is proposed. The data source for this method is chest X-ray pictures taken from patients. The system includes dataset collection, image-quality enhancement, precise and adaptive evaluation of the region of interest (ROI), feature extraction, and illness prediction. In the future, this research can be extended with IoT devices for the recognition of COVID-19 and pneumonia.
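
The paper's network is not specified in the abstract, so the following is only a hypothetical three-class (normal / pneumonia / COVID-19) chest X-ray CNN; the layer sizes and 224x224 grayscale input are assumptions.

```python
# Illustrative 3-class chest X-ray classifier (normal / pneumonia / COVID-19);
# the layer sizes and 224x224 grayscale input are assumptions, not the paper's
# exact network.
import torch
import torch.nn as nn

class ChestXrayCNN(nn.Module):
    def __init__(self, n_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

if __name__ == "__main__":
    model = ChestXrayCNN()
    xray_batch = torch.randn(4, 1, 224, 224)        # stand-in for preprocessed ROIs
    print(model(xray_batch).shape)                  # torch.Size([4, 3])
```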

Doi: https://doi.org/10.54216/JISIoT.140208

Vol. 14 Issue. 2 PP. 91-102, (2025)

Research on Big Data efficient hybrid cloud storage model and algorithm based on 5G network

Lei Hu, Yangxia Shu

Because of the large demand for big data storage capacity, and because the storage intensity index is not calculated in current big data cloud storage processes, storage space usage is high. This paper proposes an efficient hybrid cloud storage model and algorithm for big data under a 5G network. The model is based on the 5G network performance framework and consists of three parts: users, a private cloud, and public cloud storage service providers. Efficient hybrid cloud storage of big data is achieved by using a consistent hashing algorithm. The simulation results show that the proposed algorithm occupies less storage memory space, yields a smaller device load variance, keeps the overall system load more stable and balanced, and responds quickly on average, which provides a favorable basis for efficient hybrid cloud storage of big data.
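
As a sketch of the consistent-hashing idea used for hybrid cloud placement, the snippet below maps objects onto a hash ring shared by private- and public-cloud nodes; the node names and virtual-node count are assumptions.

```python
# Minimal consistent-hash ring that splits objects between private- and
# public-cloud storage nodes; node names and virtual-node count are assumptions.
import bisect
import hashlib

def h(key: str) -> int:
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class ConsistentHashRing:
    def __init__(self, nodes, vnodes: int = 100):
        # each physical node gets many virtual positions to smooth the load
        self.ring = sorted((h(f"{n}#{i}"), n) for n in nodes for i in range(vnodes))
        self.keys = [k for k, _ in self.ring]

    def locate(self, obj_id: str) -> str:
        idx = bisect.bisect(self.keys, h(obj_id)) % len(self.ring)
        return self.ring[idx][1]

ring = ConsistentHashRing(["private-1", "private-2", "public-A", "public-B"])
for blob in ["user42/video.mp4", "logs/2025-01-01.gz", "model.bin"]:
    print(blob, "->", ring.locate(blob))
```

Because adding or removing a storage node only relocates the keys between that node and its predecessor on the ring, rebalancing touches a small fraction of objects, which is what keeps device loads stable.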

Doi: https://doi.org/10.54216/JISIoT.140209

Vol. 14 Issue. 2 PP. 103-114, (2025)

Social Media Platform Based Evaluation of Teaching Style on Online Education System using Heuristic Search with Stacked Sparse Autoencoder Model

Walaa Fouda, Sanjar Mirzaliev, Reneh Abokhoza

As online education has become increasingly prominent, the primary objective of this study is to evaluate students' opinions of online classes taught by teachers with no prior experience in online teaching, focusing on their teaching style, teaching efficiency, and pedagogy in the online classroom. Online teaching is a teaching system that depends on network management technology: courses are delivered live or as recordings using software that provides a dedicated online teaching environment or any app employed for teaching. Social media, with its massive pool of user-generated content and instant feedback, offers a great opportunity to evaluate teaching styles in online class management. Therefore, this study offers a Social Media Based Evaluation of Teaching Style in Online Education Systems using Heuristic Search (SMBETS-OESHS) algorithm. The main objective of the SMBETS-OESHS technique is to evaluate teaching styles in online education systems using insights derived from social media platforms. In the first stage, linear scaling normalization (LSN) is implemented to scale the input data. Next, a Bayesian optimization algorithm (BOA) based feature selection process is employed to detect the most relevant features in the data. In addition, the SMBETS-OESHS model exploits a stacked sparse autoencoder (SSAE) technique for the classification process. To achieve optimal performance, the SSAE model parameters are fine-tuned using the improved beetle optimization algorithm (IBOA), ensuring robust evaluation accuracy. The SMBETS-OESHS algorithm undergoes experimental validation, and its performance is examined using various measures. The simulation outcomes show that the SMBETS-OESHS system provides an enhanced solution compared with recent techniques.
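
Two of the stages named above can be sketched compactly: linear scaling normalization and a single sparse autoencoder layer with an L1 activity penalty (the BOA feature selection and IBOA tuning steps are omitted). Dimensions, the penalty weight, and the toy data are assumptions.

```python
# Sketch of the LSN and sparse-autoencoder stages only; feature dimension,
# penalty weight, and the random stand-in data are assumptions.
import torch
import torch.nn as nn

def linear_scaling_normalisation(x: torch.Tensor) -> torch.Tensor:
    lo, hi = x.min(dim=0).values, x.max(dim=0).values
    return (x - lo) / (hi - lo + 1e-8)                 # scale each feature to [0, 1]

class SparseAE(nn.Module):
    def __init__(self, d_in: int = 50, d_hidden: int = 16, l1: float = 1e-3):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(d_in, d_hidden), nn.Sigmoid())
        self.dec = nn.Linear(d_hidden, d_in)
        self.l1 = l1

    def loss(self, x):
        z = self.enc(x)
        recon = self.dec(z)
        # reconstruction error plus sparsity penalty on hidden activations
        return nn.functional.mse_loss(recon, x) + self.l1 * z.abs().mean()

x = linear_scaling_normalisation(torch.randn(256, 50))   # stand-in social-media features
ae = SparseAE()
opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
for _ in range(50):
    opt.zero_grad()
    l = ae.loss(x)
    l.backward()
    opt.step()
print("final reconstruction+sparsity loss:", float(l))
```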

Doi: https://doi.org/10.54216/JISIoT.140210

Vol. 14 Issue. 2 PP. 115-126, (2025)

Artificial Intelligence based Automated Sign Gesture Recognition Solutions for Visually Challenged People

Khalid Hamed Allehaibi

Gesture recognition is employed in human-machine communication, enhancing the lives of people with impairments or those who depend on non-verbal instructions. Hand gestures play an important role in assistive technology for persons with visual impairments, where an optimal user communication design is of major importance. Many authors have made substantial progress on gesture recognition by modeling methods with deep learning (DL). This article introduces a Robust Gesture Sign Language Recognition Using Chicken Earthworm Optimization with Deep Learning (RSLR-CEWODL) approach. The proposed RSLR-CEWODL algorithm focuses mainly on the recognition and classification of sign language. To accomplish this, the presented RSLR-CEWODL technique utilizes a residual network (ResNet-101) model for feature extraction. For optimal hyperparameter tuning, the presented RSLR-CEWODL algorithm exploits the CEWO algorithm. Besides, the RSLR-CEWODL technique uses a whale optimization algorithm (WOA) with a deep belief network (DBN) for the sign language recognition step. The simulation result of the RSLR-CEWODL algorithm is tested using sign language datasets, and the outcome is measured under various metrics. The simulation values demonstrate the enhancements of the RSLR-CEWODL technique over other methodologies.
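
The feature-extraction stage can be illustrated with torchvision's ResNet-101 as below; weights=None keeps the snippet offline (the study would presumably use pretrained weights), the input is a random stand-in for a gesture frame, and the WOA-tuned DBN classifier is not shown.

```python
# Feature-extraction step of the pipeline above using torchvision's ResNet-101;
# weights=None is used so the snippet runs offline, and the random tensor is a
# stand-in for a preprocessed sign-language frame.
import torch
import torch.nn as nn
from torchvision import models

backbone = models.resnet101(weights=None)     # swap in pretrained weights in practice
backbone.fc = nn.Identity()                   # drop the ImageNet classifier head
backbone.eval()

frame = torch.randn(1, 3, 224, 224)           # one preprocessed gesture frame
with torch.no_grad():
    features = backbone(frame)                # 2048-dim feature vector
print(features.shape)                         # torch.Size([1, 2048])
```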

Doi: https://doi.org/10.54216/JISIoT.140211

Vol. 14 Issue. 2 PP. 127-139, (2025)

Enhancing Convolutional Neural Network for Image Retrieval

Zena M. Saadi, Ahmed T. Sadiq, Omar Z. Akif, El-Sayed M. El-kenawy

With the continuous progress of image retrieval technology, the speed of searching for a required image in a large amount of image data has become an important issue. Convolutional neural networks (CNNs) have been used in image retrieval; however, many CNN-based image retrieval systems have a poor ability to express image features. Content-Based Image Retrieval (CBIR) is a method of finding desired images in image databases, but it suffers from lower accuracy when retrieving images from large-scale databases. In this paper, the proposed system is an improved convolutional neural network for greater accuracy, together with a machine learning tool that can be used for automatic image retrieval. It includes two phases. The first phase (offline processing) consists of two stages: stage 1 performs CNN model classification, while stage 2 extracts high-level features directly from the CNN through a flattening layer and stores them in a vector. In the second phase (online processing), retrieval relies on a query by image (QBI): the online CNN model stage extracts the features of the submitted image, and an evaluation is then conducted between the extracted features and the previously stored features using the Hamming distance, returning all similar images to the user. Image classification with deep learning achieved 97.94% accuracy, while retrieval accuracy reached 98.94%. The work was done on the COREL image dataset. Training images for retrieval is more demanding than plain image classification because it requires more computational resources. In the experimental part, training images using the CNN achieved high accuracy, proving that the model performs well in image retrieval.
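
A hedged sketch of the online matching step follows: stored CNN feature vectors are binarized and compared to the query with the Hamming distance. The 512-dimensional features and median-threshold binarization are assumptions about details the abstract does not specify.

```python
# Sketch of the online matching step: CNN feature vectors are binarised and the
# query is compared to stored vectors with the Hamming distance.  The feature
# dimension and the median-threshold binarisation are assumptions.
import numpy as np

rng = np.random.default_rng(0)
stored_features = rng.normal(size=(1000, 512))     # offline phase: flattened CNN features
query_features = stored_features[42] + rng.normal(0, 0.1, 512)   # noisy copy of image 42

def binarise(v: np.ndarray) -> np.ndarray:
    return (v > np.median(v)).astype(np.uint8)      # 1 bit per feature

db_codes = np.array([binarise(f) for f in stored_features])
q_code = binarise(query_features)

hamming = (db_codes != q_code).sum(axis=1)          # bit differences per stored image
top10 = np.argsort(hamming)[:10]                    # most similar images first
print("best match:", top10[0], "distances:", hamming[top10])
```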

Doi: https://doi.org/10.54216/JISIoT.140212

Vol. 14 Issue. 2 PP. 140-152, (2025)

IoT Innovations for Transforming the Future of Tourism Industry: Towards Smart Tourism Systems

Olim Astanakulov, Muhammad Eid BALBA, Khayitov Khushvakt, Sokhibova Muslimakhon

The Internet of Things (IoT) has significantly transformed the tourism industry, reshaping travel design, supply, and experiences. This paper reviews the key developments in tourism IoT from the mid-2010s, highlighting technological, economic, and socio-cultural impacts. It explores the adoption of IoT technologies –such as smart wearables, intelligent transportation systems, and augmented reality –across tourism sectors, emphasizing their effects on tourist behaviour and sustainable tourism development. A mixed-method approach, including literature reviews and expert interviews, is used to analyse these trends. Findings reveal that IoT enhances personalization, immersion, and sustainability in travel experiences, though privacy, security, and ethical issues pose challenges. Strategic planning and collaboration are necessary to leverage IoT innovations for sustainable tourism growth.

Doi: https://doi.org/10.54216/JISIoT.140213

Vol. 14 Issue. 2 PP. 153-164, (2025)

Forecasting for Vaccinated COVID-19 Cases using Supervised Machine Learning in Healthcare Sector

Ali Khraisat, Mohd Khanapi Abd Ghani

Machine learning (ML)-based forecasting techniques have demonstrated significant value in predicting postoperative outcomes, aiding in improved decision-making for future tasks. ML algorithms have already been applied in various fields where identifying and ranking risk variables are essential. To address forecasting challenges, a wide range of predictive techniques is commonly employed. Research indicates that ML-based models can accurately predict the impact of COVID-19 on Jordan's healthcare system, a concern now recognized as a potential global health threat. Specifically, to determine COVID-19 risk classifications, this study utilized three widely adopted forecasting models: support vector machine (SVM), least absolute shrinkage and selection operator (LASSO), and linear regression (LR). The findings reveal that applying these techniques in the current COVID-19 outbreak scenario is a viable approach. Results indicate that LR outperforms all other models tested in accurately forecasting death rates, recovery rates, and newly reported cases, with LASSO following closely. However, based on the available data, SVM exhibits lower performance across all predictive scenarios.
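
A minimal, assumption-laden comparison of the three named forecasters is sketched below on a synthetic daily case curve with a seven-day look-back window; it mirrors the evaluation idea, not the study's actual data or tuning.

```python
# Minimal comparison of the three forecasters named above on a synthetic daily
# case curve; the data and the 7-day look-back window are assumptions.
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression
from sklearn.metrics import mean_absolute_error
from sklearn.svm import SVR

days = np.arange(200)
cases = 50 + 0.8 * days + 10 * np.sin(days / 7) + np.random.default_rng(0).normal(0, 3, 200)

WIN = 7                                               # predict tomorrow from last 7 days
X = np.array([cases[i - WIN:i] for i in range(WIN, len(cases))])
y = cases[WIN:]
X_tr, X_te, y_tr, y_te = X[:150], X[150:], y[:150], y[150:]

for name, model in [("LR", LinearRegression()), ("LASSO", Lasso(alpha=0.1)), ("SVM", SVR())]:
    model.fit(X_tr, y_tr)
    print(name, "MAE:", round(mean_absolute_error(y_te, model.predict(X_te)), 2))
```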

Doi: https://doi.org/10.54216/JISIoT.140214

Vol. 14 Issue. 2 PP. 165-177, (2025)

MODRS: A Multi-Objective Deep Learning Algorithm for Optimizing Routing and Scheduling in LEO Satellite Networks

Ali Jaber Almalki

The demand for high-quality Direct-to-Home (D2H) television broadcasting services delivered via Low Earth Orbit (LEO) satellite constellations has surged in recent years. To address the growing needs of viewers, satellite communication must optimize the scheduling and routing of signals while balancing conflicting objectives. This research presents a novel approach, the Multi-Objective Deep Routing and Scheduling (MODRS) algorithm, designed to tackle the challenges of signal latency minimization, bandwidth utilization maximization, and viewer demand satisfaction. A Multi-Objective Deep Neural Network (MODNN) is implemented in this paper to make intelligent routing and scheduling decisions that balance multiple objectives. To enhance the learning process and provide training stability, experience replay is used, and an epsilon-greedy strategy is included to balance exploitation and exploration. The Pareto-front concept is used for efficient D2H television broadcasting in the LEO satellite constellation. The experimental validation is conducted based on low-latency broadcasting, high bandwidth utilization, viewer demand flexibility, adaptive signal strength, and resource allocation efficiency. Using a series of simulated scenarios, this paper explores the versatility and robustness of MODRS, showcasing its exceptional performance in real-time, resource-efficient, disaster recovery, and rural broadcasting contexts. The findings indicate that MODRS is well-suited for a wide range of real-world applications, from low-latency broadcasting and disaster recovery to cost-effective rural expansion, enhancing the quality and accessibility of D2H television services. The MODRS algorithm emerges as a transformative solution for satellite communication optimization, ensuring viewer satisfaction and operational efficiency.
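
The experience-replay and epsilon-greedy components mentioned above can be sketched as follows; the buffer capacity, toy Q-values, and epsilon schedule are illustrative assumptions rather than MODRS's actual settings.

```python
# Sketch of an experience-replay buffer and epsilon-greedy action selection;
# state/action shapes, buffer capacity, and the epsilon schedule are assumptions.
import random
from collections import deque

class ReplayBuffer:
    def __init__(self, capacity: int = 10_000):
        self.buffer = deque(maxlen=capacity)

    def push(self, state, action, reward, next_state):
        self.buffer.append((state, action, reward, next_state))

    def sample(self, batch_size: int):
        return random.sample(list(self.buffer), batch_size)   # decorrelated minibatch

def epsilon_greedy(q_values, epsilon: float):
    """Explore a random route/slot with probability epsilon, else exploit the best one."""
    if random.random() < epsilon:
        return random.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])

buf = ReplayBuffer()
for step in range(64):
    action = epsilon_greedy([0.1, 0.7, 0.2], epsilon=max(0.05, 1.0 - step / 50))
    buf.push(state=step, action=action, reward=random.random(), next_state=step + 1)
print("sampled batch size:", len(buf.sample(16)))
```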

Doi: https://doi.org/10.54216/JISIoT.140216

Vol. 14 Issue. 2 PP. 189-212, (2025)

Intelligent Crop Disease Detection and Classification Using Deep Convolution Neural Network with Honey Badger Algorithm on Image Data

Daniel Arockiam, Azween Abdullah, Valliappan Raju

Cotton is the most significant cash crop in India, yet cotton production decreases each year because of disease. Plant diseases are usually caused by pathogens and pest insects and reduce the yield on a large scale if not controlled in time. An effective plant disease diagnosis system that can assist farmers in their farming and cultivation is therefore urgently needed, since cotton production is harmed by viruses, pests, bacterial pathogens, and so on. Over the past decade, numerous image processing or deep learning (DL) based automated plant leaf disease recognition techniques have been established, but unfortunately they rarely focus on cotton leaf diseases. Therefore, this article develops an Intelligent Detection and Classification of Cotton Leaf Diseases Using Transfer Learning and the Honey Badger Algorithm (IDCCLD-TLHBA) model with satellite images. The proposed IDCCLD-TLHBA technique intends to determine and classify various kinds of cotton leaf diseases using satellite imagery. In the IDCCLD-TLHBA technique, the Wiener filtering (WF) model is used to reduce noise and enhance image quality for subsequent analysis. For feature extraction, the IDCCLD-TLHBA technique applies the MobileNetV2 model to capture relevant features from the satellite images while maintaining computational efficiency. In addition, the stacked long short-term memory (SLSTM) method is employed for the classification and recognition of cotton leaf diseases. Eventually, the honey badger algorithm (HBA) is used to optimally select the parameters of the SLSTM model, ensuring a better network configuration and enhanced results. The performance validation of the IDCCLD-TLHBA method is carried out against a benchmark dataset, and the simulated results highlight the better performance of the IDCCLD-TLHBA model compared with existing techniques.
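
A hedged sketch of the feature-extraction and stacked-LSTM stages is shown below; treating MobileNetV2's 7x7 feature map as a 49-step sequence is an assumption about how the two components are joined, weights=None keeps the snippet offline, and the HBA tuning step is omitted.

```python
# Hedged sketch of the MobileNetV2 feature extractor followed by a stacked LSTM
# classifier; the sequence interpretation of the feature map, the five-class
# output, and weights=None are assumptions.
import torch
import torch.nn as nn
from torchvision import models

class CottonDiseaseNet(nn.Module):
    def __init__(self, n_classes: int = 5):
        super().__init__()
        self.backbone = models.mobilenet_v2(weights=None).features       # (B, 1280, 7, 7)
        self.slstm = nn.LSTM(1280, 128, num_layers=2, batch_first=True)  # stacked LSTM
        self.fc = nn.Linear(128, n_classes)

    def forward(self, x):
        fmap = self.backbone(x)                              # (B, 1280, 7, 7)
        seq = fmap.flatten(2).transpose(1, 2)                # (B, 49, 1280)
        out, _ = self.slstm(seq)
        return self.fc(out[:, -1, :])

if __name__ == "__main__":
    model = CottonDiseaseNet()
    print(model(torch.randn(2, 3, 224, 224)).shape)          # torch.Size([2, 5])
```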

Doi: https://doi.org/10.54216/JISIoT.140215

Vol. 14 Issue. 2 PP. 178-188, (2025)

Classification of Tomato Diseases Using Deep Learning Method

Adnan M. A. Shakarji, Adem Gölcük

With an average annual intake of almost 20 kilograms per person, tomatoes are the most consumed vegetable worldwide. Diseases caused by harmful organisms are among the most important factors adversely affecting the output and quality of tomato production. Depending on the climate and environmental conditions, tomatoes can be afflicted by a variety of diseases throughout the planting and growing phases, so it is essential for tomato growers to identify possible infections and take the appropriate preventative measures. Applications of artificial intelligence have grown in popularity recently, and AI is being used in agriculture to identify plant diseases. This research uses deep learning, a branch of artificial intelligence, to categorize common tomato diseases. First, samples of frequently seen tomato diseases were gathered from tomato growers in Kirkuk. Once enough data had been collected, image processing algorithms were used to produce meaningful images. The resulting dataset was trained with a CNN-based GoogLeNet deep learning system, and the diseases were classified. The results show that the constructed deep learning system achieves a high degree of success and dependability in tomato disease classification.
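
As an illustration of the GoogLeNet-based classification step, the sketch below runs one fine-tuning step on random stand-in images; the nine-class head and weights=None (to keep the snippet offline) are assumptions, since the paper's class list and training setup are not given in the abstract.

```python
# Sketch of a GoogLeNet-based tomato-disease classifier; the nine-class head and
# weights=None are assumptions -- in practice the study would fine-tune a
# pretrained model on its own leaf-image dataset.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 9                                   # assumed number of disease classes
model = models.googlenet(weights=None, aux_logits=False, init_weights=True)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)   # replace ImageNet head

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

images = torch.randn(4, 3, 224, 224)              # stand-in for preprocessed leaf images
labels = torch.randint(0, NUM_CLASSES, (4,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print("one training step, loss =", float(loss))
```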

Doi: https://doi.org/10.54216/JISIoT.140217

Vol. 14 Issue. 2 PP. 213-228, (2025)