Several diseases have been identified as fatal conditions affecting middle-aged and elderly individuals worldwide. In recent years, chronic and pulmonary diseases have exhibited the highest mortality rates among all known conditions in this category. Machine Learning (ML) tools have been used efficiently to study the causes of these harmful diseases, including the analysis of large databases. These databases may contain unreliable and redundant features that affect prediction accuracy and speed, so applying feature extraction and selection methods to remove inconsistent components is essential. This article implements a deep neural network (DNN) technique to classify different diseases for diagnosis. However, the DNN model faces a challenge, specifically hallucination, in accurately classifying diseases. To overcome this, a hybrid optimization DNN model has been introduced, which is useful for recommending treatments based on the diagnosed diseases. The hybrid optimization DNN model, referred to as the Intelligent Healthcare Recommendation (IHCR) model, is designed to predict and recommend treatments for patients with chronic and pulmonary diseases. The research model effectively extracts features at a specific level and selects valuable features to provide accurate recommendations. The recommendation phase is followed by a probability-based statistical analysis that evaluates patients' risk levels. Based on the data from this risk analysis (RA), patients are given recommendations regarding the severity and progression of the related diseases for early treatment. The proposed work has been evaluated on several multi-disease databases, and the outcomes appear encouraging. This research aims to develop the IHCR model for chronic and pulmonary diseases. The performance of the implemented recommendation models is evaluated using parameters such as RMSE, specificity (SP), sensitivity (SN), and accuracy (Acc).
The results of the recommendation model show an Acc of 96.81–97%.
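The four evaluation metrics named above (sensitivity, specificity, accuracy, RMSE) can all be derived from a binary confusion matrix. The following is a minimal illustrative sketch, not the paper's implementation, and it treats predictions as binary labels:

```python
import math

def confusion_counts(y_true, y_pred):
    """Count TP, TN, FP, FN for binary labels (1 = disease present)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp, tn, fp, fn

def metrics(y_true, y_pred):
    tp, tn, fp, fn = confusion_counts(y_true, y_pred)
    sensitivity = tp / (tp + fn)        # SN: true-positive rate
    specificity = tn / (tn + fp)        # SP: true-negative rate
    accuracy = (tp + tn) / len(y_true)  # Acc
    rmse = math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
                     / len(y_true))
    return sensitivity, specificity, accuracy, rmse
```

For binary labels the RMSE reduces to the square root of the error rate; on continuous risk scores the same formula applies unchanged.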
Doi: https://doi.org/10.54216/JISIoT.160121
Vol. 16 Issue. 1 PP. 245-262, (2025)
Through its integration with Federated Learning (FL) and Digital Twin (DT) technology, Internet of Things (IoT)-based smart livestock farming is revolutionized toward real-time health monitoring and predictive analytics combined with secure decision-making. Privacy risks, inefficient models, large computational overheads, and heterogeneous data remain prominent problems in existing frameworks. This work introduces a “Privacy-Enhanced Digital Twin Livestock Optimization (PEDLO)” system combining several adaptive and AI-driven components: the IntelliSense-Livestock Monitoring Framework (ISLMF) for multi-sensor data fusion, the Privacy-Preserving Hybrid Aggregation (PPHA) algorithm for secure federated learning, and Digital Twin-Augmented Reinforcement Learning (DTARL) for simulation-based decision-making. The PEDLO system optimizes disease prediction and anomaly detection, aims to reduce false alarms, and ensures data privacy for enhanced livestock welfare. Experimental results show an accuracy of 0.94, an anomaly detection sensitivity of 0.93, and a 40-second model convergence time, outperforming state-of-the-art techniques by a wide margin. The proposed system enables scalable, efficient, and secure livestock management, marking a transformative shift toward sustainable precision farming.
Doi: https://doi.org/10.54216/JISIoT.160101
Vol. 16 Issue. 1 PP. 01-18, (2025)
Diabetes presents significant health risks globally, necessitating precise blood glucose monitoring to prevent the serious repercussions of hyperglycemia or hypoglycemia, including blindness, renal illness, kidney failure, heart disease, and even death; it is therefore imperative to maintain normal blood glucose levels. However, regular blood glucose monitoring can be difficult for diabetics, and current non-invasive techniques sometimes do not assess blood sugar levels accurately or directly. To solve this problem, this study proposes an affordable, low-complexity wearable optical system that addresses the accuracy and convenience challenges of existing methods. The system uses an Arduino Nano as the central control unit and a laser transmitter module for blood glucose measurement. Light Dependent Resistors (LDRs) are used to detect and measure the intensity of laser light passing through the skin, which is affected by blood glucose levels. The results are displayed on an Organic Light Emitting Diode (OLED) screen. During a one-week trial, the system achieved average error percentages of 7.6% and 3.9% for blood glucose concentration before and after meals, respectively. This study aims to enhance the lifestyle of diabetic patients by providing user-friendly technology for convenient blood glucose monitoring. It focuses on the potential benefits of non-invasive approaches and on the importance of the proposed wearable optical system in improving healthcare outcomes.
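As an illustration of how an average error percentage of the kind reported above could be computed, the mean absolute percent error of the optical readings against reference values is sketched below; the function and variable names are illustrative, not taken from the paper:

```python
def average_percent_error(measured, reference):
    """Mean absolute percent error between the optical system's readings
    and reference blood glucose values (e.g. from a standard glucometer)."""
    errors = [abs(m - r) / r * 100 for m, r in zip(measured, reference)]
    return sum(errors) / len(errors)
```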
Doi: https://doi.org/10.54216/JISIoT.160102
Vol. 16 Issue. 1 PP. 19-27, (2025)
Pedestrian detection using object detection and deep learning has been found to be an effective method for accurately identifying pedestrians in video frames or images. It is commonly used in many real-time applications, such as security surveillance systems, autonomous driving systems, and robotics. The combination of deep learning techniques and object detection algorithms allows efficient and robust detection of pedestrians in several real-time scenarios. However, it is necessary to improve detection efficacy in complex environments, such as cases with poor visibility due to weather or time of day, crowded scenes, and rare pose samples. Continuous improvement and research in DL algorithms, dataset collection, and training models contribute to improving the robustness and accuracy of pedestrian detection systems. Therefore, this research models a novel marine predator algorithm with DL-based pedestrian detection and classification (MPADLB-PDC) method. The objective of the MPADLB-PDC system lies in the accurate recognition and identification of pedestrians. To achieve this, the MPADLB-PDC technique involves two major processes, namely object detection and classification. In the first stage, the MPADLB-PDC technique uses an improved YOLOv7 object detector to recognize the objects in the frame. In the second stage, an ensemble classifier comprises three classifiers: deep feed-forward neural networks (DFFNNs), an extreme learning machine (ELM), and long short-term memory (LSTM). To improve the recognition performance of the ensemble classifier, the MPA is used to optimally select its related parameters. The simulation outcome of the MPADLB-PDC technique was validated on a pedestrian database, and the outcome can be studied in terms of various aspects. The experimental values validated the better outcome of the MPADLB-PDC approach compared to other approaches.
Doi: https://doi.org/10.54216/JISIoT.160103
Vol. 16 Issue. 1 PP. 28-40, (2025)
Selecting the most relevant feature subset for a task is essential for high accuracy and reduced model training time. Ensemble learning has shown superior results in classification; hence, we propose an ensemble method for feature selection and present a stability analysis of the selected feature set. The research question being investigated is whether ensemble methods are effective at selecting informative features in a dataset and whether the selected features are stable compared to other feature selection methods. This paper presents a tree-based ensemble learning approach for feature selection. Our approach to ensemble feature selection includes function perturbation with a voting ensemble, an ensemble with a fixed number of features, and an ensemble with a contiguous number of features. Ensemble learning is found to be superior to other traditional feature selection algorithms. The ensemble learning algorithms are implemented on two high-dimensional microarray biomedical datasets. From our experimental study, it is observed that the voting ensemble outperforms the other ensemble techniques, reducing feature subset size while achieving higher accuracy. A stability analysis of all the algorithms has been conducted, and it is found that all ensemble techniques have higher stability than the traditional feature selection methods. Thus, ensemble learning proves to be a superior technique for feature selection. Our results demonstrate that the proposed method is effective in identifying relevant and stable features and can improve the performance of machine learning models.
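The voting-ensemble idea can be sketched as majority voting over the top-k features proposed by several base selectors (e.g. different tree-based rankers). This is an illustrative reconstruction under that assumption, not the authors' code:

```python
from collections import Counter

def voting_feature_selection(rankings, k):
    """Majority-vote feature selection: each base selector contributes its
    top-k features; the k features with the most votes are kept
    (ties broken by lower feature index)."""
    votes = Counter()
    for ranking in rankings:        # ranking: feature indices, best first
        votes.update(ranking[:k])
    return sorted(votes, key=lambda f: (-votes[f], f))[:k]
```

With three base rankings `[0, 1, 2, 3]`, `[1, 0, 4, 2]`, and `[1, 2, 0, 5]` and k = 3, features 0 and 1 get three votes each and feature 2 gets two, so the ensemble keeps `[0, 1, 2]`.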
Doi: https://doi.org/10.54216/JISIoT.160104
Vol. 16 Issue. 1 PP. 41-48, (2025)
The following study investigates the role and impact of IoT and AI technologies on the operational efficiency, sustainability, and cost optimization of freight forwarding companies. Its goals are to measure the effects of these technologies on logistics performance, assess sustainability improvements such as decreased carbon emissions and waste, and identify cost-saving drivers of AI and IoT integration. H1: IoT and AI significantly enhance operational efficiency through information sharing, route planning, and warehouse management. H2: They contribute to the reduction of carbon emissions and waste production by allowing real-time tracking and optimizing the usage of materials throughout the production cycle. H3: They reduce logistics operating costs through AI-based automation, predictive analytics, and improved asset management. The approach was a quantitative research design, with data obtained from 240 respondents at five large freight forwarding companies: DHL Global Forwarding, Kuehne + Nagel, DB Schenker, XPO Logistics, and CEVA Logistics. Improvements after adoption are analyzed using structured questionnaires to measure key performance indicators (KPIs), together with frequency analysis and percentage calculation methods. The results confirm the transformative role of IoT and AI in freight logistics, increasing operational efficiency, sustainability, and cost efficiency. Logistics performance can be further optimized through continued investment in digital innovation.
Doi: https://doi.org/10.54216/JISIoT.160105
Vol. 16 Issue. 1 PP. 49-60, (2025)
Turmeric is a rhizomatous crop recognized for its medicinal effects, and it requires significant observation to ensure appropriate growth and progression. Turmeric plant diseases cause yield losses, impacting food production systems and causing economic losses, so early prevention of these diseases is crucial for improving agricultural productivity. For this reason, the Improved YOLOv3-Tiny Model (IY3TM) was developed using Cycle-GAN and a Convolutional Neural Network (CNN) with a residual network for early turmeric plant disease detection. However, this model omits vital details along with the exact positioning of key attributes, decreasing prediction accuracy. To resolve this, a Convolutional and Vision Transformer model for Turmeric Diseases Detection (ConViT-TDD) is proposed for the prediction of turmeric plant diseases. ConViT-TDD is integrated into IY3TM with a self-attention mechanism and a CNN-based global perspective to enhance the performance of the model. A ConViT-TDD block involves an input channel transformation, channel and spatial attention mechanisms, and global-minded transformers. The input channel transformation uses a convolutional layer to minimize the input channel dimension and reduce computational complexity. The global-minded transformers generate a feature vector based on the input channel transformation, which is then transmitted to the encoder component. By collecting channel weights and spatial weights, respectively, the channel and spatial attention modules enhance the model's sensitivity to certain channel attributes and spatial locations, thereby altering the feature representation of those channels and locations. The attention module can adaptively change the weights of channel and spatial features for improved feature extraction and fusion. Once the initial attributes are reformed, the IY3TM detects and classifies the turmeric plant diseases.
The test outcomes reveal that the ConViT-TDD model achieves an overall accuracy of 93.16% on the collected turmeric plant disease images, compared against classical CNN models.
Doi: https://doi.org/10.54216/JISIoT.160106
Vol. 16 Issue. 1 PP. 61-74, (2025)
Adverse Drug Reactions (ADRs) are very hazardous to patients. ADR detection, which aims to distinguish such reactions automatically, is therefore an intensive area of study for public health monitoring. Detecting ADRs provides the most significant information for determining patients' opinions on certain drugs. As patients can experience predicted and occasionally unpredicted negative results from taking some drugs, late detection of ADRs may pose life-threatening dangers to patients and significant financial, social, and legal consequences for regulatory agencies and manufacturing companies. The usage of medical data, such as patient states and electronic health records (EHR), has become normal in offering a richer understanding of health services and assisting ADR analysis. Developments in deep learning (DL) and machine learning (ML) have given several analytic models the potential to apply higher-dimensional data to predict adverse effects. In this study, we present a Hippopotamus Optimizer-Based Feature Selection for Adverse Drug Reaction Detection Using a Variational Autoencoder (HOFS-ADRDVAE) model. The main intention of the HOFS-ADRDVAE model is to provide an automatic system for the detection of ADRs using state-of-the-art techniques. Initially, the data normalization stage employs min-max normalization to convert input data into a beneficial format. In addition, the feature selection process is executed by the hippopotamus optimization (HO) algorithm. Besides, the proposed HOFS-ADRDVAE model designs a variational autoencoder (VAE) technique for the classification procedure. At last, the Hunger Games search (HGS) algorithm-based hyperparameter selection process is executed to optimize the classification results of the VAE system. A wide-ranging experiment was implemented to demonstrate the performance of the HOFS-ADRDVAE method. The experimental outcomes showed that the HOFS-ADRDVAE model achieves improvement over other existing methods.
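The min-max normalization step mentioned above maps each feature column into [0, 1]; a minimal sketch:

```python
def min_max_normalize(column):
    """Scale a numeric column to [0, 1]: x' = (x - min) / (max - min)."""
    lo, hi = min(column), max(column)
    if hi == lo:                     # constant column: map every value to 0.0
        return [0.0 for _ in column]
    return [(x - lo) / (hi - lo) for x in column]
```

For example, the column `[10, 20, 30]` normalizes to `[0.0, 0.5, 1.0]`.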
Doi: https://doi.org/10.54216/JISIoT.160107
Vol. 16 Issue. 1 PP. 75-85, (2025)
Feature selection (FS) is a crucial preprocessing step in data mining to eliminate redundant or irrelevant features from high-dimensional data. Many optimization algorithms for FS lack balance in their search processes. This paper proposes a hybrid algorithm, the Artificial Hummingbird Algorithm based on the Genetic Algorithm (AHA-GA), to address this imbalance and solve the FS problem. The main goal of AHA-GA is to select the most crucial features to improve overall model classification. UCI datasets are used to assess the performance of the proposed FS method. The proposed feature selection algorithm was compared with five feature selection optimization algorithms: BWOAHHO, HSGW, WOA-CM, BDA-SA, and ASGW. AHA-GA achieved a classification accuracy of 96% across 18 datasets, which was higher than BWOAHHO (93.2%), HSGW (92.5%), WOA-CM (94.4%), BDA-SA (93%), and ASGW (91.6%). Comparing the proposed AHA-GA algorithm to the other five algorithms in terms of selected attribute size, the average feature sizes were as follows: AHA-GA (15.10889), BWOAHHO (16.74222), HSGW (19.43111), WOA-CM (17.05389), BDA-SA (17.275), and ASGW (19.7585). The statistical and experimental tests demonstrated that the proposed AHA-GA performs better than competitive algorithms in selecting effective features.
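The paper's abstract does not state its fitness function, but wrapper feature-selection optimizers of this kind commonly minimize a weighted sum of classification error and relative subset size. A hedged sketch of that standard objective, where the weight `alpha` is an assumed value, not one taken from the paper:

```python
def fs_fitness(error_rate, n_selected, n_total, alpha=0.99):
    """Typical wrapper feature-selection objective (lower is better):
    trade classification error against subset size. alpha is an assumed
    weight strongly favouring accuracy over compactness."""
    return alpha * error_rate + (1 - alpha) * (n_selected / n_total)
```

A candidate with 4% error using 15 of 30 features scores 0.99 * 0.04 + 0.01 * 0.5 = 0.0446; a rival with the same error but fewer features scores lower and is preferred.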
Doi: https://doi.org/10.54216/JISIoT.160108
Vol. 16 Issue. 1 PP. 86-101, (2025)
Vehicular Ad Hoc Networks (VANETs) are a special type of ad hoc network deployed on demand. Here the nodes represent vehicles, which communicate with each other to ensure reliable, secure, and safe driving. Since the environment is open, ensuring secure routing is always a challenging task. Routing is essential in ad hoc networks because it always carries road-safety information; however, it is often affected by attacks. The black hole attack is one in which malicious nodes (black hole vehicles) advertise themselves as having the shortest path to the destination, thereby attempting to disrupt the entire environment. In this paper, a multi-dimensional trust-based data dissemination mechanism is proposed. The main objective is to ensure authentication by eliminating the black hole attack. The proposed method makes use of multiple trust dimensions, such as direct, indirect, integrity, intimacy, and mobility trust, over the Dynamic Source Routing (DSR) protocol, by which authentication can be achieved. Simulation results show that the proposed model works efficiently compared with existing models.
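One plausible way to combine the five trust dimensions named above is a weighted sum with a black-hole threshold. The weights and threshold in this sketch are illustrative assumptions, not values from the paper:

```python
def aggregate_trust(direct, indirect, integrity, intimacy, mobility,
                    weights=(0.3, 0.2, 0.2, 0.15, 0.15)):
    """Weighted combination of five trust dimensions, each scored in [0, 1].
    The weights are illustrative and sum to 1."""
    scores = (direct, indirect, integrity, intimacy, mobility)
    return sum(w * s for w, s in zip(weights, scores))

def is_suspected_black_hole(trust_score, threshold=0.5):
    """A node whose aggregate trust falls below the (assumed) threshold
    is excluded from DSR route selection."""
    return trust_score < threshold
```

A node that advertises attractive routes (high direct trust) but scores poorly on the other dimensions still falls below the threshold, which is the point of using multiple trust sources.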
Doi: https://doi.org/10.54216/JISIoT.160109
Vol. 16 Issue. 1 PP. 102-117, (2025)
The Internet of Things (IoT) aims to provide connectivity between all computing entities. However, this facilitates cyberthreats, which exploit vulnerabilities that persist over time. The zero-day threat is one such vulnerability; it can result in zero-day attacks that are destructive to network security and to an enterprise. Such an attack may compromise critical infrastructure, national security, and even personal privacy, with far-reaching consequences. To alleviate these risks, organizations and manufacturers should prioritize proactive security measures, involving robust authentication mechanisms, ongoing monitoring, and timely software updates, to defend the IoT ecosystem from emerging threats. In the present scenario, deep learning (DL)-based models have improved robustness in learning from data, since they can extract knowledge from non-linear data to identify unknown information. The study presents a Robust Zero-Day Attack Detection with Optimal Deep Learning (RZDAD-ODL) technique for the IoT framework. The primary intention of the RZDAD-ODL model lies in the automatic and effectual detection of zero-day attacks in the IoT framework. In the presented RZDAD-ODL technique, the honey badger algorithm (HBA) is used for the optimal selection of features. Besides, the RZDAD-ODL technique exploits the conditional variational autoencoder (CVAE) model for attack detection, and its parameter tuning process is performed using the rider optimization algorithm (ROA). The experimental results of the RZDAD-ODL system are validated on a benchmark dataset. Extensive comparison studies reported the better attack detection performance of the RZDAD-ODL model over other current techniques.
Doi: https://doi.org/10.54216/JISIoT.160110
Vol. 16 Issue. 1 PP. 118-131, (2025)
Car crowd management refers to the process of efficiently and safely managing the movement and flow of cars in crowded areas, such as parking lots, traffic intersections, event venues, and busy streets. Effective car crowd management is essential to ensure smooth traffic flow, prevent accidents, reduce congestion, and optimize the utilization of available parking spaces. It is a critical aspect of urban planning and traffic management to enhance the overall transportation experience and safety for both drivers and pedestrians. This study presents an artificial system, built with deep learning methods, for detecting cars in streets and at traffic intersections and for counting them using the YOLOv8 algorithm. The proposed system was trained on three datasets to test the algorithm's ability to determine the number of cars in each direction of a traffic intersection and then give priority to the direction most crowded with cars, followed by the progressively less crowded ones. The system reached a high car-detection accuracy of 98%, from which we conclude that the YOLOv8 algorithm is suitable for solving the problem of determining traffic priority by identifying places of congestion with high accuracy.
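The priority-assignment step (busiest direction first) reduces to sorting directions by detected car count. A minimal sketch, with the per-direction counts assumed to come from the object detector:

```python
def lane_priority(counts):
    """Order traffic-signal phases by detected car count, busiest first.
    counts: {direction: number of cars detected in that approach}."""
    return sorted(counts, key=counts.get, reverse=True)
```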
Doi: https://doi.org/10.54216/JISIoT.160111
Vol. 16 Issue. 1 PP. 132-141, (2025)
A vehicular ad hoc network (VANET) is an innovative technology that has attracted many researchers and the industrial sector. The increase in vehicle movement and the requirement for effective traffic management systems have resulted in the development of VANETs. The Super Cluster Head based Efficient Traffic Control (SCHETF) model aims to alleviate traffic congestion and decrease energy consumption in VANETs through a novel integration of Cluster Head (CH) election, cluster gateway formation, and effective data transmission. SCHETF utilizes a parameter-driven CH election process that considers factors such as network connectivity, distance, speed, and trust levels. This approach guarantees the most suitable CH selection, reducing energy expenditure while enhancing network efficiency. The model assesses network connectivity through indicators such as traffic flow and lane weights, ensuring precise determination of link reliability. Metrics for distance and speed are normalized to evaluate the changing behavior of vehicles, while trust ratings are assigned based on historical and community information to improve reliability. The creation of cluster gateways reduces unnecessary cluster formations by implementing Cluster Gateway Creation (CGC) at strategic sites, lessening communication load and boosting cluster stability. Efficient data transmission is accomplished by appointing several Cluster Gateways (CGWs) within clusters. A backoff timer mechanism gives priority to the CGW that is farthest from the CH for message forwarding, avoiding unnecessary repetitions and guaranteeing effective message dispatch. The model's smart clustering and gateway strategies lessen signaling load during handovers and enhance resource management in dynamic vehicular settings. The SCHETF model offers a thorough framework for tackling the challenges faced by VANETs, providing scalable and energy-efficient communication options.
This improves data distribution, assures dependable connectivity, and plays a crucial role in the progress of intelligent transportation systems. The model has been implemented through experimentation in Network Simulator 2 (NS2). The parameters considered in this study encompass energy efficiency, throughput, packet delivery ratio, end-to-end delay, packet loss, and routing overhead. To undertake a comparative study, the developed SCHETF findings are compared to earlier approaches such as the Evolutionary Algorithm-based Vehicular Clustering Technique (EAVCT), Region Collaborative Management for Dynamic Clustering (RCMDC), and the Novel Hypergraph Clustering Model (NHGCM). The outcomes indicate that the suggested SCHETF strategy outperforms previous methods.
Doi: https://doi.org/10.54216/JISIoT.160112
Vol. 16 Issue. 1 PP. 142-151, (2025)
The fast growth of artificial intelligence technologies, especially language processing technology, has blurred the line between human-generated text and chatbot-generated messages. Recognizing which source generated a text is essential for applications such as information generation and manipulated-text detection, in order to guarantee authenticity between communicating parties. This research applies a set of machine learning models to identify text as either human-written or chatbot-generated. The methodology starts with a dataset including text generated by different Large Language Models (LLMs) along with human-written text. Tf-Idf vectorization was then used to compute word weightings and represent the text numerically. Different Machine Learning (ML) models were then leveraged to recognize whether a human or a chatbot generated a text: Logistic Regression, Random Forest, Decision Tree, Gradient Boosting, Naïve Bayes, and XGBoost. Accuracy, precision, recall, and F1-score were used to evaluate the system. The dataset was first split into 80% for training and 20% for testing. Of all the implemented models, Random Forest performed best, with an accuracy of 88%; Logistic Regression reported a close accuracy of 85%. The Random Forest model showed an 8% improvement over previous studies that reported an accuracy of 80%. Confusion matrices revealed that the Random Forest model provided high precision and recall, minimizing misclassification of human or chatbot text. The research focused on studying the ability of ML models to distinguish human- from chatbot-generated text. The results showed that the RF model was the best among the tested models, with 88% accuracy, indicating the potential usage of such models in real-world applications that require confidence in human authorship.
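The Tf-Idf weighting used to represent the text numerically can be sketched in a few lines of standard-library Python; this variant uses raw term frequency with idf = ln(N/df), whereas real pipelines typically rely on a library vectorizer with smoothing:

```python
import math
from collections import Counter

def tf_idf(corpus):
    """Minimal TF-IDF: corpus is a list of token lists; returns one
    {term: weight} dict per document, with idf = ln(N / df)."""
    n_docs = len(corpus)
    df = Counter()                       # document frequency per term
    for doc in corpus:
        df.update(set(doc))
    vectors = []
    for doc in corpus:
        tf = Counter(doc)
        vectors.append({t: (c / len(doc)) * math.log(n_docs / df[t])
                        for t, c in tf.items()})
    return vectors
```

A term that appears in every document (df = N) gets weight ln(1) = 0, so only discriminative terms carry weight into the downstream classifier.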
Doi: https://doi.org/10.54216/JISIoT.160113
Vol. 16 Issue. 1 PP. 152-165, (2025)
This paper proposes an enhanced Non-Dominated Sorting Genetic Algorithm II (NSGA-II) to optimize IoT service composition by incorporating national energy consumption requirements and user experience, areas often overlooked in traditional models that primarily focus on time, cost, and quality. The original NSGA-II algorithm is prone to premature convergence and local optima issues during population iteration. To address these limitations, we introduce a novel evaluation model and improve the elite retention strategy of the NSGA-II algorithm. The improved algorithm balances exploration and exploitation through dynamic crowding distance adjustment and adaptive selection pressure, enhancing diversity and avoiding local optima. Experimental results demonstrate that the I-NSGA algorithm not only reduces running time by 5.916% but also achieves a smoother Pareto surface, indicating a more optimal distribution of solutions. The novelty of this approach lies in its comprehensive inclusion of energy consumption and user experience, its timeliness in addressing emerging IoT optimization challenges, and its relevance to current IoT service composition needs. This validates the effectiveness and advancement of the proposed model and algorithm, providing a robust and efficient solution for IoT service composition optimization.
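The crowding-distance measure that the dynamic adjustment described above builds on can be sketched as follows; this is the standard NSGA-II computation, not the authors' modified version:

```python
def crowding_distance(front):
    """NSGA-II crowding distance for a list of objective vectors.
    Boundary solutions get infinity; interior ones accumulate the
    normalized side length of the surrounding cuboid per objective."""
    n = len(front)
    dist = [0.0] * n
    n_obj = len(front[0])
    for m in range(n_obj):
        order = sorted(range(n), key=lambda i: front[i][m])
        dist[order[0]] = dist[order[-1]] = float("inf")
        span = front[order[-1]][m] - front[order[0]][m]
        if span == 0:                # all equal on this objective
            continue
        for j in range(1, n - 1):
            dist[order[j]] += (front[order[j + 1]][m]
                               - front[order[j - 1]][m]) / span
    return dist
```

Selection then prefers solutions with larger crowding distance within a rank, which is the diversity pressure the improved elite retention strategy tunes.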
Doi: https://doi.org/10.54216/JISIoT.160114
Vol. 16 Issue. 1 PP. 166-175, (2025)
Wireless sensor networks have become a vital component of the infrastructure for many modern applications. With their increasing use, the security challenges facing these networks are escalating, and with the rapid advancement of wireless communication technology, these networks are exposed to increasing, complex, and continuous threats. Our research is characterized by innovation in security technology to enhance protection, repel attacks, and detect intrusions; among these innovations, intrusion detection systems based on machine learning stand out as a creative and new solution. In this research we highlight the effectiveness of different machine learning algorithms, such as supervised and unsupervised learning, in detecting anomalies and intrusions within wireless sensor networks. Our goal focuses on enhancing the security of wireless sensor networks (WSNs) by adopting intrusion detection systems (IDS) based on machine learning techniques, with a focus on using the WSN-DS dataset. The results of this research showed that machine learning models can improve the security efficiency of wireless sensor networks, achieving accuracy ranging from 91% to 99.7% and testing times ranging from 0.006 to 0.1249, which enhances the ability to detect and respond to threats effectively in real time.
Doi: https://doi.org/10.54216/JISIoT.160115
Vol. 16 Issue. 1 PP. 176-188, (2025)
Nowadays, vehicular communication is used in intelligent transportation applications. The number of vehicles used in a particular region has greatly increased energy consumption, computation delay, and computation overhead. In this paper, Multi-Objective Optimization in Satellite-Assisted UAVs (MO-SAUAVs) is proposed under an improved Ant Colony Optimization (IACO) algorithm. The procedures considered for the multi-objective optimization are optimal logistics distribution, path-prediction-based pheromone deposition, and evaporation. Using this method, effective region selection for the UAVs is performed, which improves network energy efficiency by decreasing energy consumption and delay. The simulation is performed in NS2, and the proposed MO-SAUAVs method is compared with the TA-SAUAVs and PL-SAUAVs methods according to different parameters. The results show that the proposed MO-SAUAVs method achieves lower computation delay (70 ms to 110 ms), higher energy efficiency (6% to 16%), lower energy consumption (7% to 14%), and lower computation overhead (500 to 700 packets) when compared with TA-SAUAVs and PL-SAUAVs.
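The pheromone deposition and evaporation step named above follows the standard ACO update rule; a minimal sketch, where the evaporation rate `rho` and the deposit values are illustrative, not taken from the paper:

```python
def update_pheromone(tau, delta, rho=0.1):
    """Standard ACO pheromone update on each edge:
    tau <- (1 - rho) * tau + delta, where rho is the evaporation rate
    and delta is the pheromone deposited by ants traversing the edge."""
    return {edge: (1 - rho) * tau[edge] + delta.get(edge, 0.0)
            for edge in tau}
```

Edges on predicted (good) paths receive deposits and stay attractive, while unused edges decay toward zero, steering subsequent region selection.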
Doi: https://doi.org/10.54216/JISIoT.160116
Vol. 16 Issue. 1 PP. 189-198, (2025)
The Internet of Things (IoT) is an intelligent combination of embedded systems, cloud computing, and wireless communications. However, data privacy and leakage problems are considered the major obstacles to deploying IoT devices in real-time fields. Moreover, Distributed Denial of Service (DDoS) attacks on IoT devices have recently surged, exposing them to numerous threat complications. For this reason, prompt detection of these attacks plays a pivotal role in safeguarding users' data. Machine and deep learning models are engaged to design intelligent systems that provide a secure environment and safeguard the network against various attacks. However, the computational overhead of deep learning models hampers their deployment in the IoT-cloud environment. To tackle this issue, the present article proposes a novel hybrid-learning-based detection system called CAT-FEED-NETS, which incorporates a deep feed-forward neural network (DFFNN) whose hyperparameters are tuned by the Cat Swarm Intelligence algorithm. Comprehensive trials and analyses are performed using the NSL-KDD and UNSW datasets, and quality measurements such as accuracy, precision, recall, F1-score, and model building time (MBT) are evaluated and analyzed. Evaluation results are weighed against various DL algorithms, with the suggested model exhibiting better results than the other models, producing 0.96 accuracy, 0.956 precision, 0.955 recall, and an MBT of 0.9834 s. The proposed framework proved its superiority in predicting cloud attacks over the other existing frameworks.
Doi: https://doi.org/10.54216/JISIoT.160117
Vol. 16 Issue. 1 PP. 199-210, (2025)
A Mobile Ad-hoc Network (MANET) is a structure of dynamic mobile network devices with no fixed architecture. Due to the network's constantly changing environment, characterized by frequent changes in its topology, routing becomes a major challenge in MANETs and can reduce overall network efficacy. As the routing protocol plays a vital role in a MANET, an energy-efficient routing model can enhance network longevity with a minimal rate of energy consumption. This paper uses the Temporally Ordered Routing Algorithm (TORA) to attain a higher scalability rate and an Elephant Herding Optimization (EHO) model to employ energy-efficient routing protocol features. The computations of the proposed model include the length of the route (LR) in optimal route selection and the energy level of routes (ER). It formulates the routing problem as an optimization issue and incorporates EHO for route selection, enhancing the weighted rate of LR and ER. The experiments are carried out using the NS-3 simulation tool, evaluating factors such as latency, packet success rate, throughput, reliability, and energy depletion rate. Through a comparative analysis of the results with previous works, the effectiveness of the proposed model is demonstrated.
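The weighted combination of route length (LR) and route energy (ER) can be illustrated as a normalized scoring function; the weight `w` and the normalization scheme below are assumptions for illustration, not the paper's exact formulation:

```python
def route_score(route_length, route_energy, max_length, max_energy, w=0.5):
    """Illustrative weighted objective over the two criteria in the text:
    normalized route length (LR, shorter is better) and residual route
    energy (ER, higher is better). w is an assumed weight."""
    lr = 1 - route_length / max_length   # shorter route -> higher score
    er = route_energy / max_energy       # more residual energy -> higher
    return w * lr + (1 - w) * er

def best_route(routes, max_length, max_energy):
    """routes: {name: (length, energy)}; returns the highest-scoring route,
    i.e. the candidate an optimizer such as EHO would converge toward."""
    return max(routes,
               key=lambda r: route_score(*routes[r], max_length, max_energy))
```

With equal weights, a longer route can still win if its residual energy advantage outweighs the extra hops, which is the trade-off the optimizer searches over.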
Doi: https://doi.org/10.54216/JISIoT.160118
Vol. 16 Issue. 1 PP. 211-222, (2025)
Alzheimer’s disease (AD) is a serious disease distressing society. AD is a complex disease associated with many risk factors, such as aging, genetics, head trauma, and vascular disease. AD is also influenced by environmental factors such as heavy metals and trace metals. The pathology of AD, including amyloid-β peptide (Aβ) protein, neurofibrillary tangles (NFTs), and synaptic loss, is still not fully understood. There are many explanations for the causes of AD: some hold that cholinergic dysfunction is a major risk factor for Alzheimer's disease, whereas others believe that abnormalities in the production and processing of Aβ protein are the primary cause. However, there is currently no accepted hypothesis explaining the pathogenesis of AD. Magnetic resonance imaging is used to diagnose Alzheimer's disease. Our new AD diagnosis model achieved 99.77% accuracy with 0.2% efficiency loss and outperformed VGG16, MobileNetV2, and Inception V3 without the Adam optimizer and folder hierarchy.
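For reference, a classification accuracy of the kind reported (99.77%) is computed from a confusion matrix over the test set. The counts below are hypothetical, chosen only so the arithmetic produces a similar figure.

```python
def accuracy(cm):
    # cm: square confusion matrix, rows = true class, cols = predicted.
    correct = sum(cm[i][i] for i in range(len(cm)))  # diagonal = hits
    total = sum(sum(row) for row in cm)
    return correct / total

# Hypothetical 2-class (AD vs. control) confusion matrix:
# 3 misclassified scans out of 1300.
cm = [[520, 1],
      [2, 777]]
acc = accuracy(cm)  # 1297/1300 ≈ 0.9977
```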
Doi: https://doi.org/10.54216/JISIoT.160119
Vol. 16 Issue. 1 PP. 223-232, (2025)
Natural Language Processing (NLP)-driven applied linguistics for sarcasm detection employs computational models to understand and identify sarcastic expressions in text. This interdisciplinary approach integrates linguistic principles with advanced NLP techniques to correctly identify the subtle and nuanced cues indicative of sarcasm. It includes computational approaches such as linguistic feature extraction, machine learning models, and sentiment analysis. Furthermore, deep learning (DL) algorithms, including transformers and recurrent neural networks (RNNs), hold significant potential for capturing the complex linguistic nuances inherent in sarcastic expression. These approaches learn hierarchical representations of text, enabling them to capture the context dependency crucial for accurately detecting sarcasm. NLP-driven applied linguistics for sarcasm detection shows great potential in various domains, namely social media analysis, online content moderation, and customer feedback interpretation. By automating sarcasm detection, such systems can enhance communication understanding, improve sentiment analysis accuracy, and contribute to better decision-making in various contexts. This study develops an automated Sarcasm Detection technique using the Artificial Hummingbird Algorithm with Deep Learning (ASD-AHADL). The ASD-AHADL technique applies an optimal DL model for detecting sarcastic content. To achieve this, it first performs data preprocessing and BERT-based word embedding. The ASD-AHADL technique then uses an attention-gated recurrent unit long short-term memory (AGRU-LSTM) network for the sarcasm detection process. Finally, an AHA-based tuning process fine-tunes the parameters of the DL algorithm. The ASD-AHADL technique has been experimentally evaluated on a social media dataset, and the outcomes indicate that it performs significantly better than other approaches.
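The attention gating at the heart of an AGRU-LSTM classifier can be illustrated by attention-weighted pooling over encoder hidden states: each token's hidden vector is weighted by a softmax over learned relevance scores before classification. The toy vectors and scores below are assumptions for demonstration, not the ASD-AHADL implementation.

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of raw scores.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention_pool(hidden_states, scores):
    # hidden_states: one vector per token (e.g. from a GRU/LSTM encoder);
    # scores: one raw relevance score per token (e.g. a learned layer).
    weights = softmax(scores)
    dim = len(hidden_states[0])
    # Weighted sum of token vectors -> a single sentence representation.
    return [sum(w * h[d] for w, h in zip(weights, hidden_states))
            for d in range(dim)]

# Toy 3-token sequence with 2-dimensional hidden states.
h = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
pooled = attention_pool(h, [0.0, 0.0, 0.0])  # equal scores -> plain mean
```

Raising one token's score concentrates the pooled representation on that token, which is how the attention gate lets the model focus on the words that signal sarcasm.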
Doi: https://doi.org/10.54216/JISIoT.160120
Vol. 16 Issue. 1 PP. 233-244, (2025)