A wireless sensor network (WSN) monitors every aspect of an area of interest by detecting surrounding physical phenomena with sensor nodes and transferring the information to the gateway through the corresponding system. Several researchers have introduced localization methods to achieve high localization accuracy. Intelligent optimization techniques have attracted many researchers owing to advantages such as strong optimization capability and few parameters, and have been used to improve the localization performance of the DV-Hop method. Sink node localization in WSNs using metaheuristics applies optimization techniques inspired by human behavior or natural phenomena to determine the geographical coordinates of the sink nodes within the network coverage region. By leveraging metaheuristics, WSNs can achieve better localization performance, especially in dynamic or complex environments, improving the efficiency and reliability of network management and data transmission. In this view, this manuscript develops a Dung Beetle Optimization based Sink Node Localization Approach (DBO-SNLA) for WSN. In the DBO-SNLA technique, the DBO algorithm is based on the social behavior of dung beetle populations and is developed with five update rules that assist in finding high-quality solutions. In addition, the DBO-SNLA technique addresses the problem of determining the sink node location with the lowest localization error once data are transferred wirelessly between nodes. Finally, the localization errors are calculated and the locations of the different unknown nodes are computed. A detailed set of simulations is carried out to examine the performance of the DBO-SNLA technique. The empirical analysis shows the superiority of the DBO-SNLA method over other techniques.
DOI: https://doi.org/10.54216/JISIoT.130101
Vol. 13 Issue. 1 PP. 08-20, (2024)
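As a rough illustration of the optimization step described above, the sketch below (Python, not from the paper) minimizes a DV-Hop-style localization error for one unknown node; the anchor coordinates and hop-distance estimates are hypothetical, and a simple population-based random search stands in for the dung beetle update rules.

```python
import numpy as np

# Hypothetical anchor coordinates and DV-Hop distance estimates to one unknown node.
anchors = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
est_dist = np.array([70.0, 75.0, 72.0, 68.0])  # hop count * average hop size

def localization_error(pos):
    """Sum of absolute differences between estimated and geometric anchor distances."""
    return np.abs(np.linalg.norm(anchors - pos, axis=1) - est_dist).sum()

# Population-based random search standing in for the DBO update rules.
rng = np.random.default_rng(0)
population = rng.uniform(0, 100, size=(30, 2))
best = min(population, key=localization_error)
for _ in range(200):
    candidates = np.clip(best + rng.normal(0, 5, size=(30, 2)), 0, 100)
    challenger = min(candidates, key=localization_error)
    if localization_error(challenger) < localization_error(best):
        best = challenger
print("estimated node position:", best)
```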
This study investigates the challenges to digital technology adoption in Malaysia's agriculture sector using the DEMATEL (Decision-Making Trial and Evaluation Laboratory) approach, which provides a comprehensive understanding of the interdependencies among the barriers. The research objectives are to determine the cause-and-effect relationships among the barriers to digital agriculture using DEMATEL and to recommend the best ways to overcome the obstacles to using digital technology. The findings reveal that lack of skills, lack of technology, high cost, infrastructure and connectivity, and resistance to change fall in the cause group, while limited locality, data privacy and security concerns, low level of education, market access, and regulatory and policy issues fall in the effect group. The findings are used to provide policymakers and stakeholders with practical recommendations aimed at addressing the identified barriers and promoting the adoption of digital technologies in Malaysian agriculture. The study therefore offers recommendations for the most important obstacles found, namely improving infrastructure and implementing financial assistance mechanisms. Overall, this research makes a significant contribution to the subject of agriculture and sheds light on the difficulties associated with implementing new technologies in Malaysia's agriculture industry.
DOI: https://doi.org/10.54216/JISIoT.130102
Vol. 13 Issue. 1 PP. 21-30, (2024)
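For readers unfamiliar with the DEMATEL computation used above, the following Python sketch (not taken from the study) works through the standard steps on a hypothetical 4x4 direct-influence matrix: normalization, the total-relation matrix, and the prominence/relation values that split factors into cause and effect groups.

```python
import numpy as np

# Hypothetical 4x4 direct-influence matrix among barriers (0 = none ... 4 = very high).
D = np.array([[0, 3, 2, 1],
              [2, 0, 3, 2],
              [1, 2, 0, 3],
              [1, 1, 2, 0]], dtype=float)

# Normalize by the largest row/column sum, then compute the total-relation matrix T = N(I - N)^-1.
s = max(D.sum(axis=1).max(), D.sum(axis=0).max())
N = D / s
T = N @ np.linalg.inv(np.eye(len(D)) - N)

R, C = T.sum(axis=1), T.sum(axis=0)
prominence, relation = R + C, R - C  # relation > 0 -> cause group, < 0 -> effect group
for i, (p, r) in enumerate(zip(prominence, relation)):
    group = "cause" if r > 0 else "effect"
    print(f"barrier {i}: prominence={p:.2f}, relation={r:+.2f} ({group})")
```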
The integration of the Internet of Things (IoT) into health systems has transformed healthcare delivery, providing doctors and patients with continuous on-demand services. However, this integration raises concerns regarding data accuracy and potential security risks. This research presents a smart identity management framework designed for IoT- and cloud-computing-based personalized healthcare systems. The purpose is to strengthen authentication processes while limiting security threats through the exploitation of multimodal encrypted biometric features. The suggested approach incorporates biometric-based continuous authentication together with federated and centralized identity access strategies. To safeguard patient information in the cloud, it combines electrocardiogram (ECG) and photoplethysmogram (PPG) signals for authentication, further protected by homomorphic encryption (HE). A machine learning (ML) model was used to assess the system's feasibility on a dataset of 20 users in various seating configurations. The fused biometric framework outperformed standalone ECG- or PPG-based approaches, recognizing and authenticating every user with 100% accuracy. The proposed framework makes significant improvements to the privacy and security of personalized healthcare systems. It fulfills the essential security requirements and is nevertheless lightweight enough to run on low-end processors. It guarantees trustworthy authentication and protects against conventional security threats by utilizing multimodal biometric features and advanced encryption techniques.
DOI: https://doi.org/10.54216/JISIoT.130103
Vol. 13 Issue. 1 PP. 31-45, (2024)
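The abstract above centers on multimodal (ECG + PPG) authentication; the sketch below is only a minimal, hypothetical illustration of feature-level fusion feeding a classifier, with random placeholder features, a random forest standing in for the paper's ML model, and the homomorphic-encryption step omitted entirely.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)
n_users, samples_per_user = 20, 30

# Placeholder vectors standing in for per-window ECG and PPG descriptors.
ecg_feats = rng.normal(size=(n_users * samples_per_user, 12))
ppg_feats = rng.normal(size=(n_users * samples_per_user, 8))
labels = np.repeat(np.arange(n_users), samples_per_user)
# Make the toy features weakly user-dependent so the example is learnable.
ecg_feats += labels[:, None] * 0.5
ppg_feats += labels[:, None] * 0.3

# Feature-level fusion: concatenate the two modalities before classification.
fused = np.hstack([ecg_feats, ppg_feats])
Xtr, Xte, ytr, yte = train_test_split(fused, labels, test_size=0.3,
                                      stratify=labels, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(Xtr, ytr)
print("identification accuracy:", accuracy_score(yte, clf.predict(Xte)))
```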
The Internet of Things (IoT) offers an effective solution for monitoring air pollution, delivering real-time data and insights on air quality within a specific location. Air pollution poses a substantial risk to human health worldwide, with pollutants like nitrogen dioxide, particulate matter, ozone, and sulfur dioxide contributing to a range of cardiovascular and respiratory ailments. Monitoring air pollution levels is critical to understanding its effect on public health and the environment. Air pollution monitoring involves the systematic measurement and analysis of pollutant concentrations in the air through a network of monitoring stations equipped with instruments and sensors. These stations provide real-time data on air quality, allowing authorities to evaluate pollution levels, issue warnings, and implement strategies to alleviate its negative impact. Machine learning (ML) approaches are becoming more integrated into air pollution monitoring systems to enhance efficiency and accuracy. By analyzing vast quantities of information gathered from satellite imagery, monitoring stations, and other sources, ML approaches can detect patterns, forecast pollution levels, and pinpoint sources of pollution. This study introduces an Air Pollution Monitoring and Prediction using African Vulture Optimization Algorithm with Machine Learning (APMP-AVOAML) model in the IoT environment. The aim of the APMP-AVOAML methodology is to recognize and classify air quality levels in the IoT environment. The APMP-AVOAML technique encompasses a four-stage process. Firstly, min-max normalization is applied to scale the input data. Secondly, a harmony search algorithm (HSA) based feature selection process is executed. Thirdly, the extreme gradient boosting (XGBoost) model is utilized for air pollution prediction. Finally, an AVOA-based parameter selection process is exploited for the XGBoost model. To illustrate the performance of the APMP-AVOAML algorithm, a brief experimental study is made. The resultant outcomes infer that the APMP-AVOAML methodology achieves effective results.
DOI: https://doi.org/10.54216/JISIoT.130104
Vol. 13 Issue. 1 PP. 46-58, (2024)
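The four-stage pipeline described above can be outlined roughly as follows; in this hypothetical Python sketch, min-max scaling and XGBoost follow the abstract, while mutual-information feature selection stands in for the harmony search algorithm and a plain random search stands in for the AVOA parameter tuning.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.preprocessing import MinMaxScaler
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from xgboost import XGBClassifier

# Synthetic stand-in for IoT air-quality records (features -> air quality class).
X, y = make_classification(n_samples=600, n_features=12, n_informative=6,
                           n_classes=3, n_clusters_per_class=1, random_state=1)

X = MinMaxScaler().fit_transform(X)                             # stage 1: min-max normalization
X = SelectKBest(mutual_info_classif, k=8).fit_transform(X, y)   # stage 2: feature selection (HSA stand-in)

# Stages 3-4: XGBoost with a simple random search standing in for AVOA tuning.
rng = np.random.default_rng(0)
best_score, best_params = -np.inf, None
for _ in range(15):
    params = {"n_estimators": int(rng.integers(50, 300)),
              "max_depth": int(rng.integers(2, 8)),
              "learning_rate": float(rng.uniform(0.01, 0.3))}
    score = cross_val_score(XGBClassifier(**params), X, y, cv=3).mean()
    if score > best_score:
        best_score, best_params = score, params
print("best CV accuracy:", round(best_score, 3), "with", best_params)
```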
Timely detection and diagnosis of cardiovascular diseases is crucial to avoid health complications. In this study, an advanced procedure is used to classify changes in blood pressure by analyzing the waveforms inside the arterial system, where such variations can occur due to improper timing of intra-aortic balloon pump (IABP) control. Improper timing of balloon inflation and deflation relative to the heart valve can cause inaccurate pressures and potential injury. This investigation focuses on accurately recognizing and classifying irregularities in arterial blood pressure waveforms initiated by IABP mistiming. Accumulated blood pressure records are used in the process of providing information to an IABP trainer. The waveforms require pre-processing using image digitizing software to obtain automated identification. Undesirable image features are removed using wavelets in MATLAB. The resulting features are then used to develop a classification technique based on neural networks. The artificial neural network uses labeled data to properly detect irregularities in vascular blood pressure waveforms due to improper IABP timing. The validation proved able to appropriately recognize and classify such anomalies, indicating considerable potential to improve patient safety and treatment efficacy in the area of cardiovascular medicine.
DOI: https://doi.org/10.54216/JISIoT.130105
Vol. 13 Issue. 1 PP. 59-70, (2024)
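A minimal sketch of the wavelet denoising stage mentioned above, assuming a synthetic arterial-pressure-like trace in place of the digitized IABP waveforms; the denoised signal would then feed the neural-network classifier described in the abstract.

```python
import numpy as np
import pywt

# Synthetic arterial-pressure-like trace with additive noise, standing in for a digitized IABP waveform.
t = np.linspace(0, 4, 2000)
clean = 80 + 20 * np.sin(2 * np.pi * 1.2 * t) + 8 * np.sin(2 * np.pi * 2.4 * t)
noisy = clean + np.random.default_rng(0).normal(0, 3, t.size)

# Wavelet denoising: soft-threshold the detail coefficients and reconstruct the waveform.
coeffs = pywt.wavedec(noisy, "db4", level=5)
thr = np.std(coeffs[-1]) * np.sqrt(2 * np.log(noisy.size))   # universal threshold
coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
denoised = pywt.waverec(coeffs, "db4")[: noisy.size]

print("RMSE before denoising:", round(float(np.sqrt(np.mean((noisy - clean) ** 2))), 2))
print("RMSE after denoising: ", round(float(np.sqrt(np.mean((denoised - clean) ** 2))), 2))
```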
Cardiovascular diseases (CVD) stand as the leading cause of global mortality, claiming millions of lives annually. An electrocardiogram (ECG) records the heart's electrical activity based on the Internet of Things (IoT) and is crucial in detecting cardiac arrhythmias (CA), which are characterized by irregular heart rates and rhythms. Signals from the MIT-BIH Arrhythmia PhysioNet database are analyzed. This study proposes a hybrid approach merging Genetic Algorithm-Support Vector Machine (GSVM) and Particle Swarm Optimization-Support Vector Machine (PSVM) for CA classification. The study introduces an algorithm for categorizing ECG beats into six groups using Independent Component Analysis (ICA)-derived features. Optimal SVM settings are determined using the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO) on ICA features computed via non-parametric power spectral estimation. The research delves into the origins and methodologies of GA and PSO. Simulation results comparing GSVM and PSVM are presented, emphasizing PSVM's superior performance in accuracy, sensitivity, specificity, and positive predictivity. Detailed performance metrics, including sensitivity, specificity, positive predictivity, and accuracy percentages, are scrutinized and compared against the top classifier. The findings endorse PSVM's superiority over GSVM, indicating enhanced performance across multiple evaluation criteria.
DOI: https://doi.org/10.54216/JISIoT.130106
Vol. 13 Issue. 1 PP. 71-82, (2024)
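As a rough, hypothetical outline of the workflow above: ICA-derived features feed an SVM whose (C, gamma) settings are tuned by a search loop; here a plain random search stands in for the GA and PSO optimizers, and synthetic data stands in for the MIT-BIH beats.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import FastICA
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for per-beat ECG spectral features (six beat classes).
X, y = make_classification(n_samples=900, n_features=30, n_informative=12,
                           n_classes=6, n_clusters_per_class=1, random_state=7)

# ICA-derived features, as in the abstract.
X_ica = FastICA(n_components=10, random_state=7).fit_transform(X)

# Random search over (C, gamma) standing in for the GA/PSO optimization of SVM settings.
rng = np.random.default_rng(7)
best_score, best_params = -np.inf, None
for _ in range(20):
    C, gamma = 10 ** rng.uniform(-1, 3), 10 ** rng.uniform(-4, 0)
    score = cross_val_score(SVC(C=C, gamma=gamma), X_ica, y, cv=3).mean()
    if score > best_score:
        best_score, best_params = score, (C, gamma)
print(f"best CV accuracy {best_score:.3f} with C={best_params[0]:.3g}, gamma={best_params[1]:.3g}")
```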
This research introduces a novel approach to intelligent IoT-based audio signal processing for healthcare applications. Leveraging advanced feature extraction techniques such as Mel-Frequency Cepstral Coefficients (MFCC) and Wavelet Transform, combined with sophisticated classification models like Convolutional Neural Networks (CNNs) and Support Vector Machines (SVMs), the proposed method demonstrates superior performance in accurately classifying healthcare data. Through extensive experimentation and analysis, the method achieves high accuracy, precision, recall, and F1 score, while exhibiting robustness in discriminating between different classes and maintaining precision in classification, as evidenced by its high AUC-ROC and AUC-PR values. The ablation study provides insights into the significance of key components and parameters, offering guidance for further refinement and optimization of the method. Overall, the proposed method holds promise for revolutionizing healthcare management through proactive monitoring and intervention, leading to improved patient outcomes and healthcare delivery.
DOI: https://doi.org/10.54216/JISIoT.130107
Vol. 13 Issue. 1 PP. 83-98, (2024)
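A minimal, hypothetical sketch of the MFCC-plus-classifier branch described above, using synthetic tone clips in place of real healthcare audio and an SVM (the CNN branch is not shown); the per-clip mean of each MFCC coefficient serves as the feature vector.

```python
import numpy as np
import librosa
from sklearn.svm import SVC

sr = 16000
rng = np.random.default_rng(0)

def make_clip(freq):
    """Synthetic one-second clip standing in for a healthcare audio recording."""
    t = np.linspace(0, 1, sr, endpoint=False)
    return np.sin(2 * np.pi * freq * t) + 0.05 * rng.normal(size=sr)

# Two hypothetical classes (e.g., normal vs. abnormal sounds) as pure-tone surrogates.
clips = [make_clip(f) for f in [220, 230, 225, 440, 450, 445]]
labels = [0, 0, 0, 1, 1, 1]

# MFCC feature extraction: mean of each coefficient over time describes the clip.
features = [librosa.feature.mfcc(y=c, sr=sr, n_mfcc=13).mean(axis=1) for c in clips]
clf = SVC(kernel="rbf").fit(features, labels)
print("training accuracy:", clf.score(features, labels))
```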
Reducing the influence of significant noise components in the acquired raw ECG signal is essential for precise identification of cardiac arrhythmias (CA), which frequently present as irregularities in heart rate or rhythm. Preprocessing is used to remove noise and baseline drift from the ECG wave recorded via the Internet of Things (IoT). The denoised signal is then subjected to dimensionality reduction and feature extraction. To determine which classification method is more effective in detecting cardiac arrhythmias, this study compares two methods: an adaptive neuro-fuzzy inference system and an artificial feed-forward neural network trained with the back-propagation learning algorithm. The Adaptive Neuro-Fuzzy Inference System (ANFIS) analyses ICA features obtained by non-parametric power spectral estimation, while the Artificial Neural Network (ANN) classifier uses the ECG signal's morphological and statistical features to identify patterns. The construction of artificial feed-forward neural networks provides a rich framework for studying the back-propagation algorithm. Performance characteristics including sensitivity, specificity, accuracy, and positive predictivity are thoroughly examined. The Artificial Neural Feed-Forward Network (ANFFN) achieved an overall accuracy of 97.79%, sensitivity of 99.82%, specificity of 99.68%, and positive predictivity of 98.58%. ANFIS outperforms these metrics with an overall accuracy of 99.62%, specificity of 98.63%, and positive predictivity of 99.46%. With a classification accuracy of 99.82%, ANFIS proves to be the most effective classifier for identifying cardiac arrhythmias.
DOI: https://doi.org/10.54216/JISIoT.130108
Vol. 13 Issue. 1 PP. 99-110, (2024)
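The ANN branch of the comparison above can be sketched as below; this is a hypothetical Keras example with random placeholder features and labels, showing only a feed-forward network trained with back-propagation (the ANFIS branch is not shown).

```python
import numpy as np
from tensorflow import keras

# Placeholder morphological/statistical ECG features and beat-class labels.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 9)).astype("float32")
y = rng.integers(0, 4, size=500)  # e.g., four rhythm classes

# Feed-forward network trained with back-propagation (the ANN branch of the comparison).
model = keras.Sequential([
    keras.layers.Input(shape=(9,)),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(4, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
print("training accuracy:", model.evaluate(X, y, verbose=0)[1])
```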
Diabetes is currently an incurable disease, with millions of people suffering from it worldwide. Several variables, namely lack of education, crowded living conditions, obesity, and improper diet, are among the causes of the recent upsurge in diabetes cases. Other contributing factors include infections induced by bacteria or viruses, harmful compounds in food, autoimmune reactions, unhealthy lifestyles, and environmental pollution. Sight-threatening diabetic retinopathy (DR) is the most common retinal micro-vascular dysfunction, characterized by a disorder of the retinal blood vessels resulting in impaired vision. This IoT-based work applies the machine learning (ML) techniques K-Nearest Neighbor (KNN) and Support Vector Machine (SVM) to the classification of diabetic retinopathy, a topic that is still under research. Dataset preparation covers downsampling, labelling, image flattening, and format conversion. An advanced prognosis model is designed that combines the two machine learning techniques, SVM and KNN. This approach classifies diabetic retinopathy images into five subclasses, facilitating in-depth analysis. The findings show that the proposed solution is superior because of its higher classification accuracy and faster processing speed. Combining KNN with SVM preserves the robustness and accuracy that the SVM is known for. The paper also demonstrates a close linkage of clinical symptoms and blood sugar readings to an algorithmic diabetes mellitus (DM) prediction system based on IoT and ML approaches, which is another way in which the method outperforms existing classification methods. Among all the classifiers used in this project, the KNN classifier turned out to be the most accurate, with an accuracy rate of 93%. After rigorous training and testing, the algorithm performed with a 79% accuracy rate and consistently provided high-quality DM predictions.
DOI: https://doi.org/10.54216/JISIoT.130109
Vol. 13 Issue. 1 PP. 111-121, (2024)
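A minimal sketch of the flatten-then-classify flow described above, assuming random placeholder images and five severity grades; with such synthetic data the accuracies are meaningless, and the snippet only shows how KNN and SVM are applied side by side.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# Placeholder 32x32 grayscale fundus patches with five DR severity grades (0-4).
rng = np.random.default_rng(3)
images = rng.random((250, 32, 32)).astype("float32")
grades = rng.integers(0, 5, size=250)

# Dataset preparation as described: flatten each image into a feature vector.
X = images.reshape(len(images), -1)
Xtr, Xte, ytr, yte = train_test_split(X, grades, test_size=0.3, random_state=3)

for name, clf in [("KNN", KNeighborsClassifier(n_neighbors=5)), ("SVM", SVC(kernel="rbf"))]:
    clf.fit(Xtr, ytr)
    print(name, "test accuracy:", round(clf.score(Xte, yte), 3))
```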
This manuscript proposes Strategic Improved K-Means Clustering to simplify blood donor data analysis and distribution. The technique optimizes blood donor system resources via K-Means++ initialization, hierarchical clustering, and smart data dissemination. The paper begins with a comprehensive overview of clustering techniques and their healthcare applications and illustrates the need for contemporary blood donor data analysis methods in terms of cluster quality and resource allocation. Cluster purity, the silhouette coefficient, the Davies-Bouldin index, and other performance indicators are used to rigorously compare the recommended technique against 10 established clustering methods. The approach routinely satisfies these criteria, showing that it creates accurate, well-fitting groupings. An ablation study tests how much the enhanced initialization, hierarchical clustering, and strategic data placement each contribute to the whole. The study found that these components make the procedure dependable and successful for numerous sorts of data and that the approach may be applied to data beyond blood donor records. Hierarchical clustering provides important information about the dataset's hierarchical patterns, making clustering findings easier to grasp, while strategic data dissemination improves resource distribution. The recommended strategy is effective in emergencies and areas with changing blood needs. In conclusion, Strategic Improved K-Means Clustering evaluates and distributes blood donor data comprehensively; its flexibility, adaptability, and speed make it well suited for managing healthcare resources and making collective choices.
DOI: https://doi.org/10.54216/JISIoT.130110
Vol. 13 Issue. 1 PP. 122-134, (2024)
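The clustering and evaluation steps above map directly onto standard scikit-learn calls; the sketch below (synthetic donor-like records, not the study's data) runs k-means++ seeding alongside a hierarchical pass and reports the silhouette and Davies-Bouldin scores mentioned in the abstract.

```python
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans, AgglomerativeClustering
from sklearn.metrics import silhouette_score, davies_bouldin_score

# Synthetic donor-like records (e.g., age, donation frequency, recency, volume).
X, _ = make_blobs(n_samples=400, n_features=4, centers=5, random_state=11)

# K-Means with k-means++ seeding, plus a hierarchical pass for structural insight.
kmeans_labels = KMeans(n_clusters=5, init="k-means++", n_init=10, random_state=11).fit_predict(X)
hier_labels = AgglomerativeClustering(n_clusters=5).fit_predict(X)

for name, labels in [("k-means++", kmeans_labels), ("hierarchical", hier_labels)]:
    print(f"{name}: silhouette={silhouette_score(X, labels):.3f}, "
          f"Davies-Bouldin={davies_bouldin_score(X, labels):.3f}")
```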
This paper provides two different methods to diagnose osteoporosis in women. The first method is fractal analysis evaluated by CBCT at two bone locations (the mandible and the second cervical vertebra) to see if there is any correlation between the two, while the second method uses deep convolutional neural networks (DCNNs). One hundred eighty-eight patients' mandibular CBCT images were used, and DCNN models based on the ResNet-101 framework were employed. Dual X-ray absorptiometry of the hip and lumbar spine revealed that 139 of the 188 postmenopausal women tested had osteoporosis, whereas 49 had normal bone mineral density. The second cervical vertebra and the mandible were selected as regions of interest for fractal dimension (FD) analysis on the CBCT images. Measurement accuracy, intra- and inter-observer agreement, and correlations between the two data sets were all calculated. To evaluate osteoporosis, we used a segmented, three-phase approach. Stage 1 was devoted to the identification of mandibular bone slices; in Stage 2, the coordinates for the mandible's cross-sectional views were established; and Stage 3 calculated the thickness of the mandible bone, emphasizing osteoporotic variations. The average FD values within the mandibular region of interest were significantly lower in people with osteoporosis than in those with normal bone mineral density, whereas the two groups showed no significant difference in FD values at the second cervical vertebra. For the mandibular site, the area under the curve was 0.644 (P = 0.008), while for the vertebral site it was 0.531 (P = 0.720). DCNN training in the first stage yielded a training accuracy of 98.85%, the second stage reduced the L1 loss to 1.02 pixels, and the bone thickness computation in the last stage had a mean squared error of 0.8377. We conclude that FD was underutilized even though it distinguished between women with normal BMD and those with osteoporosis in the mandibular area. Additionally, even with small mandibular CBCT datasets, the results show the value of a modular transfer learning approach for osteoporosis detection.
DOI: https://doi.org/10.54216/JISIoT.130111
Vol. 13 Issue. 1 PP. 135-150, (2024)
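The fractal-dimension part of the study above is commonly computed by box counting; the following sketch estimates the fractal dimension of a hypothetical binarized region of interest (a random array standing in for a thresholded CBCT ROI), and is only a generic illustration rather than the authors' exact procedure.

```python
import numpy as np

def box_counting_fd(binary_image):
    """Estimate the fractal dimension of a binarized trabecular pattern by box counting."""
    size = min(binary_image.shape)
    sizes, counts = [], []
    s = size // 2
    while s >= 2:
        count = 0
        for i in range(0, size - size % s, s):
            for j in range(0, size - size % s, s):
                if binary_image[i:i + s, j:j + s].any():
                    count += 1
        sizes.append(s)
        counts.append(count)
        s //= 2
    # Slope of log(count) versus log(1/box size) gives the fractal dimension estimate.
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

# Placeholder binarized ROI standing in for a thresholded CBCT region of interest.
rng = np.random.default_rng(5)
roi = (rng.random((128, 128)) > 0.6).astype(np.uint8)
print("estimated fractal dimension:", round(box_counting_fd(roi), 3))
```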
Brain tumor classification from medical images is crucial for identification and therapy. However, brain tumors are complex and varied, making their categorization difficult. This work demonstrates a novel transfer learning method for brain tumor classification. We employ pre-trained Convolutional Neural Network (CNN) models and data augmentation approaches to extract meaningful information from medical images, fine-tuning the models on our dataset to uncover the hierarchical patterns that distinguish tumor types. Through data augmentation, the training sample becomes more diverse and richer, making the model more generic and robust. Extensive testing shows that the suggested procedure can reliably identify brain tumors, and our approach performs better than others in terms of accuracy, sensitivity, specificity, and precision. The technique improves brain tumor categorization and supports accurate clinical diagnosis; automated systems of this kind are one way for physicians to assist patients in selecting the best course of treatment, and researchers may further improve classification performance by incorporating modern imaging technology or domain-specific data. The Internet of Things (IoT) is helping to drive the development of complex real-time data collection, processing, and sharing systems, and these advances have transformed medical imaging: the proposed transfer learning system can identify brain cancer from medical images while benefiting from IoT-enabled data collection and processing. Data augmentation and pre-trained convolutional neural networks help to extract interpretable information from medical images, and IoT integration, together with the expanded training data set, improves the model's flexibility, resilience, and utility. Rapid advances in classification have made clinical diagnosis more efficient. Keywords: classification, deep learning, medical imaging, machine learning, transfer learning, tumor detection, image analysis.
DOI: https://doi.org/10.54216/JISIoT.130112
Vol. 13 Issue. 1 PP. 151-165, (2024)
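A minimal transfer-learning sketch in Keras, assuming a MobileNetV2 backbone (a lightweight stand-in; the paper's choice of pre-trained CNNs may differ), on-the-fly augmentation layers, and a small placeholder batch in place of labelled MRI slices.

```python
import numpy as np
from tensorflow import keras

num_classes = 3  # hypothetical tumor categories

# On-the-fly data augmentation enriches the small medical-imaging training set.
augment = keras.Sequential([
    keras.layers.RandomFlip("horizontal"),
    keras.layers.RandomRotation(0.1),
])

# Pre-trained backbone (frozen); only the new classification head is trained at first.
base = keras.applications.MobileNetV2(include_top=False, weights="imagenet",
                                      input_shape=(224, 224, 3))
base.trainable = False

inputs = keras.Input(shape=(224, 224, 3))
x = augment(inputs)
x = keras.layers.Rescaling(1.0 / 127.5, offset=-1.0)(x)   # MobileNetV2 expects inputs in [-1, 1]
x = base(x, training=False)
x = keras.layers.GlobalAveragePooling2D()(x)
outputs = keras.layers.Dense(num_classes, activation="softmax")(x)
model = keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# Placeholder batch standing in for labelled MRI slices (pixel values in [0, 255]).
X = np.random.randint(0, 256, size=(8, 224, 224, 3)).astype("float32")
y = np.random.randint(0, num_classes, size=8)
model.fit(X, y, epochs=1, verbose=0)
```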
Deep Learning (DL) is an emerging subfield within the larger discipline of machine learning. Research in this area is progressing at a rapid pace, and its discoveries are contributing to the advancement of technology. DL methods were developed with the intention of providing a general-purpose learning method that gradually learns features at multiple levels without relying on human-engineered features. Because of this, the system is able to learn intricate functions and directly map input to output using data acquired through the Internet of Things (IoT). This study emphasizes the application of Convolutional Neural Networks (CNNs), a subcategory of Deep Neural Networks (DNNs), and develops an efficient layered CNN for the classification of ECG arrhythmias. While Fully Connected Artificial Neural Networks (FC-ANNs), sometimes referred to as multilayer perceptron networks, are effective in categorising ECG arrhythmias, the optimization process for many classification networks takes a significant amount of computation time. In addition, engineered features determine the accuracy of ECG arrhythmia categorization. An improved CNN-based filtering, feature abstraction, and classification prototype is established in order to conduct an accurate analysis of an electrocardiogram (ECG). When measured against the ANN, the performance was found to have an accuracy of 99.6%. Consequently, the suggested CNN model is useful to doctors in arriving at a definitive diagnosis of AFL (atrial flutter), AFIB (atrial fibrillation), VFL (ventricular flutter), and VT (ventricular tachycardia). It includes denoising, feature extraction, and categorization as part of its functionality.
DOI: https://doi.org/10.54216/JISIoT.130113
Vol. 13 Issue. 1 PP. 166-176, (2024)
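A compact, hypothetical Keras sketch of a layered 1D CNN of the kind described above, trained on random placeholder windows standing in for denoised ECG segments labelled with four rhythm classes.

```python
import numpy as np
from tensorflow import keras

# Placeholder windows standing in for denoised single-lead ECG segments (four rhythm classes).
X = np.random.rand(200, 360, 1).astype("float32")   # 1-second windows at 360 Hz
y = np.random.randint(0, 4, size=200)                # AFL, AFIB, VFL, VT as class indices

# Layered 1D CNN: convolution/pooling blocks for feature abstraction, dense layers for classification.
model = keras.Sequential([
    keras.layers.Input(shape=(360, 1)),
    keras.layers.Conv1D(16, 5, activation="relu"),
    keras.layers.MaxPooling1D(2),
    keras.layers.Conv1D(32, 5, activation="relu"),
    keras.layers.MaxPooling1D(2),
    keras.layers.Flatten(),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(4, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
```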
In recent years, machine learning (ML) has shown significant impact in tackling complicated problems in application domains including healthcare, economics, ecology, the stock market, surveillance, and commercial applications. Machine learning techniques can deal with a wide range of data, uncover fascinating links, offer insights, and spot trends. ML can improve disease diagnosis accuracy, predictability, performance, and reliability. This paper reviews various machine learning techniques applied to different medical datasets and proposes an ensemble method to help in the early diagnosis of different diseases. The study compares existing machine learning techniques with the proposed ensemble method. The ensemble method uses the AdaBoost algorithm to combine the traits of decision trees, random forests, and support vector machines. Three feature selection techniques, Fisher's score, information gain, and a genetic algorithm, are used to select appropriate dataset features. The ensemble method also uses k-fold cross-validation (with k = 15) to validate results. SMOTE was employed to balance some of the datasets because they were quite unbalanced. All the methods used in this study are evaluated based on accuracy, AUC, recall, precision, and F1-score. The paper uses different medical datasets from the University of California Irvine repository and the Kaggle directory to compare machine-learning models with the proposed ensemble method. The encouraging results show that the ensemble method outperforms the existing machine-learning techniques. The paper thoroughly analyzes how machine learning is used in the medical industry, covering established technologies and their impact on medical diagnosis. Early diagnosis is needed to protect people from deadly diseases; hence, this study proposes an ensemble method that may be used to diagnose different diseases early.
DOI: https://doi.org/10.54216/JISIoT.130114
Vol. 13 Issue. 1 PP. 177-195, (2024)
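Scikit-learn's AdaBoost does not directly combine heterogeneous learners, so the sketch below approximates the described ensemble with a soft-voting combination of a boosted tree model, a random forest, and an SVM, wrapped with mutual-information feature selection (standing in for the three selectors named above), SMOTE, and 15-fold cross-validation on a public medical dataset; it is an illustrative assembly, not the authors' exact implementation.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier, VotingClassifier
from sklearn.svm import SVC
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline as ImbPipeline

# Public medical dataset as a stand-in; feature selection via information gain (mutual information).
X, y = load_breast_cancer(return_X_y=True)

ensemble = VotingClassifier(
    estimators=[
        ("ada_dt", AdaBoostClassifier(n_estimators=100, random_state=0)),  # boosted decision trees
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("svm", make_pipeline(StandardScaler(), SVC(probability=True, random_state=0))),
    ],
    voting="soft",
)

pipeline = ImbPipeline([
    ("select", SelectKBest(mutual_info_classif, k=15)),
    ("smote", SMOTE(random_state=0)),   # balance the minority class inside each training fold only
    ("model", ensemble),
])

cv = StratifiedKFold(n_splits=15, shuffle=True, random_state=0)
scores = cross_val_score(pipeline, X, y, cv=cv, scoring="accuracy")
print("15-fold CV accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))
```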
IoT devices produce a gigantic amount of data, and it has grown exponentially in recent years. To get insights from this multi-property data, machine learning has proved its worth across industry. The present paper provides an overview of the variety of data collected through IoT devices. The confluence of machine learning with IoT is also explained using the bibliometric analysis technique. This paper presents a systematic literature review using bibliometric analysis of data collected from Scopus and WoS. Academic literature for the last six years is used to explore research insights, patterns, and trends in the field of IoT using machine learning. This study analyses and assesses research from the last six years that applies machine learning in seven IoT domains: healthcare, smart cities, energy systems, industrial IoT, security, climate, and agriculture. Author- and country-wise citation analyses are also presented. VOSviewer version 1.6.18 is used to provide a graphical representation of the author citation analysis. This study may be quite helpful for researchers and practitioners in developing a blueprint of machine learning techniques in various IoT domains.
DOI: https://doi.org/10.54216/JISIoT.130115
Vol. 13 Issue. 1 PP. 196-224, (2024)
Precise and reliable loan status prediction is of the essence for financial institutions. However, the lack of real-world data and biases within that data can greatly impact the accuracy of machine learning models. Another challenge faced by loan status prediction models is class imbalance, where one category (such as approved loans) is much more common than another (such as defaulted loans), leading to predictions skewed towards the majority class. This study inspects Generative Adversarial Networks (GANs) as a means to augment the data and improve machine learning model performance. Several machine learning (ML) models, including but not limited to Support Vector Machines (SVM) and ensemble bagged trees, were employed on a Kaggle loan dataset (380 samples). Baseline training and testing accuracies were 86.9% and 86.3% (SVM) and 84.5% and 82.1% (ensemble). ActGAN (Activating Generative Networks) was then utilized to generate synthetic data points for both accepted and rejected loans. Retraining the models with the augmented data showed remarkable improvements: SVM accuracies for training and testing rose to 94.4% and 93.4%, while the ensemble models achieved 97.4% and 95.8%, respectively. Other ML models such as KNN, decision trees, and logistic regression were also explored and showed promising results in terms of accuracy compared to the state of the art. These findings suggest that GAN-based data augmentation can enhance the performance of loan status prediction. Future research could explore the impact of different GAN architectures and assess the general applicability of this approach.
DOI: https://doi.org/10.54216/JISIoT.130116
Vol. 13 Issue. 1 PP. 225-233, (2024)
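ActGAN itself is not shown here; as a minimal illustration of the augment-then-retrain workflow above, the sketch uses a synthetic imbalanced dataset and a crude per-class Gaussian sampler standing in for the GAN generator, comparing the SVM before and after augmentation.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# Imbalanced synthetic stand-in for the 380-sample loan dataset (approved vs. defaulted).
X, y = make_classification(n_samples=380, n_features=8, weights=[0.8, 0.2], random_state=4)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, stratify=y, random_state=4)

baseline = SVC().fit(Xtr, ytr)
print("baseline test accuracy:", round(baseline.score(Xte, yte), 3))

# Crude per-class Gaussian sampler standing in for the ActGAN generator.
def synthesize(X_class, n, rng):
    mean, cov = X_class.mean(axis=0), np.cov(X_class, rowvar=False)
    return rng.multivariate_normal(mean, cov, size=n)

rng = np.random.default_rng(4)
X_aug, y_aug = [Xtr], [ytr]
for label in np.unique(ytr):
    synth = synthesize(Xtr[ytr == label], 200, rng)
    X_aug.append(synth)
    y_aug.append(np.full(200, label))

# Retrain on the original training rows plus the synthetic rows and compare on the same test set.
augmented = SVC().fit(np.vstack(X_aug), np.concatenate(y_aug))
print("augmented test accuracy:", round(augmented.score(Xte, yte), 3))
```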
In the event of an epileptic attack, a Field-Programmable Gate Array (FPGA)-accelerated Convolutional Neural Network (CNN) model is paired with electroencephalogram (EEG) acquisition equipment to produce a reliable production system that can be used in clinical medical diagnosis. Additionally, this study incorporates cybersecurity to protect both the epileptic patient's data and the prediction system. Epilepsy is a frequent neurological disorder that manifests as recurrent seizures, a sign that rapid intervention is necessary to minimize adverse events and improve patient health. The study provides a new real-time design for predicting epileptic seizures based on an Application-Specific Integrated Circuit (ASIC)-based Very Large-Scale Integration (VLSI) architecture. As a first step, EEG data from epilepsy patients were captured and pre-processed, and faults and artefacts in the data were removed. The data were then divided into short time windows and classified as ictal, pre-seizure, or interictal. The CNN model was adapted for EEG signal analysis and trained with the categorized data. This technique is more effective and efficient for accurately predicting epileptic seizures, which is advantageous for patient monitoring and treatment. Additionally, cybersecurity measures were implemented to secure patient data and the prediction system.
DOI: https://doi.org/10.54216/JISIoT.130117
Vol. 13 Issue. 1 PP. 234-250, (2024)
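A rough, hypothetical sketch of the windowing and three-class classification steps above: random placeholder EEG is segmented into fixed-length windows and fed to a compact 1D CNN of the kind that could later be quantized for FPGA/ASIC deployment; the cybersecurity and hardware layers are not shown.

```python
import numpy as np
from tensorflow import keras

# Placeholder single-channel EEG at 256 Hz, segmented into 2-second windows.
fs, win = 256, 2 * 256
eeg = np.random.randn(60 * fs).astype("float32")            # one minute of signal
windows = eeg[: len(eeg) - len(eeg) % win].reshape(-1, win, 1)
labels = np.random.randint(0, 3, size=len(windows))         # interictal / pre-seizure / ictal

# Compact 1D CNN for the three-class seizure-state classification.
model = keras.Sequential([
    keras.layers.Input(shape=(win, 1)),
    keras.layers.Conv1D(8, 7, activation="relu"),
    keras.layers.MaxPooling1D(4),
    keras.layers.Conv1D(16, 5, activation="relu"),
    keras.layers.GlobalAveragePooling1D(),
    keras.layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(windows, labels, epochs=2, verbose=0)
```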
Heart disease is considered one of the deadliest diseases and has resulted in increased death rates across the globe. Predicting heart disease requires vast experience along with advanced knowledge. IoT and AI are two emerging technologies that help in heart disease prediction; however, high diagnostic accuracy with minimal processing overhead continues to be a design problem for researchers. To address this problem, this paper develops an intelligent IoT structure for better prediction of cardiac diseases employing Harris Hawk Optimized Gated Modified Recurrent Units (HHO-M-GRU). The paper also proposes real-time data collection using IoT wearable test beds comprising electrocardiography (ECG) sensors interfaced with MICOTT boards and ESP8266 transceivers. The acquired data are saved on the cloud for later processing. The proposed deep learning network is utilized to evaluate the received heart data and predict heart diseases. Additionally, the suggested HHO-GRU is trained with versatile datasets consisting of normal and abnormal stages of heart disease. A thorough experiment is conducted by calculating the suggested model's performance measures, including accuracy, precision, recall, specificity, and F1-score. The proposed framework was implemented in Keras libraries with TensorFlow 2.1.1 as the backend. Furthermore, prediction performance and complexity overhead are compared with other cutting-edge deep learning algorithms already in use to demonstrate the model's superiority in predicting heart diseases. According to the results, the suggested approach beats previous learning models in terms of prediction accuracy (99%) and minimal computing overhead.
DOI: https://doi.org/10.54216/JISIoT.130119
Vol. 13 Issue. 1 PP. 259-275, (2024)
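A minimal Keras sketch of a GRU-based classifier of the kind described above, with random placeholder ECG windows and hand-fixed hyperparameters (no HHO tuning); it only illustrates the model structure, not the paper's full IoT pipeline.

```python
import numpy as np
from tensorflow import keras

# Placeholder ECG sequences streamed from wearable sensors (2-second windows at 125 Hz).
X = np.random.rand(300, 250, 1).astype("float32")
y = np.random.randint(0, 2, size=300)       # normal vs. abnormal rhythm

# Gated recurrent unit network; units and learning rate are fixed by hand here,
# whereas the paper tunes such settings with the Harris Hawk optimizer.
model = keras.Sequential([
    keras.layers.Input(shape=(250, 1)),
    keras.layers.GRU(32),
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer=keras.optimizers.Adam(learning_rate=1e-3),
              loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
print("training accuracy:", model.evaluate(X, y, verbose=0)[1])
```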