Diabetes mellitus, a chronic metabolic disorder defined by hyperglycemia, is a major threat to health worldwide. It falls into two primary categories, Type 1 and Type 2, each with its own causes and treatment approaches. Prompt detection and accurate outcome prediction are essential for effective disease management, and machine learning and data mining are becoming increasingly important tools in this setting. The present study analyses the use of machine learning models, specifically Voting Ensembles, for diabetes prediction, with a focus on their accuracy. A Voting Ensemble of LightGBM, XGBoost, and AdaBoost is fine-tuned using GridSearchCV, with and without Interquartile Range (IQR) pre-processing to manage outliers. A comparative performance analysis illustrates the benefits of outlier management: the Voting Ensemble paired with IQR pre-processing achieves higher accuracy, precision, and AUC score, making it better suited for diabetes prediction, although the variant without IQR remains a workable alternative. The study highlights both the significance of outlier management in healthcare analytics and the effect of data-preparation procedures on the accuracy of prediction models.
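As a concrete illustration of the IQR rule referenced above, the following minimal pure-Python sketch drops samples falling outside [Q1 − 1.5·IQR, Q3 + 1.5·IQR]. The data values and the 1.5 multiplier are conventional illustrations, not figures taken from the paper, which applies the filter feature-wise before training the ensemble.

```python
# Minimal sketch of Interquartile Range (IQR) outlier filtering, the
# pre-processing step compared in the study. Pure Python, illustrative data.

def quartiles(values):
    """Return (Q1, Q3) using linear interpolation between ranks."""
    s = sorted(values)
    def percentile(p):
        idx = p * (len(s) - 1)
        lo = int(idx)
        hi = min(lo + 1, len(s) - 1)
        return s[lo] + (s[hi] - s[lo]) * (idx - lo)
    return percentile(0.25), percentile(0.75)

def iqr_filter(values, k=1.5):
    """Keep only values inside [Q1 - k*IQR, Q3 + k*IQR]."""
    q1, q3 = quartiles(values)
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return [v for v in values if lo <= v <= hi]

data = [88, 92, 95, 97, 99, 101, 103, 250]   # 250 is an obvious outlier
print(iqr_filter(data))   # → [88, 92, 95, 97, 99, 101, 103]
```

In the paper's pipeline, the filtered data would then be passed to the GridSearchCV-tuned Voting Ensemble; that stage is omitted here since it depends on the LightGBM/XGBoost/AdaBoost libraries.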
DOI: https://doi.org/10.54216/JISIoT.120101
Vol. 12 Issue. 1 PP. 08-19, (2024)
This article introduces the Grey Wolf Optimizer (GWO) algorithm, a novel method aimed at tackling the challenges posed by the multi-objective Optimal Power Flow (OPF) problem. Drawing inspiration from the foraging behavior of grey wolves, GWO stands apart from traditional approaches by enhancing initial solutions without relying on gradient data from the objective function. In the domain of power system optimization, the OPF problem is widely acknowledged, involving constraints related to generator parameters, valve-point loading, reactive power, and active power. The proposed GWO technique is applied to the IEEE 14-bus and 30-bus power systems, targeting four case objectives: minimizing cost with a quadratic cost function, minimizing cost with valve-point loading included, minimizing power loss, and minimizing both cost and losses simultaneously. For the IEEE 14-bus system, which must meet a power demand of 259 MW, GWO yields optimal costs of 827.0056 $/hr, 833.4691 $/hr, 1083.2410 $/hr, and 852.2255 $/hr across the four cases. Similarly, for the IEEE 30-bus system, which must satisfy a demand of 283.4 MW, GWO achieves optimal costs of 801.8623 $/hr, 825.9321 $/hr, 1028.6309 $/hr, and 850.4794 $/hr for the respective cases. These results are compared with existing research outcomes, highlighting the efficiency and cost-effectiveness of the GWO algorithm relative to alternative methods for solving the OPF problem.
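The core GWO position update can be sketched on a toy unconstrained objective. This is a minimal sketch under stated assumptions: a simple sphere function stands in for the generation-cost model, and the paper's constrained IEEE-bus formulation (valve-point loading, power balance, etc.) is not reproduced.

```python
import random

# Minimal Grey Wolf Optimizer sketch: each wolf is pulled toward the three
# best solutions (alpha, beta, delta); the control parameter `a` decays
# from 2 to 0, shifting the swarm from exploration to exploitation.

def gwo(objective, dim, bounds, n_wolves=20, iters=200, seed=1):
    rng = random.Random(seed)
    lo, hi = bounds
    wolves = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_wolves)]
    for t in range(iters):
        wolves.sort(key=objective)
        alpha, beta, delta = (w[:] for w in wolves[:3])   # copy the leaders
        a = 2 - 2 * t / iters                  # decreases linearly 2 -> 0
        for w in wolves:
            for d in range(dim):
                pos = 0.0
                for leader in (alpha, beta, delta):
                    A = 2 * a * rng.random() - a
                    C = 2 * rng.random()
                    D = abs(C * leader[d] - w[d])
                    pos += leader[d] - A * D
                w[d] = min(hi, max(lo, pos / 3))  # average pull of the leaders
    wolves.sort(key=objective)
    return wolves[0]

best = gwo(lambda x: sum(v * v for v in x), dim=2, bounds=(-10, 10))
print(best)   # a point near the optimum [0, 0]
```

For the OPF cases in the article, the lambda would be replaced by the (constrained) fuel-cost or loss function, with penalty terms for generator limits.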
DOI: https://doi.org/10.54216/JISIoT.120102
Vol. 12 Issue. 1 PP. 20-32, (2024)
Brain tumors (BTs) are abnormal cell growths arising from the brain or its surrounding tissue. They fall into two major types: malignant (cancerous) and benign (non-cancerous). Detecting and classifying BTs is critical for understanding their mechanisms. Magnetic Resonance Imaging (MRI) is helpful but time-consuming, requiring expert manual examination. Recent developments in Computer-assisted Diagnosis (CAD) and deep learning (DL) allow more reliable BT detection. Typical machine learning (ML) depends on handcrafted features, whereas DL achieves accurate outcomes without such manual extraction. DL methods, particularly convolutional neural networks (CNNs) and recurrent neural networks (RNNs), have delivered strong results in medical image analysis, including the classification and recognition of BTs in MRI and CT scans. Thus, this study designs an automated BT Detection and Classification method using the Osprey Optimization Algorithm with Deep Learning (BTDC-OOADL) on MRI images. In the proposed BTDC-OOADL algorithm, a Wiener filtering (WF) model is applied to eliminate noise, and the MobileNetV2 model serves as the feature extractor. Meanwhile, the OOA is utilized for optimal hyperparameter selection of the MobileNetV2 model. Finally, a graph convolutional network (GCN) model is deployed for the classification and recognition of BTs. The BTDC-OOADL methodology is tested on a benchmark dataset, and the simulation values confirm its improvement over recent approaches.
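The pipeline's first stage, Wiener-filter denoising, can be sketched in one dimension. This is a local-statistics variant shown for illustration only; the paper applies the 2-D image form to MRI slices, and the window size and noise variance below are assumptions.

```python
# 1-D local-statistics Wiener filter sketch: each sample is shrunk toward
# its local mean in proportion to how much the local variance exceeds the
# assumed noise variance (strong shrinkage in flat regions, little at edges).

def wiener_1d(signal, window=5, noise_var=0.05):
    half = window // 2
    n = len(signal)
    out = []
    for i in range(n):
        seg = signal[max(0, i - half):min(n, i + half + 1)]
        mean = sum(seg) / len(seg)
        var = sum((v - mean) ** 2 for v in seg) / len(seg)
        gain = max(var - noise_var, 0.0) / var if var > 0 else 0.0
        out.append(mean + gain * (signal[i] - mean))
    return out
```

In the BTDC-OOADL flow, the denoised image would then be fed to MobileNetV2 for feature extraction; those stages depend on pretrained-model libraries and are omitted here.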
DOI: https://doi.org/10.54216/JISIoT.120103
Vol. 12 Issue. 1 PP. 33-44, (2024)
Cloud computing (CC) is a modern computing paradigm that provides virtualized computing services as a utility to cloud service users. Problems caused by ineffective mapping of tasks to cloud resources frequently occur in cloud environments. Task scheduling (TS) therefore means the rational allocation of computational tasks to computing resources under given constraints in an IaaS cloud network, assigning tasks to the most appropriate resources to reach one or more goals. Choosing a suitable scheduling technique that raises CC resource efficiency while maintaining high quality-of-service (QoS) assurances thus remains a significant problem that continues to attract research interest. Metaheuristic techniques have shown remarkable efficacy in supplying near-optimal scheduling solutions for complex, large-sized problems, and a rising number of independent studies have examined the QoS rendered by TS approaches. Therefore, this study develops an Energy Efficient Task Scheduling Strategy using a Modified Coot Optimization Algorithm (EETSS-MCOA) for the CC environment. The EETSS-MCOA method derives features and applies MCOA to schedule tasks. The MCOA algorithm combines an adaptive β-hill-climbing concept with the COA for enhanced task scheduling. The conventional COA is inspired by the swarming behavior of birds known as coots and follows two distinct stages of bird movement on the water surface. The experimental results of the EETSS-MCOA model, validated on the CloudSim tool, are found to be better than those of existing algorithms.
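The β-hill-climbing component blended into MCOA can be sketched on a toy scheduling instance. A candidate schedule maps each task to a VM; the N-operator nudges one assignment and the β-operator randomly reassigns each task with probability β. The task lengths, VM speeds, and makespan objective below are illustrative assumptions, not values from the paper, and the full COA swarm stages are omitted.

```python
import random

def makespan(assign, lengths, speeds):
    """Finish time of the busiest VM under a task-to-VM assignment."""
    load = [0.0] * len(speeds)
    for task, vm in enumerate(assign):
        load[vm] += lengths[task] / speeds[vm]
    return max(load)

def beta_hill_climb(lengths, speeds, beta=0.1, iters=2000, seed=7):
    rng = random.Random(seed)
    n_vm = len(speeds)
    best = [rng.randrange(n_vm) for _ in lengths]
    best_cost = makespan(best, lengths, speeds)
    for _ in range(iters):
        cand = best[:]
        cand[rng.randrange(len(cand))] = rng.randrange(n_vm)       # N-operator
        cand = [rng.randrange(n_vm) if rng.random() < beta else v  # beta-operator
                for v in cand]
        cost = makespan(cand, lengths, speeds)
        if cost < best_cost:                       # greedy acceptance
            best, best_cost = cand, cost
    return best, best_cost

lengths = [4, 8, 2, 6, 5, 3, 7, 1]   # task sizes (e.g. millions of instructions)
speeds = [1.0, 2.0, 1.5]             # VM processing speeds
best, cost = beta_hill_climb(lengths, speeds)
print(cost)   # approaches the lower bound of 8.0 for this instance
```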
DOI: https://doi.org/10.54216/JISIoT.120104
Vol. 12 Issue. 1 PP. 45-56, (2024)
The escalating complexity of malware poses a significant challenge to cybersecurity, necessitating innovative approaches that keep pace with its rapid evolution. Contemporary malware analysis techniques underscore the urgent need for solutions that can adapt to the dynamic functionality of evolving malware. In this context, Quantum Neural Networks (QNNs) emerge as a cutting-edge approach to malware analysis, promising to overcome the limitations of conventional methods. Our exploration of QNNs focuses on their valuable applications, particularly in real-time malware analysis, and we examine their advantages in contrast to the conventional machine-learning methods employed in malware detection and classification. The proposed QNN showcases a unique capability to handle complex patterns, emphasizing its potential to achieve heightened accuracy. Our contribution extends to a dedicated framework for QNN-based malware analysis that harnesses the computational capabilities of quantum computing. The framework is structured around three pivotal components: Malware Feature Extraction utilizes quantum feature-extraction techniques to identify relevant features from malware samples; Malware Classification employs a QNN classifier to categorize samples as benign or malicious; and Real-Time Analysis enables instantaneous examination of samples by integrating feature extraction and classification within a streaming data pipeline. The proposed methodology undergoes comprehensive evaluation on a benchmark dataset of malware samples.
The proposed QNN demonstrated a high accuracy of 0.95, outperforming other quantum models such as Quantum Support Vector Machines (QSVM) and Quantum Decision Trees (QDT), as well as classical models like Random Forest (RF), Support Vector Machines (SVM), and Decision Trees (DT), on the Malware DB dataset. The results affirm the framework's high accuracy and low latency, establishing its suitability for real-time malware analysis. These findings underscore the potential of QNNs to revolutionize malware evaluation and strengthen real-time defenses against cyberattacks. While our research demonstrates promising outcomes, further exploration and development in this domain are needed to fully exploit what QNNs offer for cybersecurity applications.
DOI: https://doi.org/10.54216/JISIoT.120105
Vol. 12 Issue. 1 PP. 57-69, (2024)
Recently, emotion detection using EEG signals has gained popularity in the domain of Human-Computer Interaction (HCI). Electroencephalography (EEG) is a non-invasive approach that records the electrical activity of the brain through electrodes placed on the scalp. An emotion recognition approach is significant not only for healthy people but also for disabled persons, detecting emotional changes for a variety of applications. It is important to realize that emotion recognition from EEG signals is a difficult task owing to the complex and subjective nature of emotions. In recent times, machine learning (ML) algorithms such as Random Forests and Support Vector Machines (SVM), and deep learning (DL) systems such as Recurrent Neural Networks (RNN) and Convolutional Neural Networks (CNN), have been trained on features extracted from EEG and the associated emotional labels to classify the user's emotional state. This study presents an Automated EEG-based Emotion Detection method using the Bonobo Optimizer with Deep Learning (AEEGED-BODL) for HCI applications. The goal of the study is to analyze EEG signals for the classification of several kinds of emotions. To achieve this, the AEEGED-BODL technique uses the Higuchi fractal dimension (HFD) approach to extract features from the EEG signals, and a quasi-recurrent neural network (QRNN) for the detection and classification of distinct kinds of emotions. Furthermore, the Bonobo Optimizer (BO) is employed for optimal hyperparameter selection of the QRNN model, which helps attain an improved detection rate. The AEEGED-BODL algorithm was validated on an EEG signal database, and the comprehensive results show that it outperforms other recent approaches.
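The Higuchi fractal dimension feature named above is a concrete, self-contained algorithm: for a series X of length N, curve lengths L(k) are computed at scales k = 1..kmax, and the HFD is the slope of log L(k) versus log(1/k). Below is a minimal sketch (the kmax value and the sine test signal are illustrative choices, not from the paper).

```python
import math

def higuchi_fd(x, kmax=8):
    """Higuchi fractal dimension of a 1-D series (larger = more irregular)."""
    n = len(x)
    pts = []
    for k in range(1, kmax + 1):
        lk = 0.0
        for m in range(k):                       # k shifted sub-series
            count = (n - 1 - m) // k             # number of lag-k steps
            length = sum(abs(x[m + i * k] - x[m + (i - 1) * k])
                         for i in range(1, count + 1))
            lk += length * (n - 1) / (count * k * k)
        lk /= k                                  # average over the k sub-series
        pts.append((math.log(1.0 / k), math.log(lk)))
    # least-squares slope of log L(k) against log(1/k)
    mx = sum(p[0] for p in pts) / len(pts)
    my = sum(p[1] for p in pts) / len(pts)
    num = sum((px - mx) * (py - my) for px, py in pts)
    den = sum((px - mx) ** 2 for px, py in pts)
    return num / den

sine = [math.sin(2 * math.pi * i / 100) for i in range(1000)]
print(round(higuchi_fd(sine), 2))   # close to 1.0 for a smooth signal
```

A smooth signal yields an HFD near 1, while white noise approaches 2, which is why the feature is informative for distinguishing EEG regimes.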
DOI: https://doi.org/10.54216/JISIoT.120106
Vol. 12 Issue. 1 PP. 70-83, (2024)
Lung cancer detection is the process of detecting the presence of lung tumors or abnormalities in the lungs. Early diagnosis is crucial for increasing the chances of patient survival and successful treatment. Compared to X-rays, Computed Tomography (CT) images are more sensitive and are increasingly used for the diagnosis and screening of lung tumors: they provide complete cross-sectional images of the lungs and can even detect small lesions. AI and machine learning (ML) approaches are commonly employed to analyse medical images (e.g., CT scans) and detect lung cancer; such algorithms can help radiologists identify patterns indicative of cancer or subtle abnormalities. Medical diagnosis, particularly in complex diseases such as lung cancer, frequently involves ambiguity, and a diagnostic system can alleviate it by cross-verifying findings from various sources through multimodal feature fusion. Multimodal feature fusion using deep learning (DL) leverages the ability of deep neural networks to combine data from different modalities or sources for better robustness in applications such as natural language processing, imaging, and data analysis. This study introduces a Multimodal Feature Fusion using an Optimal Transfer Learning Method for Lung Cancer Detection and Classification (MFFOTL-LCDC) methodology on CT images. The chief objective of the MFFOTL-LCDC methodology is to exploit the feature-fusion process for the identification and classification of lung tumors. To attain this, the MFFOTL-LCDC model fuses feature vectors derived by three DL models, SqueezeNet, CapsNet, and Inception v3, and applies the remora optimization algorithm (ROA) for the hyperparameter choice of the three DL models. For lung cancer recognition, the MFFOTL-LCDC algorithm exploits the deep extreme learning machine (DELM) algorithm.
A series of simulations was conducted to validate the lung cancer recognition outcomes of the MFFOTL-LCDC methodology, and the extensive results confirm its improvement over recent DL approaches.
DOI: https://doi.org/10.54216/JISIoT.120107
Vol. 12 Issue. 1 PP. 84-96, (2024)
Sentiment analysis (SA) intends to categorize a text according to the sentimental polarity of individual opinions, such as neutral, positive, or negative. While much research concentrates on drawing features from English text, the study of Hindi is limited because of the grammatical and morphological complexities of the Hindi language, which make sentiment classification of Hindi short text a tedious process. Hindi has complicated morphology and variation in phonetics, spelling, and vocabulary, and the common usage of numerous dialects of Hindi in India produces a massive volume of glossaries. In this study, we introduce Spider Monkey Optimization with a stacked recurrent neural network (SMO-SRNN) for short-text SA on a Hindi corpus. The proposed SMO-SRNN technique aims to identify and categorize Hindi short text into three distinct classes: negative, positive, and neutral. In the presented method, the SRNN model is exploited for the investigation and classification of sentiment, and the SMO model is employed to fine-tune the hyperparameters of the SRNN. A detailed set of experiments confirms the high efficiency of the SMO-SRNN algorithm, and the comparative outcomes highlight its enhancement over other methods.
DOI: https://doi.org/10.54216/JISIoT.120108
Vol. 12 Issue. 1 PP. 97-109, (2024)
As the Internet and computer technology develop, more gadgets are linked wirelessly, expanding the Internet of Things (IoT). The IoT is a huge network of sensors and the gateways that link them. IoT devices generate images, audio, video, digital signals, and more by interacting with their surroundings, and all IoT equipment and apps may connect to the Internet to exchange resources and information: everything in our world is connected. Due to the broad deployment and massive scale of IoT devices, access control over device resources is problematic, and obtaining IoT device resources unlawfully has major implications since they include personal and sensitive information. Many systems and scenarios employ access control technologies to secure resources; discretionary, identity-based, and mandatory access control (MAC) schemes are traditional. However, these centralized methods suffer from single points of failure, scalability issues, poor dependability, and low throughput. IoT devices may belong to several organizations or people, be mobile, and perform poorly, making centralized access management problematic. Blockchain is an innovative data management solution that uses distributed storage to stabilize data: a transaction writes each data read or modification record into a block, and the blocks are connected as a chain using hashes to maintain data integrity. Data are synchronized between nodes via a peer-to-peer network and a consensus process, assuring consistency for blockchain network participants. Zero Trust-Based Blockchain, an open-source blockchain development platform, offers efficient consensus methods, high throughput, smart contracts, and support for multiple organizations and ledgers. The proposed work builds the Fabric-IoT access control system using Zero Trust-Based Blockchain, applying blockchain technology to IoT access control; distributed processing and storage of IoT data may solve these critical issues.
Thus, developing distributed IoT-based e-healthcare services using blockchain technology becomes feasible. Fabric-IoT can keep records, handle dynamic access control, and solve the IoT access control problem using a distributed architecture.
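The hash-chained ledger described above can be sketched in a few lines: each access record is written into a block that stores the hash of its predecessor, so tampering with any earlier block breaks the chain. The record fields are illustrative assumptions; the paper's Fabric-IoT system runs on a full blockchain platform with consensus and smart contracts, not this toy.

```python
import hashlib
import json

def block_hash(block):
    """Deterministic SHA-256 digest of a block's canonical JSON form."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain, record):
    """Link a new access record to the hash of the previous block."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev": prev, "record": record})
    return chain

def verify(chain):
    """Recompute every link; any modified block invalidates its successor."""
    for i in range(1, len(chain)):
        if chain[i]["prev"] != block_hash(chain[i - 1]):
            return False
    return True

chain = []
append_block(chain, {"device": "sensor-17", "action": "read", "granted": True})
append_block(chain, {"device": "cam-03", "action": "write", "granted": False})
print(verify(chain))                        # True on the untampered chain
chain[0]["record"]["granted"] = False
print(verify(chain))                        # False: tampering is detected
```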
DOI: https://doi.org/10.54216/JISIoT.120109
Vol. 12 Issue. 1 PP. 110-128, (2024)
In the realm of cybersecurity, the incessant evolution of network attacks necessitates advanced and robust intrusion detection systems (IDS). The major issues with these systems are numerous: false positive/negative alarms, delayed response and detection times, the size of the processed data, adaptability to future threats, scalability, difficulty in detecting distributed attacks, and downtime (fault tolerance). We propose a distributed framework aimed at enhancing network security by effectively identifying subtle deviations from normal network behavior. This is achieved through transfer learning based on artificial neural networks and a support vector machine (SVM), capitalizing on their complementary strengths in recognizing complex patterns and addressing high-dimensional datasets. To validate the efficacy of the proposed approach, the NSL-KDD dataset is utilized within a distributed IDS architecture consisting of several intrusion detection nodes representing subnetworks, where each node comprises two agents that work collaboratively. To avoid interference between analysis agents, a network agents manager monitors the functioning of the nodes and displays the results of each vulnerability-detecting node in each subnet separately; such communication between agents should significantly reduce false-positive alarms (FPAs). The detection engine extracts the relevant features of network attacks, which addresses the difficulty SVMs face with huge data volumes, supports adaptation to future threats, and enables real-time detection of well-known distributed denial-of-service (DDoS) attacks. The system is highly scalable, since the number of intrusion detection nodes can be increased if necessary, and central processing is avoided to circumvent system-wide failure: processing and decision-making take place at the detection-node level within each subnet.
DOI: https://doi.org/10.54216/JISIoT.120110
Vol. 12 Issue. 1 PP. 129-143, (2024)
The growing demand for wireless applications has led to network congestion issues. In response, network operators have recommended a transition to higher frequencies, particularly within the unlicensed millimeter-wave (mm-wave) spectrum. This shift aims to fulfill users' desire for rapid data transmission within personal networks, whether at home or in the office, exemplified by technologies like WiGig and indoor applications. However, a significant challenge in achieving high data transfer speeds within this frequency range lies in the design of the antenna, which must strike a balance between size and performance. Microstrip Patch (MP) antennas have gained recognition for their compact form and seamless integration into mobile communication systems. Nonetheless, they grapple with limitations such as poor gain and narrow bandwidth, largely attributable to surface waves that degrade antenna performance. In this study, we introduce, design, and optimize an MP antenna tailored for 60 GHz applications. To enhance its performance, we introduce various mushroom-like Electromagnetic Bandgap (EBG) structures, which mitigate the surface-wave propagation that affects the antenna. Additionally, to create a frequency-tunable antenna, the variable conductivity of graphene is used in the form of slots implanted on the patch and tuned by a DC voltage applied to the slots. Finally, the parameters of the MP antenna are enhanced based on simulation results obtained through CST software.
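For orientation, a first-cut patch size at 60 GHz can be computed with the standard transmission-line model of the microstrip patch. This is a back-of-envelope sketch, not the paper's optimized CST/EBG design; the substrate permittivity and height below are assumed values for a typical thin mm-wave laminate.

```python
import math

# Transmission-line-model sizing of a rectangular microstrip patch.
c = 3e8            # speed of light, m/s
f = 60e9           # target resonant frequency, Hz
er = 2.2           # substrate relative permittivity (assumed)
h = 0.127e-3       # substrate height, m (assumed)

W = c / (2 * f) * math.sqrt(2 / (er + 1))                     # patch width
e_eff = (er + 1) / 2 + (er - 1) / 2 / math.sqrt(1 + 12 * h / W)
dL = 0.412 * h * ((e_eff + 0.3) * (W / h + 0.264)) / \
     ((e_eff - 0.258) * (W / h + 0.8))                        # fringing extension
L = c / (2 * f * math.sqrt(e_eff)) - 2 * dL                   # patch length

print(f"W = {W * 1e3:.3f} mm, L = {L * 1e3:.3f} mm")
```

Both dimensions come out on the order of 2 mm, underscoring why fabrication tolerances and surface-wave suppression (the role of the EBG structures above) dominate the design at these frequencies.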
DOI: https://doi.org/10.54216/JISIoT.120111
Vol. 12 Issue. 1 PP. 144-157, (2024)