Fusion: Practice and Applications

Journal DOI

https://doi.org/10.54216/FPA


ISSN (Online): 2692-4048 | ISSN (Print): 2770-0070

Enhancing Heart Disease Diagnosis Using Machine Learning Classifiers

Ahmed A. H. Alkurdi

Heart disease is the leading cause of death worldwide, with cardiovascular diseases claiming a staggering 18 million lives per year. Many of these lives could be saved through early and accurate diagnosis and prediction, so automating the diagnostic process is both crucial and achievable given the rise of machine learning and deep learning. However, patient data is riddled with issues that must be resolved before it can be used for heart disease prediction. This research aims to improve the accuracy of heart disease diagnosis by combining data preprocessing techniques with classification algorithms, which may reveal subtle predictive clues before any major symptoms arise. The study employs the Heart Disease UCI dataset and follows a systematic approach to training machine learning models for heart disease diagnosis. A variety of preprocessing techniques prepare the data for model training: mean imputation of missing values, normalization, the Synthetic Minority Over-sampling Technique (SMOTE), and correlation analysis. The preprocessed data is then fed into four popular classification algorithms, Decision Tree, Random Forest, Support Vector Machine (SVM), and k-Nearest Neighbors (k-NN), providing a broad evaluation of the dataset. The proposed methodology demonstrates promising results, as measured by accuracy, precision, recall, F1 score, and ROC AUC, that clearly highlight the value of data preprocessing. In summary, preprocessing and feature selection are essential when dealing with datasets containing various challenges, and they play a central role in building a trustworthy and precise model for heart disease prediction.
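The SMOTE step of the preprocessing pipeline above can be illustrated with a minimal sketch: synthesize new minority-class samples by interpolating between a real minority sample and a randomly chosen minority neighbour until the classes are balanced. The toy data and helper below are invented for illustration, not the paper's implementation:

```python
import random

def smote_oversample(minority, target_count, seed=0):
    """Generate synthetic minority samples by linear interpolation
    between two randomly chosen real minority samples."""
    rng = random.Random(seed)
    synthetic = list(minority)
    while len(synthetic) < target_count:
        a = rng.choice(minority)
        b = rng.choice(minority)
        t = rng.random()  # interpolation factor in [0, 1)
        synthetic.append(tuple(x + t * (y - x) for x, y in zip(a, b)))
    return synthetic

# Toy 2-D minority class (hypothetical feature values), to be grown
# until it matches a majority class of 6 samples.
minority = [(1.0, 2.0), (1.5, 1.8)]
balanced = smote_oversample(minority, target_count=6)
print(len(balanced))  # minority class now matches the majority count
```

Because each synthetic point lies on the segment between two real samples, the oversampled class stays inside the region the real minority data occupies.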


Doi: https://doi.org/10.54216/FPA.130101

Vol. 13 Issue. 1 PP. 08-18, (2023)

Fusion of Water Evaporation Optimization and Great Deluge: A Dynamic Approach for Benchmark Function Solving

Saman M. Almufti

This work explores the synergy between the Water Evaporation Optimization Algorithm (WEOA) and the Great Deluge Algorithm (GDA) to create a novel fusion model, and investigates the efficacy of combining these two powerful optimization techniques on benchmark problems. The fusion model incorporates WEOA's exploration-exploitation dynamics and GDA's global search capabilities; by merging their strengths, it seeks to enhance convergence efficiency and solution quality. The study presents an experimental analysis of the fusion model's performance across a range of benchmark functions, evaluating its ability to escape local optima and converge towards global optima, and the results provide insights into the effectiveness of the fusion model and its potential for addressing complex optimization challenges. A comprehensive performance analysis applies the proposed fusion model to a curated set of widely acknowledged benchmark functions, renowned for their role in evaluating optimization algorithms. By rigorously evaluating the convergence characteristics, solution quality, and computational efficiency of the algorithm, the study aims to provide a thorough understanding of the strengths and limitations of WEOA. Meticulous comparisons with established optimization techniques illuminate WEOA's aptitude for addressing diverse optimization challenges across a spectrum of problem landscapes. The resulting analytical insights not only advance the understanding of WEOA's applicability but also furnish valuable guidance for researchers and practitioners in search of robust optimization methodologies.
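The GDA half of the fusion can be sketched in a few lines. The following minimal illustration applies the Great Deluge acceptance rule to the sphere benchmark; the linear level schedule, step size, and search bounds are illustrative assumptions, not the paper's settings:

```python
import random

def sphere(x):
    """Benchmark function with global minimum 0 at the origin."""
    return sum(v * v for v in x)

def great_deluge(f, dim=2, iters=2000, seed=1):
    """Minimise f with a Great Deluge rule: accept improvements, or any
    candidate whose cost stays below a steadily receding water level."""
    rng = random.Random(seed)
    x = [rng.uniform(-5, 5) for _ in range(dim)]
    fx = f(x)
    best, level = fx, fx              # water level starts at the initial cost
    decay = fx / iters                # linear "rain speed" (assumed schedule)
    for _ in range(iters):
        cand = [v + rng.gauss(0, 0.3) for v in x]
        fc = f(cand)
        if fc <= fx or fc <= level:   # under the water level -> accept
            x, fx = cand, fc
            best = min(best, fc)
        level = max(level - decay, best)  # level recedes, never below best
    return best

print(round(great_deluge(sphere), 4))
```

Accepting occasional worse moves while the level is high gives the exploration that plain hill climbing lacks, which is the property the fusion model borrows from GDA.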


Doi: https://doi.org/10.54216/FPA.130102

Vol. 13 Issue. 1 PP. 19-36, (2023)

Regression Analysis and Artificial Neural Network Approach to Prediction of Surface Roughness in Milling Process

Zaineb Hameed Neamah , Ahmad Al-Talabi , Asma A. Mohammed Ali

Surface roughness (Ra) has a significant influence on the fatigue strength, corrosion resistance, and aesthetic appeal of machine components, making Ra a crucial manufacturing process parameter. This study predicts the Ra of aluminum alloy Al-7024 after milling, using regression analysis and artificial neural network (ANN) modeling. Because the cutting parameters must be set properly to obtain a good surface finish, spindle speed, feed rate, and depth of cut were chosen as predictors. Across 31 study cases, regression and ANN models were used to examine how these parameters affect Ra, together with measurement of the surface roughness and comprehensive Ra and regression analysis. The findings indicate that both the regression and ANN models predicted Ra, and the two models' predictions converged. This convergence highlights the promise of the methodology used in this work to forecast Ra in the milling of Al-7024. Whereas the regression model deviated from the actual values by roughly 1% on average, the ANN model predicted the surface roughness accurately.
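The regression half of the approach can be sketched with ordinary least squares on the three predictors. The milling data below is invented for illustration; it is not the study's 31 measured cases:

```python
import numpy as np

# Hypothetical milling data (not the paper's measurements):
# columns = spindle speed (rpm), feed rate (mm/min), depth of cut (mm).
X = np.array([[1000, 100, 0.5],
              [1500, 150, 1.0],
              [2000, 200, 1.5],
              [2500, 100, 0.5],
              [3000, 150, 1.0],
              [1500, 200, 1.5]], dtype=float)
ra = np.array([1.8, 2.1, 2.6, 1.4, 1.7, 2.7])  # invented Ra values (um)

# Multiple linear regression: Ra ~ b0 + b1*speed + b2*feed + b3*depth.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, ra, rcond=None)

def predict_ra(speed, feed, depth):
    """Predict Ra for one cutting-parameter setting."""
    return coef @ np.array([1.0, speed, feed, depth])

print(predict_ra(1800, 120, 0.8))
```

An ANN would replace the linear form with a learned nonlinear mapping of the same three inputs, which is why the two models can be compared case by case as the abstract describes.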


Doi: https://doi.org/10.54216/FPA.130103

Vol. 13 Issue. 1 PP. 37-48, (2023)

Design of High-Performance Intelligent WSN based-IoT using Time Synchronized Channel Hopping and Spatial Correlation Model

Hamza M. Ridha Al-Khafaji , Refed Adnan Jaleel

Wireless Sensor Networks (WSNs) are among the most significant contributors to the Internet of Things (IoT) and play a significant role in people's lives. The design of traditional WSN-based IoT faces three main problems: data, since the WSN transmits a huge volume of data to the IoT for processing; energy, since sensor nodes rely on limited batteries and conserving energy is crucial; and transmission efficiency. This paper presents a new WSN-based IoT framework that integrates several techniques to solve these problems. To increase the effectiveness of data processing and storage, an intelligent Adaptive Boosting stochastic diffusion search algorithm is applied. The IEEE 802.15.4e time slotted channel hopping (TSCH) protocol is used for its benefits of collision-free and multi-hop transmission. Data reduction at the gateway (GW) level of the network is achieved through spatial correlation between sensors, with the goal of conserving energy. The principal idea of the new framework is to identify the advantages of integrating these techniques: the intelligent Adaptive Boosting stochastic diffusion search algorithm, TSCH, and the spatial correlation model. As a result, the proposed framework can satisfy the need for long battery life in low-rate applications and, at the same time, the need for high throughput in high-rate uses; important performance measures are used to test whether it achieves efficient classification of data.
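The gateway-level spatial-correlation reduction can be sketched as follows: compute pairwise correlations between sensor streams and keep one representative for each group of highly correlated sensors. The toy signals and the 0.95 threshold are illustrative assumptions, not the framework's actual parameters:

```python
import numpy as np

def reduce_by_correlation(readings, threshold=0.95):
    """Gateway-side data reduction sketch: drop a sensor whenever its
    readings are nearly redundant with an already-kept sensor.
    `readings` is shaped (n_sensors, n_samples)."""
    corr = np.corrcoef(readings)
    keep = []
    for i in range(readings.shape[0]):
        if all(abs(corr[i, j]) < threshold for j in keep):
            keep.append(i)
    return keep

# Toy readings: sensors 0 and 1 are nearly identical, sensor 2 differs.
t = np.linspace(0, 1, 50)
readings = np.vstack([np.sin(2 * np.pi * t),
                      np.sin(2 * np.pi * t) + 0.01,
                      np.cos(2 * np.pi * t)])
print(reduce_by_correlation(readings))  # -> [0, 2]
```

Only the kept sensors' data would be forwarded upstream, which is how spatial correlation translates into transmission-energy savings.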


Doi: https://doi.org/10.54216/FPA.130104

Vol. 13 Issue. 1 PP. 49-58, (2023)

Fusion Methodologies for the Assessment of the Effectiveness of Digital Technologies in Commercial Banks

Muyassarzoda Fayzieva

The introduction and active use of modern digital technologies in commercial banks has become a trend in the banking sector and allows improved quality of customer service. Accordingly, the importance of assessing the effectiveness of introducing digital technologies across industries is increasing. Foreign methodologies for assessing the effectiveness of introducing digital technologies in various fields were studied, compared, analyzed, and identified; only a few such methodologies exist for the banking industry. The novelty of this research is the fusion of methodologies for assessing the development of digital technologies in commercial banks and determining the level of use of the digital technologies that commercial banks offer. To increase the effectiveness of introducing digital technologies in commercial banks, the researcher develops and recommends strategic measures for the effective development of digital technology offerings by commercial banks in Uzbekistan.


Doi: https://doi.org/10.54216/FPA.130105

Vol. 13 Issue. 1 PP. 69-78, (2023)

Students’ Performance Prediction in Higher Education During COVID-19 Pandemic Based on Recurrent Forecasting and Singular Spectrum Analysis

Kismiantini , Shazlyn M. Shaharudin , Adi Setiawan , Rasyidhani Aditya Rizky , Salsa-Billa Syahida Al-Hasania , Murugan Rajoo , Hairulnizam Mahdin , Salama A Mostafa

The COVID-19 pandemic has changed human habits worldwide, and the COVID-19 outbreaks in Indonesia forced educational activities such as teaching and learning to be conducted online. Online teaching and learning are now familiar, but the effectiveness of this method still needs to be investigated before it is applied across entire educational systems. This study used predictive modeling with Recurrent Forecasting (RF) derived from Singular Spectrum Analysis (SSA) to gauge the practicality of the online learning method with respect to students' academic performance. The fundamental notion of the predictive fusion model is to improve the effectiveness of several forms of forecast models in SSA by employing a fusion of two parameters: the window length (L) and the number of leading components (r). The study used undergraduate students' grade point averages (GPA) from a public university in Indonesia, collected through online classes during the COVID-19 epidemic. The experiments unveiled that L = 14 yielded the finest prediction using the RF-SSA model, with a root mean square error (RMSE) of 0.20. This finding signifies the ability of RF-SSA to project students' academic performance, according to GPA, for the forthcoming semester. Nonetheless, developing the RF-SSA algorithm for greater effectiveness requires acquiring more data, such as by gathering a bigger group of respondents from several Indonesian universities.
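The SSA machinery underlying RF-SSA can be sketched in numpy: embed the series in a trajectory matrix with window length L, keep the r leading SVD components, and reconstruct by diagonal averaging. The parameters follow the study's notation (L, r), but the sine-plus-noise series below is synthetic, not the study's GPA data:

```python
import numpy as np

def ssa_reconstruct(series, L=14, r=2):
    """Basic SSA sketch: rank-r reconstruction of a 1-D series."""
    N = len(series)
    K = N - L + 1
    # Trajectory (Hankel) matrix: column k is series[k : k + L].
    X = np.column_stack([series[k:k + L] for k in range(K)])
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Xr = (U[:, :r] * s[:r]) @ Vt[:r]       # keep r leading components
    # Diagonal averaging back to a 1-D series.
    rec = np.zeros(N)
    counts = np.zeros(N)
    for i in range(L):
        for k in range(K):
            rec[i + k] += Xr[i, k]
            counts[i + k] += 1
    return rec / counts

t = np.arange(60, dtype=float)
noise = 0.05 * np.random.default_rng(0).normal(size=60)
series = np.sin(2 * np.pi * t / 12) + noise
smooth = ssa_reconstruct(series, L=14, r=2)
print(np.round(smooth[:3], 2))
```

Recurrent forecasting then extends the reconstructed components forward one step at a time, which is how the model projects the next semester's GPA.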


Doi: https://doi.org/10.54216/FPA.130106

Vol. 13 Issue. 1 PP. 79-88, (2023)

A Review of Glowworm Swarm Optimization Meta-Heuristic Swarm Intelligence and its Fusion in Various Applications

Muhammad A. S. Mohd Shahrom , Nurezayana Zainal , Mohamad F. Ab. Aziz , Salama A. Mostafa

Meta-heuristic algorithms draw inspiration from natural phenomena to reach optimal solutions. Glowworm swarm optimization (GSO) is an original swarm intelligence algorithm for optimization that mimics the glow behavior of glowworms and can effectively capture multiple optima of multimodal functions. GSO is a meta-heuristic algorithm used to solve optimization problems, and it has addressed many such problems, especially in science, engineering, and networking. This review therefore surveys the GSO method as applied across industry areas, focusing on the basic flow of GSO, its modifications, and its hybridizations, based on previous studies. The survey finds that engineering applications account for the highest share of GSO use, at 15%, among the sectors considered.


Doi: https://doi.org/10.54216/FPA.130107

Vol. 13 Issue. 1 PP. 89-102, (2023)

A New Data Fusion Framework of Business Intelligence for Mining Educational Data

Nissreen El Saber , Aya Gamal Mohamed , Khalid A. Eldrandaly

Student academic performance can be affected by social, economic, and educational factors, and many research works have studied these factors at different levels of educational organizations' models. Their importance spans giving professional educational advice to vulnerable students, supporting students' development of special education-related skills, and encouraging students to handle their educational challenges. For educational organizations, dealing with pandemics and other obstacles has proven essential for education sustainability. One approach is to be proactive and use the power of exploring and discovering educational data to predict students' performance and attitudes. Mining educational data can benefit from Business Intelligence (BI) for visualizing, organizing, and extracting insights about students' performance. Educational Data Mining (EDM) is used in this research to predict students' performance, and a novel data fusion framework is introduced for Business Intelligence using EDM. This study aims to show which techniques predict students' performance and which methods are most effective for each. The proposed framework exploits business intelligence concepts and tools to highlight metrics, providing better statistical and analytical understanding.


Doi: https://doi.org/10.54216/FPA.130108

Vol. 13 Issue. 1 PP. 103-116, (2023)

Network Intrusion Detection System using Convolution Recurrent Neural Networks and NSL-KDD Dataset

Manjunath H. , Saravana Kumar

The increase in network activity from transferring information online opens the door to network breaches, in which intruders can easily obtain the most important information or data. The growth of online services, with much governmental data moving over the internet without security, has made data vulnerable; attackers can easily find the data and misuse it. A Network Intrusion Detection System (NIDS) allows online data transfer to occur safely and transactions to be secured. Cloud usage creates a huge amount of network traffic, and the number of attacks increases day by day. To prevent vulnerabilities, attack types in the network (social, environmental, cognitive, and military) are classified using a Convolutional Recurrent Neural Network (CRNN) model. Ensemble learning methods from machine learning are used to detect and block malicious packets in the network; our model detects unauthorized users intruding into a network and alerts the organization. A network intrusion detection system can be used when a typical firewall is unable to effectively stop certain sorts of attacks on computer systems and network communications. First, unauthorized packets are classified using a machine learning algorithm; neural networks are then used to detect such attacks. On the Network Security Laboratory - Knowledge Discovery in Databases (NSL-KDD) dataset, CNN and RNN algorithms are applied along with a few well-known ensemble techniques, such as boosting and pasting. With this CRNN approach, we demonstrate that neural networks are more effective than other methods at detecting attacks.


Doi: https://doi.org/10.54216/FPA.130109

Vol. 13 Issue. 1 PP. 117-125, (2023)

Anomaly Detection in IoT Networks: Machine Learning Approaches for Intrusion Detection

Reem Atassi

The proliferation of Internet of Things (IoT) devices has ushered in an era of unprecedented connectivity and innovation. However, this interconnected landscape also presents unique security challenges, necessitating robust intrusion detection mechanisms. In this research, we present a comprehensive study of anomaly detection in IoT networks, leveraging advanced machine learning techniques. Specifically, we employ the Gated Recurrent Unit (GRU) architecture as the backbone network to capture temporal dependencies within IoT traffic. Furthermore, our approach embraces hierarchical federated training to ensure scalability and privacy preservation across distributed IoT devices. Our experimental design encompasses public IoT datasets, facilitating rigorous evaluation of the model's performance and adaptability. Results indicate that our GRU-based model excels in identifying a spectrum of attacks, from Distributed Denial of Service (DDoS) incursions to SQL injection attempts. Visualizations of learning curves, Receiver Operating Characteristic (ROC) curves, and confusion matrices offer insights into the model's learning process, discriminatory power, and classification performance. Our findings contribute to the evolving landscape of IoT security, offering a roadmap for enhancing the resilience of interconnected systems in an era of increasing connectivity.
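The GRU backbone described above can be sketched as a single recurrence in numpy. The weights here are random and untrained, and the tiny dimensions and toy sequence are illustrative assumptions rather than the paper's IoT traffic model:

```python
import numpy as np

def gru_step(x, h, Wz, Uz, Wr, Ur, Wh, Uh):
    """One GRU step: update gate z, reset gate r, candidate state."""
    sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))
    z = sigmoid(Wz @ x + Uz @ h)              # update gate
    r = sigmoid(Wr @ x + Ur @ h)              # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h))  # candidate hidden state
    return (1 - z) * h + z * h_tilde          # gated blend of old and new

rng = np.random.default_rng(0)
d_in, d_h = 4, 3                              # tiny sizes for illustration
params = [rng.normal(0, 0.1, (d_h, d)) for d in (d_in, d_h) * 3]
h = np.zeros(d_h)
for step in range(5):                         # toy "traffic" sequence
    x = rng.normal(size=d_in)
    h = gru_step(x, h, *params)
print(h.shape)
```

The final hidden state summarizes the sequence's temporal dependencies; in the paper's setting a classifier head over such a state would flag DDoS or SQL-injection traffic.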


Doi: https://doi.org/10.54216/FPA.130110

Vol. 13 Issue. 1 PP. 126-134, (2023)

Test Design Optimisation of Factors and Levels by Covering Double and Triple Mode Combinations Using Orthogonal Array Test Strategies and Random Forest Algorithm

S. Malathi , M. Sangeetha , Faiyaz Ahmad , Saravanan M. S. , T. Kalachelvi

Testing is the process of trying to find every believable fault or weakness in a project. In today's world, software products and components play a vital part in our lives. Software testing is a world of its own, with a life cycle consisting of the following stages: requirements, test plan, test design, test execution, and defect reporting/tracking. The core of software testing lies in writing test cases based on specifications, and software testers play a vital role in writing them during the test design phase of the software testing life cycle. Research has shown that writing test cases is the most time-consuming and challenging activity among the testing life cycle phases, so it is crucial to sequence and write optimized test cases that raise the rate of fault identification as early as possible during test design. Various proven test design techniques focus on optimizing test cases at different test stages. Our key focus in this paper is to identify optimized test cases, minimizing the actual number of test cases with minimal effort, using Orthogonal Array Test Strategy (OATS) techniques covering double-mode and triple-mode test combinations together with the Random Forest algorithm.
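The idea behind double-mode (pairwise) coverage can be sketched with a simple greedy cover: repeatedly pick the test that covers the most still-uncovered factor-level pairs. This is an illustration of OATS-style pair coverage, not the paper's algorithm, and the factor names are invented:

```python
from itertools import combinations, product

def pairwise_suite(factors):
    """Greedy double-mode test selection: choose tests from the full
    cartesian product until every factor-level pair is covered."""
    all_tests = list(product(*factors))
    pairs_of = lambda t: {((i, t[i]), (j, t[j]))
                          for i, j in combinations(range(len(factors)), 2)}
    uncovered = set().union(*(pairs_of(t) for t in all_tests))
    suite = []
    while uncovered:
        best = max(all_tests, key=lambda t: len(pairs_of(t) & uncovered))
        suite.append(best)
        uncovered -= pairs_of(best)
    return suite

# Three factors with 2 levels each: 8 exhaustive tests, fewer pairwise.
factors = [("on", "off"), ("fast", "slow"), ("A", "B")]
suite = pairwise_suite(factors)
print(len(suite))  # covers all pairs with fewer than 8 tests
```

Triple-mode coverage follows the same pattern with 3-element combinations, at the cost of a larger suite; orthogonal arrays give the same guarantee with a precomputed, provably minimal layout.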


Doi: https://doi.org/10.54216/FPA.130111

Vol. 13 Issue. 1 PP. 135-146, (2023)

A Study on Artificial Intelligence-based Security Techniques for IoT-based Systems

Mustafa Al-Tahee , Marwa s. mahdi hussin , Mohammed Jameel Alsalhy , Hussein Alaa Diame , Noor Hanoon Haroon , Salem Saleh Bafjaish , Mohammed Nasser Al-Mhiqani

In a recent scenario, the Internet of Things (IoT) enables the integration of disparate home automation systems into a unified network that can be managed from a single device, such as a smartphone. Insecure internet connections and a lack of security standards, however, may leave IoT devices vulnerable to assault, including hacking. Though current designs may address some security concerns inherent to the IoT, most solutions suffer from two significant flaws. First, they address only a single threat at the level of the IoT-edge architecture and cannot be expanded to deal with new, poorly understood threats. Second, their core operations must be trusted and often require additional hardware to implement the advised security measures. This study proposes an AI-based security method for IoT environments, the AI-SM-IoT system: a three-tiered framework that incorporates AI-engine-based security components into every IoT stack communicating with the network's edge, with the AI engines added as a new transmission layer. The concept rests on a perimeter of AI-enabled security components for IoT incident preparedness. The architecture comprises three main modules: cyber-threat hunting, an intelligent firewall for web applications, and cyber-threat intelligence. Based on the idea of the cyber kill chain, these modules detect and identify the stage of an attack's life cycle. The paper describes each security service in the suggested framework and demonstrates its usefulness in applications facing various risks; a distinct layer of AI-SM-IoT services delivers the AI safety modules that address each risk at the edge layer. In contrast to earlier designs, the architecture is independent of the project's critical regions and has comparatively low latency, since it offers security as a service rather than embedding it at the network edge of the IoT design. The proposed method was evaluated in terms of the IoT platform's administration score, throughput, security, and working time.


Doi: https://doi.org/10.54216/FPA.130112

Vol. 13 Issue. 1 PP. 147-161, ()

Optimizing Resource Management in Physical Education through Intelligent 5G-Enabled Robotic Systems

Maryam Ghassan Majeed , Waleed Hameed , Noor Hanoon Haroon , Sahar R. Abdul Kadeem , Hayder Mahmood Salman , Seifedine Kadry

Resource Management in Physical Education (RMPE) describes the management of the curriculum, materials, and human resources needed for physical education (PE). With increased participation in sports and physical activity, student performance in PE classes across schools and universities has decreased, and analysis shows it is hard for the available PE educators and managers to establish relationships among all the resources. This study uses a 5G-enabled robotic system for RMPE in which a Big Data Analytics-based Artificial Neural Network method (BDA-ANNA) handles all PE resources. BDA-ANNA can efficiently increase the quality and efficiency of RMPE work, enabling managers to obtain and save appropriate information accurately and quickly, and with the robotic system's assistance the material stock can be maintained. Automated systems with 5G capabilities can give PE instructors complete remote-control access with a 2-millisecond latency. Together, these capabilities let RMPE supervise athletic events and physical activity. The suggested 5G-enabled robotic system for RMPE can manage all the resources effectively and efficiently with a low error rate. A simulation exercise demonstrated the system's and BDA-ANNA's ability to classify and manage resources independently while reducing processing time. The experimental results show a prediction ratio of 95.5%, a learning ratio of 90.5%, an error rate of 92.3%, an efficiency ratio of 96.6%, an accuracy ratio of 92.5%, a performance ratio of 96.7%, and a movement detection ratio of 90.7% compared to other methods.


Doi: https://doi.org/10.54216/FPA.130113

Vol. 13 Issue. 1 PP. 162-174, ()

A Framework Based on "One Belt, All Road" Strategy to Evaluate Regional Industry's Cluster Innovation Capacity

Sajad Ali Zearah , Maryam Ghassan Majeed , Mohammed Brayyich , Nabaa R. Wasmi Zaydan , Aqeel Ali , Marwan Qaid Mohammed , Venkatesan Rajinikanth

Expanding the industrial component through investment in R&D is a crucial objective of the region's current industrial strategy. Significant research and investment opportunities must complement the effectiveness of the region's industrial policy. Few studies have attempted to understand the interactions between inter-organizational clusters and the capacity to sustain those clusters; most studies on innovation capacity focus on the business level. This article suggests using the One Belt All Road (OBAR) strategic framework to assess regional industry's cluster innovation capacity (CIC) and international trade and investment. The cluster innovation capability was developed using a theoretical framework through qualitative textual assessment. As a result, information management, diffusion, and acquisition capacity are the three primary abilities that make up the cluster innovation capacity. The degree of investment effort in the region's industrial sectors and the factors influencing corporate innovation have been found to be correlated. The research highlighted obstacles and potential remedies for encouraging creative thinking and financial backing among regional manufacturers. Compared to the current system, the suggested system (OBAR) achieves superior results in accuracy (87.6%), system dependability (94.8%), the F-1 measure (87.1%), and error rate (8.1%).


Doi: https://doi.org/10.54216/FPA.130114

Vol. 13 Issue. 1 PP. 175-188, (2023)

Anticipating Student Engagement in Classroom through IoT-Enabled Intelligent Teaching Model Enhanced by Machine Learning

Raaid Alubady , Tamarah Alaa Diame , Hawraa Sabah , Hasan H. Jameel Mahdi , Munqith Saleem , Korhan Cengiz , Sahar Yassine

Machine learning offers several advantages for physical teaching technology. It is one of the major paths of connected technology and part of a powerful frontier discipline that develops and influences overall education growth. To enhance student connection and assess student involvement in physical education, this work develops the Machine Learning assisted Computerized Physical Teaching Model (MLCPTM). The proposed MLCPTM investigates and addresses contemporary technical physical education to create an ideal theoretical foundation for the growth of technology and current physical activity, and it uses virtual reality (VR) technologies to build a system for correcting physical education activity. The paper covers the theory and categories of machine learning, along with a thorough analysis of modern technological advancements in physical education, and explains the challenges machine learning faces in contemporary sports instructional technologies; athletes can thereby accelerate their grasp of movement techniques and heighten the training effect. Experimental results show that the suggested MLCPTM model outperforms other existing models, with an effective learning ratio of 82.5%, a feedback ratio of 96%, a response ratio of 98.6%, a decision-making ratio of 96.3%, a movement detection ratio of 79.84%, and a precision ratio of 97.8%.


Doi: https://doi.org/10.54216/FPA.130115

Vol. 13 Issue. 1 PP. 189-202, (2023)

Construction of Improved Device-to-Device Communication in 5G Networks based on Deep Learning Techniques

Sajad Ali Zearah , Ahmed R. Hassan , Aqeel Ali , Saad Qasim Abbas , Tamarah Alaa Diame , Ahmed Mollah Khan , Mariok Jojoal

Device-to-Device (D2D) communication promises outstanding data speeds, overall system capacity, and spectrum and energy efficiency without base stations or conventional network infrastructure. These prospective gains sparked extensive D2D research, which exposed substantial challenges that must be overcome before D2D can be used to its fullest extent in 5G networks. This study suggests Deep Learning-based Improved D2D Communication (DLID2DC) in 5G networks to address these issues. Reusing resources between Cellular User Equipment (CUE) and D2D User Equipment (DUE) can increase system capacity without endangering the CUEs; the D2D resource allocation method allows a flexible distribution of available resources across CUEs, and several CUEs can consume the same pool of resources simultaneously. Researchers utilize various deep learning techniques to handle the difficulty of constructing D2D links and addressing their interference, mainly when using millimeter-wave (mmWave) bands, to improve the performance of D2D networks. This research aims to increase system capacity by optimizing resource allocation with the proposed DLID2DC paradigm, which uses deep learning to overcome interference issues and make D2D link building more efficient, especially in mmWave communication. The model uses Convolutional Neural Networks (CNNs) to learn and adapt to complicated D2D communication patterns, improving performance and dependability. The experimental findings show that, compared with conventional approaches, the proposed DLID2DC model improves connectivity with lower end-to-end delay, better energy efficiency and throughput, and efficient convergence time.


Doi: https://doi.org/10.54216/FPA.130116

Vol. 13 Issue. 1 PP. 203-220, (2023)