Fusion: Practice and Applications

Journal DOI: https://doi.org/10.54216/FPA

ISSN (Online): 2692-4048 | ISSN (Print): 2770-0070

A Novel Behavioral Monitoring based Trust Model for enhancing Edge Security using Adaptive Neuro-fuzzy Inference System

D. Jayakumar, K. Santhosh Kumar

The Internet of Things (IoT) is currently under strain from a flood of data. IoT and cloud computing are widely believed to have reached their capacity limits, and loading them with more data beyond this point will only degrade their performance. Edge computing has therefore been introduced to relieve the processing burden on IoT. To meet the security demands of edge computing, we combine blockchain with edge computing for a stronger solution. Accordingly, this paper proposes a novel blockchain model based on artificial neural networks and trust estimation, called the behavioral monitoring trust estimation model. Performance metrics such as accuracy, precision, recall, and F-measure are calculated under normal conditions and under injected attacks, including false data injection, booting attacks, and node capturing. The proposed behavioral monitoring trust classification model is compared with existing classifiers such as Naive Bayes, K-nearest neighbor, Autoencoder, Random Forest, and Support Vector Machine, and shows improved performance. Additional evaluation parameters, including execution time, encryption time, storage cost, computational overhead, energy efficiency, and packet drop probability, are also calculated for the proposed model and compared with the existing blockchain techniques of Bitcoin, Ethereum, Hyperledger, the direct and indirect trust model, and the mutual trust chain based blockchain model. The proposed model achieved an accuracy of 95%, a precision of 90%, a recall of 94%, and an F-measure of 94%, indicating superior performance.
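
The reported classification metrics follow their standard definitions; as a minimal illustration only (not the authors' implementation), they can be computed from predicted and ground-truth labels with scikit-learn, where the label vectors below are hypothetical placeholders:

```python
# Minimal sketch of how accuracy, precision, recall, and F-measure are
# conventionally computed; the label arrays here are hypothetical placeholders.
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

y_true = [1, 0, 1, 1, 0, 1, 0, 1]   # ground truth (1 = malicious behaviour)
y_pred = [1, 0, 1, 0, 0, 1, 0, 1]   # labels produced by the trust classifier

print("Accuracy :", accuracy_score(y_true, y_pred))
print("Precision:", precision_score(y_true, y_pred))
print("Recall   :", recall_score(y_true, y_pred))
print("F-measure:", f1_score(y_true, y_pred))
```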

DOI: https://doi.org/10.54216/FPA.170204

Vol. 17, Issue 2, pp. 38-50 (2025)

Enhancing Cloud Computing Efficiency with Crocodile Optimization Algorithm: A Novel Approach to Distributed Workload and VM Management

Ibrahim A. Ibrahim, Warshine Barry, Narek Badjajian

Cloud computing has established itself as a powerful mechanism for delivering on-demand, scalable, and instant access to computing resources through a service model. Effective load balancing and resource management are of high importance so that the cloud system operates with optimized performance and resource utilization. This paper presents a new strategy for load balancing and virtual machine (VM) management in cloud computing based on the Crocodile Optimization Algorithm (COA). Inspired by crocodile hunting behavior, the COA-based strategy is adopted to balance loads and manage VMs. The approach matches the workload assigned to each VM to its processing power and governs the overall distribution of work, so that tasks are dynamically distributed to VMs, response time is minimized, and overall efficiency of the cloud system is enhanced. In addition, COA-based load balancing incorporates VM management techniques such as migration and scaling to adapt to changing workload conditions. This allows resource allocation to be adjusted dynamically according to current demand, ensuring optimal utilization of computational resources with high performance. The proposed approach was evaluated using simulations in CloudSim, one of the most widely adopted cloud computing simulation tools. The COA effectively divides work among the VMs, which in turn improves response time for user requests and cloud resource utilization. This work therefore represents a distinctive attempt at load balancing and VM management in cloud computing based on the Crocodile Optimization Algorithm, improving cloud efficiency by balancing load distribution, maximizing resource utilization, and lowering response time.
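
The abstract does not spell out the COA update equations; as a hedged sketch of the stated objective only (matching each VM's load to its processing power so response time stays low), the snippet below uses a simple greedy earliest-finish assignment with hypothetical task lengths and VM speeds, not the Crocodile Optimization Algorithm itself:

```python
# Illustrative greedy baseline for the load-balancing objective described above:
# assign each task to the VM that would finish it earliest given its speed (MIPS).
# Task lengths and VM speeds are hypothetical; this is not the COA itself.
tasks = [400, 250, 900, 120, 600, 300]      # task lengths (million instructions)
vm_speed = [1000, 500, 750]                 # VM processing speeds (MIPS)
vm_busy = [0.0] * len(vm_speed)             # accumulated busy time per VM (s)
assignment = []

for length in sorted(tasks, reverse=True):  # longest tasks first
    finish = [vm_busy[i] + length / vm_speed[i] for i in range(len(vm_speed))]
    best = min(range(len(vm_speed)), key=lambda i: finish[i])
    vm_busy[best] = finish[best]
    assignment.append((length, best))

print("assignment (task length -> VM):", assignment)
print("makespan (s):", max(vm_busy))        # response-time proxy to be minimized
```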

DOI: https://doi.org/10.54216/FPA.170205

Vol. 17, Issue 2, pp. 51-61 (2025)

Securing Drug Traceability: Blockchain-Enhanced Privacy Protection and Anti-Counterfeit Measures in Pharmaceutical Supply Chains

Abdulrahman Mohammed Alshehri, Thamer Alhussain

The pharmaceutical industry faces numerous challenges in managing medications, ensuring their authenticity, and safeguarding sensitive information within the supply chain. Protecting the integrity of drug manufacturing processes, transaction records, and patient data against unauthorized access or tampering is crucial; any breach in security could undermine trust throughout the entire supply chain. To mitigate these concerns, a multi-layered approach is employed. First, data encryption using QR codes with Attribute-Based Encryption provides a foundation for securing information. This is followed by a strategy that combines the Red Panda Optimization (RPO) algorithm and the Group Teaching Optimization Algorithm (GTOA) to optimize encryption key selection. Finally, Multi-Party Computation (MPC) protocols together with Shamir's Secret Sharing strengthen the overall security measures. These procedures ensure that only authorized individuals have access to the information needed to identify counterfeit products, while secure MPC verification maintains confidentiality without exposing sensitive details.
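
Shamir's Secret Sharing is one of the named building blocks; the sketch below is a minimal, self-contained (k, n) threshold scheme over a prime field for illustration only, with a hypothetical prime and parameters rather than those used in the paper:

```python
# Minimal (k, n) Shamir secret-sharing sketch over a prime field.
# Illustrative only; the prime and parameters are hypothetical.
import random

P = 2**127 - 1  # a Mersenne prime large enough for small secrets

def split(secret, k, n):
    """Split `secret` into n shares, any k of which reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = split(123456789, k=3, n=5)
print(reconstruct(shares[:3]))  # any 3 of the 5 shares recover 123456789
```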

DOI: https://doi.org/10.54216/FPA.170206

Vol. 17, Issue 2, pp. 62-78 (2025)

Machine Learning and Deep Learning Approaches for Detecting DDoS Attacks in Cloud Environments

Muhammad Asif Khan, Mohd Faizal Ab Razak, Zafril Rizal Bin M Azmi, Ahmad Firdaus, Abdul Hafeez Nuhu, Syed Shuja Hussain

Distributed Denial of Service (DDoS) attacks pose a significant threat to cloud computing environments, necessitating advanced detection methods. This review examines the application of Machine Learning (ML) and Deep Learning (DL) techniques for DDoS detection in cloud settings, focusing on research from 2019 to 2024. It evaluates the effectiveness of various ML and DL approaches, including traditional algorithms, ensemble methods, and advanced neural network architectures, while critically analyzing commonly used datasets for their relevance and limitations in cloud-specific scenarios. Despite improvements in detection accuracy and efficiency, challenges such as outdated datasets, scalability issues, and the need for real-time adaptive learning persist. Future research should focus on developing cloud-specific datasets, advanced feature engineering, explainable AI, and cross-layer detection approaches, with potential exploration of emerging technologies like quantum machine learning.

DOI: https://doi.org/10.54216/FPA.170207

Vol. 17, Issue 2, pp. 79-97 (2025)

Optimization of Federated Learning Communication Costs through the Implementation of Cheetah Optimization Algorithm

Khalid Alleihaibi

Federated Learning (FL) has recently gained widespread interest owing to its emphasis on user data privacy. As a privacy-preserving distributed learning algorithm, FL enables multiple parties to build machine learning (ML) models without exposing sensitive information. The distributed nature of FL can lead to drawn-out training and constrained communication, which necessitates optimizing the client-server communication cost. Two hyperparameters have a considerable effect on FL performance: the number of local training passes and the ratio of selected clients. Because training preferences differ across applications, it is difficult for FL practitioners to choose these hyperparameters manually. Although FL resolves the problem of collaboration without compromising privacy, it incurs a transmission overhead due to repeated model updates during training. Various researchers have introduced transmission-efficient FL techniques to address these issues, but adequate solutions are still lacking for cases where parties hold different data features. Therefore, this study develops an Optimization of Federated Learning Communication Costs through the Implementation of the Cheetah Optimization Algorithm (OFLCC-COA) technique. The OFLCC-COA technique is mainly applied to optimize the communication process in FL so as to minimize data transmission cost while guaranteeing improved model accuracy. It achieves robust performance in unstable network environments by transmitting score values instead of large weight tensors, and it improves communication efficiency by transforming the form of the data that clients send to servers. The performance of the OFLCC-COA model is analyzed using different performance measures, and the simulation outcomes indicate that it obtains superior performance over other methods across distinct metrics.
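
The abstract states that clients transmit compact score values rather than full weight tensors; as one hedged illustration of that general communication-reduction idea (top-k sparsification, not necessarily the OFLCC-COA scheme), a client could send only the largest-magnitude entries of its local update:

```python
# Hedged illustration of shrinking a client-to-server update by sending only
# the k largest-magnitude entries (indices + values) instead of the full vector.
# This shows the general communication-reduction idea, not the OFLCC-COA scheme.
import numpy as np

def sparsify_update(update, k):
    idx = np.argsort(np.abs(update))[-k:]          # indices of the k largest entries
    return idx, update[idx]                        # what the client transmits

def densify_update(idx, values, size):
    full = np.zeros(size)                          # server-side reconstruction
    full[idx] = values
    return full

rng = np.random.default_rng(0)
update = rng.normal(size=10_000)                   # hypothetical local model delta
idx, vals = sparsify_update(update, k=100)
restored = densify_update(idx, vals, update.size)

print("transmitted floats:", vals.size + idx.size, "of", update.size)
```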

DOI: https://doi.org/10.54216/FPA.170208

Vol. 17, Issue 2, pp. 98-110 (2025)

Enhancing Urban Connectivity: Dynamic Implementation and Integration of Multi-IRS Systems in Smart Cities

Israa Ali Al-Neami, Alza A. Mahmod, Alaa H Ahmed, Sergey Drominko, Erina Kovachiskaya

This paper examines the accelerated deployment of Multi-Intelligent Reflecting Surfaces (Multi-IRS) as a means of advancing urban connectivity. It describes how IRS technology can be applied to control electromagnetic waves so that signal transmission and coverage are boosted across densely populated urban areas, and it outlines flexible strategies for integrating Multi-IRS systems with both existing and future urban infrastructure to deliver seamless connectivity. Multi-IRS integrated with foundational smart-city technologies such as IoT, 5G networks, and AI represents a leap toward achieving the data flow and connectivity essential to the modern urban ecosystem. Detailed case studies demonstrate how multi-IRS systems can break traditional connectivity barriers, offering higher bandwidth, lower latency, and more effective communication. This development marks a significant step toward the smart-city vision, in which data flows without barriers between diverse urban systems and services. Lastly, the paper concludes with a forward-looking view of urban connectivity driven by continued innovation and research on multi-IRS applications within the smart-city landscape. The study argues that dynamic IRS implementation plays an indispensable role in future smart-city connectivity solutions, making the case for sustained collaborative efforts in research, policy formulation, and technological innovation to realize the full potential of IRS technology in addressing the connectivity challenges of contemporary urban settings. A performance comparison between a sequential beam search and the proposed model across varying Rician factors shows the proposed model's superior channel-gain progression, from -57 dB at a Rician factor of 5 dB to -48 dB at 30 dB, outperforming the sequential method in environments with strong direct signals.
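
The reported channel-gain figures relate to IRS phase control under Rician fading; the sketch below is the textbook single-IRS phase-alignment calculation with hypothetical parameters, shown only to illustrate how coherent reflection adds to a Rician direct link, and is not the paper's multi-IRS system model:

```python
# Textbook single-user IRS sketch: choose each reflecting element's phase so the
# cascaded path adds coherently with the direct path, then report channel gain.
# Not the paper's multi-IRS model; all parameters are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
N = 64                                  # reflecting elements on one IRS
kappa = 10.0                            # Rician factor of the direct link (linear)

# Direct link: Rician = deterministic LoS part + scattered NLoS part.
los = np.exp(1j * rng.uniform(0, 2 * np.pi))
nlos = (rng.normal() + 1j * rng.normal()) / np.sqrt(2)
h_d = np.sqrt(kappa / (kappa + 1)) * los + np.sqrt(1 / (kappa + 1)) * nlos
h_d *= np.sqrt(1e-6)                    # direct-path power scaling (hypothetical)

# Cascaded BS->IRS and IRS->user links (Rayleigh, weaker than the direct path).
g = (rng.normal(size=N) + 1j * rng.normal(size=N)) * np.sqrt(1e-8 / 2)
r = (rng.normal(size=N) + 1j * rng.normal(size=N)) * np.sqrt(1e-8 / 2)

# Optimal phases align every cascaded term with the direct path's phase.
theta = np.angle(h_d) - np.angle(g * r)
h_total = h_d + np.sum(g * r * np.exp(1j * theta))

print("channel gain: %.1f dB" % (20 * np.log10(np.abs(h_total))))
```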

DOI: https://doi.org/10.54216/FPA.170209

Vol. 17, Issue 2, pp. 111-122 (2025)

A Comprehensive Survey on AlexNet improvements and fusion techniques

Bahaa S. Rabi, Ayman S. Selmy, Wael A. Mohamed

Machine- and deep-learning techniques have been used in numerous real-world applications. One of the best-known deep-learning methodologies is the deep convolutional neural network, and AlexNet is a widely recognized deep convolutional neural network architecture. AlexNet has contributed significantly to solving classification problems in many deep-learning applications, so it is necessary to keep improving the model to enhance its performance. This survey formally defines the AlexNet architecture, presents current improvement approaches, and reviews applications based on AlexNet improvements. This work also surveys the fusion of AlexNet with different machine-learning techniques in recent biomedical research. Across roughly 11 surveyed papers covering both improvement and fusion techniques for AlexNet, fusion proved superior, reaching 99.72% compared with 99.7% for the improved variants. The conclusion and discussion section compares the improved and fusion techniques of AlexNet and proposes future work on AlexNet development.
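
As a hedged example of the kind of AlexNet fusion the survey covers, the sketch below uses AlexNet as a fixed feature extractor and fuses it with a classical SVM classifier; the random images, labels, and hyperparameters are placeholders, not any surveyed paper's setup:

```python
# Hedged sketch of a simple AlexNet "fusion" pipeline: use AlexNet as a fixed
# feature extractor and fuse it with a classical ML classifier (here an SVM).
# Data and labels are random placeholders; this is not any surveyed paper's code.
import torch
import torchvision.models as models
from sklearn.svm import SVC

alexnet = models.alexnet(weights=None)      # weights="IMAGENET1K_V1" for pretrained
alexnet.eval()

def extract_features(images):
    """Run images through AlexNet's convolutional stack and flatten."""
    with torch.no_grad():
        x = alexnet.features(images)
        x = alexnet.avgpool(x)
        return torch.flatten(x, 1).numpy()   # shape: (batch, 9216)

# Placeholder "dataset": 20 random 224x224 RGB images with binary labels.
images = torch.randn(20, 3, 224, 224)
labels = [i % 2 for i in range(20)]

features = extract_features(images)
clf = SVC(kernel="rbf").fit(features, labels)
print("training accuracy:", clf.score(features, labels))
```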

DOI: https://doi.org/10.54216/FPA.170210

Vol. 17, Issue 2, pp. 123-146 (2025)

Enhancing Object Detection and Classification Using White Shark Optimization with Deep Learning on Remote Sensing Images

Reda Salama

Remote sensing (RS) object detection is widely applied in civilian and military fields. An important role of remote sensing is to identify objects such as planes, ships, harbours, and airports, and then obtain their position information and object classes. RS images are particularly valuable for observing densely arranged and directional objects, such as ships and cars parked in harbours and parking areas. The object detection (OD) process involves object localization and classification. Owing to their wide coverage and long shooting distance, remote sensing images (RSIs) contain hundreds of small objects and dense scenes. Deep learning (DL), in particular convolutional neural networks (CNNs), has revolutionized OD in many fields; CNNs automatically learn hierarchical representations of data, which makes them well suited for feature extraction. Hence, this study proposes a new white shark optimizer with DL-based object detection and classification on RSI (WSODL-ODCRSI) method. The purpose of the WSODL-ODCRSI model is to detect and classify objects in RSIs. To accomplish this, the WSODL-ODCRSI model uses a modified single-shot multi-box detector (MSSD) for the OD process. The next stage is object classification, which is performed with an Elman Neural Network (ENN). The WSO algorithm is exploited as a parameter-tuning model to improve the object classification results of the ENN approach. The simulation study of the WSODL-ODCRSI algorithm on a benchmark dataset underlines the promising performance of the model on the object classification process.
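
The classification stage relies on an Elman Neural Network; the snippet below is only a minimal NumPy forward pass of an Elman (simple recurrent) network with hypothetical dimensions and weights, and it omits the MSSD detector and WSO tuning:

```python
# Minimal Elman network (simple recurrent network) forward pass in NumPy, shown
# only to illustrate the classifier family named in the abstract; the weights,
# dimensions, and feature sequence are hypothetical, and WSO tuning is omitted.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 8, 16, 4            # feature size, hidden units, classes

W_xh = rng.normal(scale=0.1, size=(n_hidden, n_in))
W_hh = rng.normal(scale=0.1, size=(n_hidden, n_hidden))   # context (Elman) feedback
W_hy = rng.normal(scale=0.1, size=(n_out, n_hidden))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def elman_forward(sequence):
    h = np.zeros(n_hidden)                  # context units start at zero
    for x in sequence:                      # hidden state feeds back each step
        h = np.tanh(W_xh @ x + W_hh @ h)
    return softmax(W_hy @ h)                # class probabilities for the object

features = rng.normal(size=(5, n_in))       # hypothetical per-region feature sequence
print("class probabilities:", elman_forward(features))
```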

DOI: https://doi.org/10.54216/FPA.170211

Vol. 17, Issue 2, pp. 147-160 (2025)

Analysis of Objective Functions for Ribonucleic Acid Multiple Sequence Alignment Fusion Based on Harmony Search Algorithm

Mubarak Saif, Rosni Abdullah, Mohd. Adib Hj. Omar, Abdulghani Ali Ahmed, Nurul Aswa Omar, Salama A. Mostafa

Four kinds of smaller molecules, the ribonucleotide bases adenine (A), cytosine (C), guanine (G), and uracil (U), combine to form the linear molecule known as ribonucleic acid (RNA). Aligning multiple sequences is a fundamental task in bioinformatics. This paper studies the correlation of different objective functions applied to RNA multiple sequence alignment (MSA) fusion generated by the Harmony Search-based method. Experiments are performed on the BRAliBase dataset containing different numbers of test groups. The correlation between alignment score and alignment quality is compared for COFFEE, sum-of-pairs (SP), weighted sum-of-pairs (WSP), NorMD, and MstatX. The results indicate that the COFFEE and SP objective functions achieved correlation coefficients (R²) of 0.96 and 0.92, respectively, against the reference alignments, demonstrating their effectiveness in producing high-quality alignments. In addition, the sum-of-pairs function takes less time than the COFFEE objective function for the same number of iterations on the same RNA benchmark.
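
As a concrete illustration of one of the compared objective functions, the sketch below computes a sum-of-pairs (SP) score for a toy RNA alignment; the match, mismatch, and gap values are illustrative, not the BRAliBase or paper settings:

```python
# Minimal sum-of-pairs (SP) objective for a toy RNA multiple alignment:
# every column contributes the summed pairwise score over all sequence pairs.
# The scoring values (match/mismatch/gap) are illustrative placeholders.
from itertools import combinations

MATCH, MISMATCH, GAP = 2, -1, -2

def pair_score(a, b):
    if a == "-" and b == "-":
        return 0
    if a == "-" or b == "-":
        return GAP
    return MATCH if a == b else MISMATCH

def sum_of_pairs(alignment):
    total = 0
    for column in zip(*alignment):                     # walk the alignment column-wise
        total += sum(pair_score(a, b) for a, b in combinations(column, 2))
    return total

alignment = ["GAUC-ACGU",
             "GAUCAACGU",
             "GA-CAAC-U"]
print("SP score:", sum_of_pairs(alignment))
```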

DOI: https://doi.org/10.54216/FPA.170201

Vol. 17, Issue 2, pp. 1-10 (2025)

The Detection of Glaucoma in Fundus Images Based on Convolutional Neural Network

Ali Yakoob Al-Sultan

Glaucoma is a common disease affecting the human retina, primarily caused by elevated intraocular pressure. Early intervention is crucial to prevent damage to the affected organ, which could lead to its dysfunction. This paper focuses on enhancing the diagnostic accuracy of a system that determines whether a patient is at risk of developing glaucoma. A novel convolutional neural network (CNN) is designed specifically for the detection of glaucoma in fundus images; the architecture is tailored to the unique characteristics of fundus imagery, which improves detection accuracy. We also compiled a large and diverse dataset of fundus images, crucial for training and validating the CNN model, with detailed annotations that ensure robust training. In addition, we implemented image preprocessing methods, including noise reduction and contrast enhancement, that significantly improve the quality of the input data. The system operates in three stages. First, it preprocesses the image by cropping, enhancing, and resizing it to a consistent 256×256 pixels. Next, it applies advanced feature extraction to analyze key features of the optic disc and optic cup in the retinal images. Finally, a softmax layer classifies the images, distinguishing glaucomatous eyes from normal samples. The model's performance was evaluated using accuracy, sensitivity, specificity, and the area under the curve: sensitivity measures the ability to correctly identify positive cases, specificity assesses accuracy on negative cases, and the area under the curve indicates overall effectiveness across thresholds. The results reveal a high accuracy rate in glaucoma classification, reaching 99%.
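
A minimal sketch of the three-stage pipeline described above (preprocessed 256×256 input, CNN feature extraction, softmax classification) is given below; the layer sizes are placeholders and do not reproduce the paper's architecture:

```python
# Hedged sketch of the three-stage pipeline described above: resize to 256x256,
# extract features with a small CNN, and classify with softmax. The layer sizes
# are illustrative placeholders, not the paper's architecture.
import torch
import torch.nn as nn

class GlaucomaCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(           # feature extraction stage
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 128x128
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 64x64
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
        )
        self.classifier = nn.Linear(64 * 4 * 4, 2)   # glaucoma vs. normal

    def forward(self, x):
        x = self.features(x)
        x = torch.flatten(x, 1)
        return torch.softmax(self.classifier(x), dim=1)   # class probabilities

fundus_batch = torch.rand(4, 3, 256, 256)        # preprocessed, resized images
print(GlaucomaCNN()(fundus_batch))
```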

DOI: https://doi.org/10.54216/FPA.170202

Vol. 17, Issue 2, pp. 11-23 (2025)

Computer Aided Brain Tumor Diagnosis using Coati Optimization Algorithm with Explainable Artificial Intelligence Approach

Wajdi Alghamdi

Brain tumors (BTs) are a difficult and dangerous medical condition, and accurate, early analysis of these tumors is crucial for suitable treatment. Explainability plays a vital role in clinical image diagnosis, supporting medical staff in properly understanding analyses produced by deep models. Artificial intelligence (AI), in particular deep neural networks (DNNs), has attained remarkable results in clinical image analysis across many applications. However, the lack of explainability of deep neural approaches is considered a major restriction on deploying them in medical practice. Explainable AI (XAI) is a vital component in this context, as it helps medical staff and patients understand the AI's decision-making, enhancing trust and transparency. This leads to better patient care while ensuring that medical staff can make informed decisions based on AI-driven insights. Therefore, this study develops a novel Computer-Aided Brain Tumor Diagnosis using Coati Optimization Algorithm with an Explainable Artificial Intelligence (CABTD-COAXAI) approach. The purpose of the CABTD-COAXAI technique is to exploit XAI and hyperparameter-tuned deep learning (DL) approaches for automated BT analysis. To accomplish this, the CABTD-COAXAI technique first applies Gaussian filtering (GF) based noise removal. It then uses EfficientNetB7 for feature extraction, with the hyperparameters of EfficientNetB7 tuned by the COA. Classification of BTs is performed with a convolutional autoencoder (CAE). Finally, the CABTD-COAXAI system integrates the XAI method LIME to interpret and explain the black-box model for automated BT diagnosis. The CABTD-COAXAI technique has been tested on a benchmark BT database, and the extensive results show that it achieves superior performance over other models in terms of different measures.
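
The first step of the pipeline is Gaussian-filtering-based noise removal; the snippet below illustrates that step on a synthetic image with a placeholder sigma, and is not the paper's preprocessing code:

```python
# Minimal illustration of the Gaussian-filtering (GF) denoising step named above,
# applied to a synthetic noisy image; sigma and the image are placeholders.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
x, y = np.meshgrid(np.linspace(0, 1, 128), np.linspace(0, 1, 128))
clean = np.sin(4 * x) * np.cos(4 * y)              # smooth stand-in for an MRI slice
noisy = clean + rng.normal(scale=0.1, size=clean.shape)

denoised = gaussian_filter(noisy, sigma=1.5)       # GF-based noise removal step
print("RMSE before:", np.sqrt(np.mean((noisy - clean) ** 2)).round(3))
print("RMSE after :", np.sqrt(np.mean((denoised - clean) ** 2)).round(3))
```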

DOI: https://doi.org/10.54216/FPA.170203

Vol. 17, Issue 2, pp. 24-37 (2025)

Elevating Diagnostic Accuracy: Advanced GAN-Enhanced High-Resolution Medical Imaging for Superior Disease Detection

Vathana D., Babu S.

Advanced medical imaging has become crucial for the early identification of diseases because it reveals important structural features of the human body. However, such high-resolution images are often impossible to obtain in real-life situations due to limits in imaging and processing equipment, environmental conditions, and human factors. This work proposes Med-GAN, an Enhanced Super-Resolution Generative Adversarial Network tuned for medical image enhancement. The Med-GAN generator learns high-resolution representations from low-resolution images via advanced feature extraction methods. Deconvolution with multi-scale fusion recovers better high-resolution representations from multiple parallel streams of lower resolution than traditional bilinear interpolation. Evaluated on two publicly available COVID-19 CT datasets and one private medical image dataset, the proposed Med-GAN outperforms existing techniques in performance comparisons. In particular, PSNR rises from 24.103 dB on the "BRaTS (FLAIR)" dataset for the initial approach to 25.496 dB for the proposed method, while SSIM increases from 0.782 to 0.812. The proposed Med-GAN is therefore a realistic means of improving the quality of medical images and thereby contributes to better disease diagnostics.
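
The two reported image-quality metrics, PSNR and SSIM, can be computed with scikit-image as sketched below on a synthetic image pair; this only illustrates the metrics, not the Med-GAN model or its data:

```python
# Hedged illustration of the two reported image-quality metrics, PSNR and SSIM,
# computed with scikit-image on a synthetic pair of images (not the paper's data).
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(0)
reference = rng.random((128, 128))                       # stand-in ground-truth image
reconstructed = np.clip(reference + rng.normal(scale=0.05, size=reference.shape), 0, 1)

print("PSNR: %.3f dB" % peak_signal_noise_ratio(reference, reconstructed, data_range=1.0))
print("SSIM: %.3f" % structural_similarity(reference, reconstructed, data_range=1.0))
```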

DOI: https://doi.org/10.54216/FPA.170214

Vol. 17, Issue 2, pp. 186-196 (2025)

A Comparative Analysis of Feature Extraction Techniques for Fake Reviews Detection

Zahraa Fadhel, Hussien Attia, Yossra Hussain Ali

The current Internet era is characterized by the widespread circulation of ideas and viewpoints among users across many social media platforms, such as microblogging sites, personal blogs, and review sites. Detecting fake reviews has become a widespread problem on digital platforms, posing a major challenge for both consumers and businesses. Given the ever-increasing number of online reviews, it is no longer feasible to identify fraudulent reviews manually, and artificial intelligence (AI) is essential for addressing the problem. Feature extraction is a crucial stage in detecting fake reviews, and effective feature engineering techniques can significantly improve the accuracy of opinion extraction. This paper compares five feature extraction methods for opinion classification on Twitter airline reviews and Borderland game reviews. FastText with an XGBoost classifier outperformed all other techniques, achieving 94.10% accuracy on the airline dataset and 100% accuracy on the Borderland game reviews.
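
A hedged sketch of the best-performing combination, FastText embeddings fed to an XGBoost classifier, is shown below; the toy reviews and all hyperparameters are placeholders rather than the paper's setup:

```python
# Hedged sketch of the best-performing combination named above: FastText word
# vectors averaged per review, fed to an XGBoost classifier. The tiny toy reviews
# and all hyperparameters are placeholders, not the paper's setup.
import numpy as np
from gensim.models import FastText
from xgboost import XGBClassifier

reviews = [("great flight friendly crew", 1),
           ("delayed again terrible service", 0),
           ("smooth boarding comfortable seats", 1),
           ("lost my luggage never again", 0)]
tokens = [text.split() for text, _ in reviews]
labels = [label for _, label in reviews]

ft = FastText(sentences=tokens, vector_size=32, window=3, min_count=1, epochs=20)

def embed(words):                      # average word vectors -> one review vector
    return np.mean([ft.wv[w] for w in words], axis=0)

X = np.vstack([embed(t) for t in tokens])
clf = XGBClassifier(n_estimators=50, max_depth=3).fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```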

DOI: https://doi.org/10.54216/FPA.170212

Vol. 17, Issue 2, pp. 161-172 (2025)

Advancing Early Cardiovascular Disease Prediction Model using Improved Beluga Whale Optimization with Ensemble Learning via ECG Signal Analytics

Hassan A. Alterazi

Cardiovascular disease (CVD) encompasses conditions affecting the blood vessels and heart, such as coronary artery disease, stroke, and heart failure, and is a major issue in society today. Early recognition is vital for timely intervention and improved patient outcomes. Among non-invasive methods, the electrocardiogram (ECG) is the most effective approach for identifying cardiac defects; however, ECG analysis requires an experienced, highly knowledgeable specialist and is a time-consuming task. Developing a new technique to identify the disease at an early stage increases the quality and efficacy of medical care. State-of-the-art technologies such as machine learning (ML) and artificial intelligence (AI) are increasingly being used to improve the efficacy and accuracy of CVD recognition, permitting faster and more precise analysis and ultimately contributing to better management and prevention strategies for cardiovascular health. This paper designs an Early Cardiovascular Disease Prediction using an Improved Beluga Whale Optimizer with Ensemble Learning (ECVDP-IBWOEL) approach via ECG signal analytics. The main intention of the ECVDP-IBWOEL system is to forecast the presence of CVD at an early stage using ECG signals. In the ECVDP-IBWOEL method, a data preprocessing phase is first implemented to convert the input data into a suitable format. The ECVDP-IBWOEL technique then follows an ensemble learning (EL) process for CVD detection comprising three models: long short-term memory (LSTM), deep belief networks (DBNs), and stacked autoencoders (SAEs). Finally, an IBWO-based hyperparameter tuning process boosts the classification results of the ensemble models. To certify the enhanced results of the ECVDP-IBWOEL system, an extensive experimental study was conducted; the outcomes show that the ECVDP-IBWOEL system delivers promising performance in the CVD prediction process.
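
The ensemble step combines the outputs of three base models; the snippet below only illustrates weighted probability averaging with hypothetical per-model outputs, and leaves out the LSTM/DBN/SAE models themselves and the IBWO tuning:

```python
# Hedged sketch of the ensemble-combination idea only: average the class
# probabilities produced by three base models (stand-in arrays for the
# LSTM, DBN, and SAE outputs); the IBWO tuning step is not shown.
import numpy as np

# Hypothetical per-model probabilities of CVD for 4 ECG recordings.
p_lstm = np.array([0.91, 0.20, 0.65, 0.05])
p_dbn  = np.array([0.85, 0.35, 0.55, 0.10])
p_sae  = np.array([0.88, 0.25, 0.70, 0.15])

weights = np.array([0.4, 0.3, 0.3])               # could be tuned (e.g., by IBWO)
p_ensemble = weights @ np.vstack([p_lstm, p_dbn, p_sae])

prediction = (p_ensemble >= 0.5).astype(int)      # 1 = CVD present
print("ensemble probabilities:", p_ensemble)
print("predictions:", prediction)
```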

DOI: https://doi.org/10.54216/FPA.170213

Vol. 17, Issue 2, pp. 173-185 (2025)