Unmanned aerial vehicles (UAVs) and UAV swarms have recently shown themselves capable of providing dependable and reasonably priced solutions for a variety of real-world problems. UAVs provide a wide range of services due to their autonomy, adaptability, mobility, and communications interoperability. Although UAVs are frequently used to facilitate ground communications, data exchanges inside those networks are susceptible to security threats because radio and Wi-Fi signals can easily be hacked. There are, however, many ways to stop such cyberattacks. Blockchain technology, which has lately gained prominence, is one potential method to enhance user privacy, data security, and authentication, especially in peer-to-peer UAV networks. Using the benefits of blockchain technology, several entities can communicate in a decentralized manner. This paper provides a thorough overview of privacy and security integration in blockchain-assisted swarm and UAV networks, together with their supporting technologies. To this end, this work is compared with earlier research to find effective solutions, and blockchain technology is integrated to improve the capacity of swarm UAV networks and their communication to move, manage, and exchange data. We conclude by discussing open research issues, the limitations of current UAV standards, and possible future research paths. This comprehensive review is a valuable tool for studying and analyzing a substantial number of recent reviews and research papers in order to overcome obstacles and find appropriate solutions for integrating UAV swarms with blockchain technology.
DOI: https://doi.org/10.54216/FPA.180101
Vol. 18 Issue. 1 PP. 01-23, (2025)
The lack of practical teaching tools, such as a robotic arm, hinders students' understanding of complex concepts in robotics courses, where hands-on experience is essential for effective learning. This study introduced a 6DOF Robotic Arm as a teaching aid to address this issue, evaluating its impact through an experimental study with 30 computer science students. The findings revealed that the robotic arm effectively enhanced both basic and advanced Arduino programming skills, with students who used it performing better and expressing higher satisfaction than those who did not. The study also identified gaps in hardware control comprehension, leading to software development that could further aid in mastering programming concepts. The paper concludes with a discussion of the potential of the robotic arm as a valuable educational tool and its implications for future research and practical applications.
DOI: https://doi.org/10.54216/FPA.180102
Vol. 18 Issue. 1 PP. 24-34, (2025)
Throughout a Wireless Sensor Network (WSN), information collected from the environment is continuously transmitted from one node to the next until the main collector or server receives and processes it. As a network grows, data transfers within it also grow dramatically, and transmitting medical images adds further traffic. An interlayer transmission protocol for WSNs was developed in this study. The protocol builds the medical image from its pixels. A gray-level medical image with 512x512 pixels provided by Brain was used to conduct the study. The medical image size is reduced from 256 KB to 192 KB, providing a 25% advantage. The study found a peak signal-to-noise ratio (PSNR) of 51.1365 dB and a structural similarity index (SSIM) of 0.9976. The Advanced Encryption Standard (AES) encryption algorithm safeguards data during the transfer; by creating such a layer, transmissions became safer. Based on the results of the study, data transfer in the WSN has been reduced by 12.5% to 25% without changing the medical image.
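As a rough illustration of the AES-protected transfer step described above, the following minimal Python sketch encrypts a compressed image buffer before sending it over the network; it assumes the pycryptodome library, and compress() is a hypothetical stand-in for the paper's pixel-based interlayer protocol.

# Hedged sketch: AES-encrypt a compressed medical image buffer for WSN
# transfer. compress() is a placeholder, not the paper's actual protocol.
from Crypto.Cipher import AES
from Crypto.Random import get_random_bytes

def compress(image_bytes: bytes) -> bytes:
    # Placeholder: the paper reports a 256 KB -> 192 KB (25%) reduction.
    return image_bytes

def encrypt_for_transfer(image_bytes: bytes, key: bytes):
    payload = compress(image_bytes)
    cipher = AES.new(key, AES.MODE_EAX)      # authenticated encryption mode
    ciphertext, tag = cipher.encrypt_and_digest(payload)
    return cipher.nonce, tag, ciphertext     # transmit all three parts

key = get_random_bytes(16)                   # AES-128 session key
with open("brain.img", "rb") as f:           # hypothetical image file
    nonce, tag, ct = encrypt_for_transfer(f.read(), key)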
DOI: https://doi.org/10.54216/FPA.180103
Vol. 18 Issue. 1 PP. 35-48, (2025)
Recently, complex networks have become popular since they can transfer huge amounts of multimedia, text, ideas, and other information, encouraging many participant connections. Social media is one of the networks that make the most connections. Predicting the formation or dissolution of links between nodes presents a problem for social network analysis researchers. Since social networks are dynamic, this task is interesting, as it may also forecast lost network links with less information. On the other hand, current link prediction methods use simply node similarity to find links. This study proposes a new technique that relies on both node attributes and similarity measures. Nodes are labeled by their centrality and similarity, and the network's edges serve as positive and negative samples. A well-defined dataset for link prediction comprises the features of the nodes at the edges, labeled either positive or negative, and is passed to multiple machine learning classifiers on several real-world networks. The experiments conducted during the research show that Gradient Boosting gave the highest accuracy, 99%, compared with other methods.
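A minimal sketch of the general recipe the abstract outlines, assuming networkx and scikit-learn; the paper's exact feature set is not specified, so degree centrality and the Jaccard coefficient stand in here.

# Hedged sketch: label node pairs with centrality/similarity features,
# use edges as positive samples and non-edges as negatives, then train
# Gradient Boosting.
import random
import networkx as nx
from sklearn.ensemble import GradientBoostingClassifier

G = nx.karate_club_graph()
deg = nx.degree_centrality(G)

def pair_features(u, v):
    jac = next(nx.jaccard_coefficient(G, [(u, v)]))[2]   # similarity
    return [deg[u], deg[v], jac]

pos = list(G.edges())                                    # positive samples
neg = random.sample(list(nx.non_edges(G)), len(pos))     # negative samples

X = [pair_features(u, v) for u, v in pos + neg]
y = [1] * len(pos) + [0] * len(neg)
clf = GradientBoostingClassifier().fit(X, y)
print(clf.predict([pair_features(0, 33)]))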
DOI: https://doi.org/10.54216/FPA.180104
Vol. 18 Issue. 1 PP. 41-55, (2025)
A system for monitoring and analyzing athletes' jumps using Electromyography (EMG) signals, based on Virtual Instruments (LabVIEW), is presented in this paper. The system was prototyped using the virtual instrument workbench (LabVIEW) to display the jumping pattern. The jump analysis hardware (JA-H/W) comprises sensory boards, ultrasonic sensors, and wireless communication systems. To measure minimum foot clearance (MFC) and orientation, two software systems were used: Jump Analysis Software Ultrasonic (JAS-UltSnc) and Jump Analysis Software Inertial Measurement Unit (JAS-IntMeUnt). Combining JAS-UltSnc with JAS-IntMeUnt provided a complete solution with error correction. LabVIEW is used to display the jump patterns generated by the system and to analyze the athlete's jump patterns.
DOI: https://doi.org/10.54216/FPA.180105
Vol. 18 Issue. 1 PP. 56-65, (2025)
The information collected from the environment in a WSN is continuously sent from one node to another until it reaches the main collector or server, where processing is done. The transferred data volume grows as the network grows, and medical images also contribute to network traffic. To alleviate this challenge, this research developed an interlayer transmission protocol for WSNs. This protocol uses the construction of medical images with pixel-based data. In the analysis, a gray-scale medical image 512x512 in size, provided by Brain, is utilized. The image was compressed by the protocol from 256 KB to 192 KB, a reduction of 25%. The PSNR was measured at 51.1365 dB and the structural similarity index measure (SSIM) at 0.9976; therefore, the quality of the medical image remains unchanged. The protocol uses the AES encryption method for strong data protection to improve security during transmission. Results show that this protocol reduces data transmission in WSNs by 12.5% to 25% without affecting the integrity of the medical image, which is indicative of the protocol's efficiency in enhancing network performance while ensuring data safety.
DOI: https://doi.org/10.54216/FPA.180106
Vol. 18 Issue. 1 PP. 66-75, (2025)
Human civilization encompasses all that humans have created, both materially and morally, within a specific time and place. Thus, building highway extensions represents a significant addition to the material aspects of civilization. Highways are a crucial component of human development, affecting societies in social, economic, environmental, urban, and cultural ways. Connecting Erbil with Koya via a highway is expected to affect the populations of both cities and their surrounding areas. This paper examines the role of highways in societal development, with a particular focus on Koya. We have demonstrated the importance of highway design through mathematical models using modern speed parameters, fuzzy logic, and control methods. Additionally, we proposed a method for managing highway speeds through radar and remote sensing technologies. The paper highlights the inevitable societal progress resulting from the Koya-Erbil highway connection.
DOI: https://doi.org/10.54216/FPA.180107
Vol. 18 Issue. 1 PP. 76-89, (2025)
Improving the Extended Kalman Filter's (EKF) State of Charge (SOC) prediction for EV battery packs is the primary goal of this paper. Optimized battery management procedures rely on SOC estimates that are both accurate and reliable. The EKF is a popular tool for estimating nonlinear states, but how well it works depends heavily on the choice of the noise covariance matrices Q and R. Calibrating these matrices through experimental testing and other conventional approaches is extremely costly and time-consuming. To tackle this, the paper delves into the integration of four state-of-the-art metaheuristic optimization methods: GA, PSO, SFO, and HHO. These techniques optimize the Q and R matrices by minimizing the mean square error (MSE) between the real and predicted SOC. In terms of precision, convergence speed, and robustness, SFO-EKF comes out on top in both static and dynamic comparisons. The numerical results show that SFO-EKF obtains the lowest MSE and RMSE, greatly improving the reliability of SOC estimates. This study advances electric vehicle batteries by providing a realistic scheme for combining optimization methods with the EKF to offer highly effective and exact SOC estimates. Compared with TR-EKF, GA-EKF, PSO-EKF, and HHO-EKF, the SFO-EKF approach shows the best accuracy, with an improvement of over 94%, a result of the proposed model's exceptional efficiency in SOC estimation.
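As a hedged illustration of the tuning loop the abstract describes, the sketch below minimizes the SOC prediction MSE over Q and R; scipy's differential evolution stands in for the paper's GA/PSO/SFO/HHO metaheuristics, and run_ekf() is a hypothetical placeholder for a full EKF pass over battery data.

# Hedged sketch: search for noise covariances (q, r) that minimize the
# MSE between true and EKF-estimated SOC.
import numpy as np
from scipy.optimize import differential_evolution

soc_true = np.linspace(1.0, 0.2, 500)        # example discharge profile

def run_ekf(q, r, soc_true):
    # Placeholder for an EKF pass with diag(Q)=q, R=r over the pack data.
    return soc_true + np.random.default_rng(0).normal(0, r, soc_true.size)

def mse(params):
    q, r = params
    return float(np.mean((run_ekf(q, r, soc_true) - soc_true) ** 2))

result = differential_evolution(mse, bounds=[(1e-8, 1e-2), (1e-6, 1e-1)])
print("best Q, R:", result.x, "MSE:", result.fun)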
DOI: https://doi.org/10.54216/FPA.180108
Vol. 18 Issue. 1 PP. 90-103, (2025)
This research illustrates how dynamic task balancing and data sharing may improve distributed data processing. The approach addresses the difficulties parallel processing systems face with huge datasets by minimizing resource utilization and time complexity while maximizing throughput. After splitting the data, we adjust the workload on the fly to ensure that all processing units receive equal work, and a final optimization phase tunes job distribution to maximize system efficiency. We test the solution for latency, speed, scalability, resource utilization, fault tolerance, and synchronization overhead. Results reveal that the new strategy outperforms existing ones in every regard: it features the lowest latency, the highest throughput, and the greatest growth potential. The approach handles faults well, divides data effectively, and synchronizes everything at low cost. These properties make it ideal for real-time data processing and fast-growing applications. Future study will concentrate on flexible splitting strategies, fault tolerance mechanisms, and predictive machine learning models for analytics. These extensions will further improve real-time data handling.
DOI: https://doi.org/10.54216/FPA.180109
Vol. 18 Issue. 1 PP. 104-115, (2025)
The world is witnessing a boom in the digital age. Digital shops have effectively landed in our homes: almost any required product can now be purchased online via websites or mobile apps without having to step out. Because of online shopping, many customers rely on reviews from other customers before making a purchase. Customer reviews are gaining more and more importance, as they play a vital role in the sale and purchase of a product. They also provide firsthand feedback coming directly from the customers themselves, which can benefit sellers in improving future sales, and analyzing the reviews can reveal probable causes for the failure or success of a product. Hence, the current paper presents a sentiment analysis of reviews to better understand the feelings expressed by customers. Widely used mobile phones were chosen as the product and Amazon as the digital seller for the current study. The work began with data preprocessing; following preprocessing, Bag-of-Words (BoW) and n-gram word embeddings were used to represent the cleaned reviews as vectors, from which the features were derived. Finally, the performance of supervised machine learning classifiers such as Decision Tree, Naive Bayes, Random Forest, and SVM was empirically evaluated through accuracy, recall, F1-score, and precision. The results of the empirical evaluation revealed that the Random Forest classifier shows the best performance, with 97.48% accuracy.
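The pipeline the abstract outlines can be sketched in a few lines of scikit-learn; the toy reviews below are illustrative stand-ins for the Amazon mobile-phone dataset, and the hyperparameters are assumptions.

# Hedged sketch: cleaned reviews -> Bag-of-Words with n-grams ->
# Random Forest classifier.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline

reviews = ["great phone, battery lasts long", "screen cracked in a week",
           "camera quality is amazing", "worst purchase, very slow"]
labels = [1, 0, 1, 0]                        # 1 = positive, 0 = negative

model = make_pipeline(
    CountVectorizer(ngram_range=(1, 2)),     # BoW with unigrams + bigrams
    RandomForestClassifier(n_estimators=200, random_state=42),
)
model.fit(reviews, labels)
print(model.predict(["battery is amazing"]))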
DOI: https://doi.org/10.54216/FPA.180110
Vol. 18 Issue. 1 PP. 116-129, (2025)
Biometrics has reached an important place in the field of authentication for both financial transactions and document verification. Signatures can be broadly classified into online and offline types, depending on how they are acquired. Captured through devices like tablets and digital pens, online signatures contain rich features concerning position, velocity, and acceleration; hence, they offer better resistance to forgery compared to offline, more traditionally captured signatures. This review summarizes current research in online signature verification systems, covering the methodologies and techniques deployed for feature extraction, data pre-processing, and classification. The main stages reviewed within the verification process start with data acquisition, including the use of several publicly available databases such as DEEPSIGN, SVC2004, and MCYT-100. Wavelet transforms and Fourier analysis are discussed among the methods employed for feature extraction, showing good results in capturing signature dynamics. This review follows the SLR approach for analysing and synthesizing relevant studies published between 2017 and 2024 and uses PRISMA guidelines for the selection of studies, making the results methodologically rigorous and unbiased. The paper identifies commonly used algorithms, including CNN, RNN, and DTW, and examines popular signature databases by outlining their characteristics and relevance to system performance. The insights from this review point toward the future of online signature verification systems, emphasizing deep learning-based techniques along with their realistic challenges.
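Of the algorithms the review identifies, DTW is compact enough to sketch directly; the toy sequences below stand in for sampled pen dynamics (position, velocity, or pressure) from an enrolled and a questioned signature.

# Hedged sketch: classic dynamic time warping (DTW) between two 1-D
# signature dynamics sequences; small distances suggest a genuine match.
import numpy as np

def dtw_distance(a, b):
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

reference = np.sin(np.linspace(0, 3, 60))    # enrolled signature dynamics
query = np.sin(np.linspace(0.1, 3.1, 55))    # questioned signature
print("DTW distance:", dtw_distance(reference, query))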
DOI: https://doi.org/10.54216/FPA.180111
Vol. 18 Issue. 1 PP. 130-144, (2025)
This paper presents an optimized framework for detecting SMS spam using advanced machine learning algorithms and natural language processing (NLP) techniques. Two datasets, the Filtering Mobile Phone Spam Dataset and the SMS Spam Collection Dataset, were utilized to evaluate the performance of various classifiers, including Multinomial Naive Bayes, K-Nearest Neighbors, Support Vector Classifier, Decision Trees, and AdaBoost. The methodology encompasses comprehensive data preprocessing steps, such as tokenization, stopword removal, and text normalization, followed by feature extraction using TF-IDF and Bag-of-Words models. The classifiers’ performances were evaluated using accuracy, precision, recall, and F1-score, alongside cross-validation techniques. Results indicate that Support Vector Classifier and AdaBoost consistently achieved superior accuracy in distinguishing between spam and ham messages. The study underscores the importance of data preprocessing and model optimization in enhancing spam detection accuracy, offering valuable insights for improving SMS filtering systems in cybersecurity applications.
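A minimal scikit-learn sketch of the spam pipeline just described; the toy messages stand in for the SMS Spam Collection Dataset, and the kernel choice is an assumption.

# Hedged sketch: normalization + TF-IDF features + Support Vector
# Classifier for spam/ham detection.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline

messages = ["WIN a FREE prize now!!!", "are we still meeting at 5?",
            "URGENT: claim your reward", "ok see you tomorrow"]
labels = ["spam", "ham", "spam", "ham"]

model = make_pipeline(
    TfidfVectorizer(lowercase=True, stop_words="english"),
    SVC(kernel="linear"),
)
model.fit(messages, labels)
print(model.predict(["free reward waiting, claim now"]))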
DOI: https://doi.org/10.54216/FPA.180112
Vol. 18 Issue. 1 PP. 145-182, (2025)
In this article, we use machine learning approaches to conduct a thorough investigation into the prediction of cardiac illnesses and strokes. The Stroke Prediction Dataset and the Heart Failure Prediction Dataset are the two datasets that we use. Our objective is to maximize accuracy and minimize Mean Absolute Error (MAE) and Mean Squared Error (MSE) in order to enhance predictive performance. We use a variety of machine learning methods, such as Random Forests, Naive Bayes, Decision Trees, and k-Nearest Neighbors (KNN), as well as Artificial Neural Networks (ANN) and Multi-Layer Perceptrons (MLP) as deep learning models. We use oversampling approaches to rectify the class imbalance, and Grid Search with k-Fold Cross Validation for hyperparameter tuning. Our goal is to deliver valuable insights into early detection and preventive measures through comprehensive testing and assessment for the prevention of strokes and heart diseases.
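The evaluation loop described above, oversampling followed by grid search under k-fold cross-validation, can be sketched as follows; imbalanced-learn's RandomOverSampler and the Random Forest grid are assumptions standing in for the paper's unspecified choices.

# Hedged sketch: rectify class imbalance, then tune hyperparameters
# with Grid Search and 5-fold cross-validation.
from imblearn.over_sampling import RandomOverSampler
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=600, weights=[0.9], random_state=0)
X_bal, y_bal = RandomOverSampler(random_state=0).fit_resample(X, y)

grid = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [100, 300], "max_depth": [None, 10]},
    cv=5, scoring="accuracy",
)
grid.fit(X_bal, y_bal)
print(grid.best_params_, grid.best_score_)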
DOI: https://doi.org/10.54216/FPA.180113
Vol. 18 Issue. 1 PP. 182-203, (2025)
Excessive use of fertilizers harms the environment and disrupts plant habitats, while also raising costs for farmers. Proper timing and amounts of nutrients are crucial for plant health and environmental balance. The greenness of rice leaves indicates their chlorophyll and nutrient levels. Agronomy studies show rice plants need 10 nutrients, including primary ones like Nitrogen (N), Phosphorus (P), and Potassium (K), and secondary ones like Iron (Fe), Manganese (Mn), Copper (Cu), Zinc (Zn), Boron (B), Molybdenum (Mo), and Chlorine (Cl). Leaf nitrogen concentration (LNC) is highly correlated with chlorophyll content. Several tools exist to measure it, such as the leaf color chart (LCC) and SPAD chlorophyll or nitrogen meters. Since meters are not affordable or available to all farmers, the LCC offers farmers the ability to estimate plant nitrogen needs in real time for efficient fertilizer use and increased rice yield. A notable innovation in agriculture is the Leaf Color Chart (LCC), developed by Japanese experts. It measures chlorophyll levels in rice plants and aids in nitrogen management without harming the plant. Today, the LCC is used globally to improve production efficiency and optimize nitrogen application rates. The two remaining primary nutrients, potassium and phosphorus, can also be measured by experimentally expanding the available LCC database, as has been done in the two models developed in this research paper.
DOI: https://doi.org/10.54216/FPA.180114
Vol. 18 Issue. 1 PP. 204-225, (2025)
News agencies connect global events to local communities and play a pivotal role in influencing public opinion. Thus, the necessity arises to recognize the sentiment of news articles. The purpose of this paper is to analyze the sentiment of English and Arabic news articles in terms of positivity, negativity, or neutrality; analyzing Arabic and English news articles can be challenging from a morphological perspective. In this paper, we apply four machine learning methods, Logistic Regression (LR), k-Nearest Neighbors (KNN), Random Forests (RF), and Naive Bayes (NB), with TF-IDF as the feature extraction. The study was validated using two datasets (BBC and SANAD Arabic news) and two learning schemes (hold-out and 10-fold cross-validation). The evaluation was based on accuracy (ACC), precision (PREC), recall (REC), F1-score (F1), and the Matthews Correlation Coefficient (MCC), with the ML models showing outstanding performance under the 10-fold strategy. The experiments provided in the paper indicate that the proposed ML models achieve the best results.
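A minimal sketch of this evaluation setup, assuming scikit-learn; the toy headlines stand in for the BBC and SANAD corpora, and Logistic Regression represents the four classifiers compared in the paper.

# Hedged sketch: TF-IDF + Logistic Regression scored by MCC under
# 10-fold cross-validation.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.metrics import make_scorer, matthews_corrcoef
from sklearn.pipeline import make_pipeline

texts = ["markets rally on strong earnings", "storm damages coastal towns",
         "team celebrates historic win", "economy shrinks amid crisis"] * 10
labels = ["positive", "negative", "positive", "negative"] * 10

model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
scores = cross_val_score(model, texts, labels, cv=10,
                         scoring=make_scorer(matthews_corrcoef))
print("mean MCC over 10 folds:", scores.mean())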
DOI: https://doi.org/10.54216/FPA.180115
Vol. 18 Issue. 1 PP. 226-239, (2025)
Biometric verification has become critical to privacy across areas such as finance and secure access services. The present study addresses the utilization of deep learning techniques, namely convolutional neural networks (CNNs), to boost both the precision and dependability of authentication. The researchers explore the effectiveness of these algorithms on collections containing genuine and forged banknote photos, taking into account data collection obstacles such as changes in operator conditions and ambient conditions. The proposed model shows a remarkable classification accuracy of 100%, with precision, recall, and F1-scores of 1.00 across the two categories, demonstrating that it is excellent at discerning between legitimate and counterfeit materials. Further, the researchers investigate the effects of different design variables on efficiency and precision. This investigation provides important insights into merging deep learning with biometric data, laying the basis for future developments in secure authorization.
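The paper's exact CNN architecture is not given in the abstract; the Keras sketch below is an illustrative binary classifier for genuine-vs-forged banknote images, with the input size and layer sizes as assumptions.

# Hedged sketch: a small CNN for authenticating banknote photos.
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(128, 128, 3)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),   # genuine vs. forged
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.summary()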
DOI: https://doi.org/10.54216/FPA.180116
Vol. 18 Issue. 1 PP. 240-248, (2025)
This paper focuses on the training, evaluation, and development of named entity recognition (NER) models designed for Islamic hadiths in Arabic. Utilizing the Hadith Noor dataset, the study uses the BIO (Begin, Inside, Outside) tagging scheme to classify words or tokens in NER tasks and segments the text into individual tokens. Examining the lengths of the Islamic hadiths revealed a right-skewed distribution, indicating that shorter texts are more common: texts of fewer than 100 words were most prevalent, followed by texts between 100 and 200 words, while texts longer than 200 words were rare. The dataset identifies eight types of entities, such as common names among narrators and locations. The study trained three models, AraBERT, LSTM, and the hybrid AraBERT-LSTM, on Arabic text processing; the hybrid model showed the best performance and efficiency, with an accuracy of 0.981, outperforming the other models and confirming its worth and reliability in NER tasks for natural language in Arabic, especially Islamic hadiths. This opens the way for further investigations in future natural language processing research.
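The BIO scheme itself is easy to illustrate; in the sketch below the entity types (NARRATOR, LOC) are hypothetical stand-ins for the eight types in the Hadith Noor dataset.

# Hedged sketch: BIO (Begin, Inside, Outside) tags on a tokenized text
# and a helper that recovers the tagged entity spans.
tokens = ["narrated", "Abu", "Hurairah", "in", "Medina"]
tags = ["O", "B-NARRATOR", "I-NARRATOR", "O", "B-LOC"]

def extract_entities(tokens, tags):
    entities, current, label = [], [], None
    for tok, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if current:
                entities.append((label, " ".join(current)))
            current, label = [tok], tag[2:]
        elif tag.startswith("I-") and current:
            current.append(tok)
        else:
            if current:
                entities.append((label, " ".join(current)))
            current, label = [], None
    if current:
        entities.append((label, " ".join(current)))
    return entities

print(extract_entities(tokens, tags))
# [('NARRATOR', 'Abu Hurairah'), ('LOC', 'Medina')]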
DOI: https://doi.org/10.54216/FPA.180117
Vol. 18 Issue. 1 PP. 249-260, (2025)
Ship Ad Hoc Networks (SANETs) are an integral part of modern maritime communication and shipping, characterized by dynamic topology and heavy traffic. Accurate node localization in SANETs is of great importance to ensure effective communication, security, and operational decisions. Traditional clustering algorithms, such as Fuzzy C-Means (FCM) and Possibilistic Fuzzy C-Means (PFCM), struggle with the dynamic and collaborative nature of SANETs, being sensitive to noise, outliers, and rapidly changing node distributions. In this paper, a new clustering algorithm, the Dynamic Weighted Gradient-Based Possibilistic using Fuzzy C-Means (DWGB-PFCM), is specially designed to address the limitations of traditional methods in dynamic SANETs. DWGB-PFCM combines dynamically weighted distances, flexible membership and typicality functions, and an enhanced objective function to improve clustering robustness, adaptability, and efficiency. Detailed data processing from the National Data Buoy Center (NDBC) combines spatial and environmental parameters such as wind speed, atmospheric pressure, and wave characteristics to simulate real-world ocean challenges. Experimental results show that DWGB-PFCM outperforms traditional methods, improving on PFCM by 15.8% in separation measures, decreasing separation entropy by 22.2%, and decreasing RMSE by 32.1%. In addition, DWGB-PFCM achieves a 15.0% improvement in computational efficiency over FCM. This research lays the foundation for further innovations in clustering algorithms designed for dynamic environments.
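For context, the standard FCM updates that DWGB-PFCM builds on are sketched below; the paper's dynamic weights, possibilistic terms, and gradient-based refinements are not reproduced here.

# Hedged sketch: baseline Fuzzy C-Means (fuzzifier m, typically 2).
import numpy as np

def fcm(X, c=3, m=2.0, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)            # fuzzy memberships
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
        U = 1.0 / d ** (2 / (m - 1))
        U /= U.sum(axis=1, keepdims=True)
    return centers, U

# Toy stand-in for NDBC-style features (e.g., wind speed, pressure):
X = np.vstack([np.random.default_rng(1).normal(loc, 0.3, (50, 2))
               for loc in (0, 3, 6)])
centers, U = fcm(X)
print(centers)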
DOI: https://doi.org/10.54216/FPA.180118
Vol. 18 Issue. 1 PP. 261-268, (2025)
This paper presents an overview of the most popular outlier detection methods that can be used in the retail sector to solve such important problems as fraud, inventory issues, and atypical customer behavior. The techniques discussed include conventional statistical methods such as the Z-score, Mahalanobis Distance, and Elliptic Envelope, and advanced machine learning methods such as the Local Outlier Factor (LOF), Isolation Forest, and DBSCAN. Each method is discussed in detail, and its advantages and disadvantages are evaluated in relation to different retail scenarios. The primary contribution of this study is a new approach that uses Artificial Neural Networks (ANN) to tune the contamination parameter in the Elliptic Envelope model, making anomaly detection more accurate and efficient. Furthermore, the study applies min-max scaling to normalize the features, which helps reduce the effect of outliers and thus improves model performance. The results show that the integration of statistical and machine learning methods is very useful for the real-time detection of anomalies, particularly in the ever-changing environment of the retail industry. This research presents practical insights and new methodological approaches that may be useful for researchers and practitioners who develop outlier detection systems, with the potential to enhance data fusion quality, workflow, and decision-making in the context of retailing.
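A minimal sketch of the scaling-plus-Elliptic-Envelope combination discussed above; the contamination value is a fixed assumption here, whereas the study tunes it with an ANN.

# Hedged sketch: min-max scale retail features, then flag outliers with
# an Elliptic Envelope (-1 = outlier, 1 = inlier).
import numpy as np
from sklearn.preprocessing import MinMaxScaler
from sklearn.covariance import EllipticEnvelope

rng = np.random.default_rng(0)
sales = rng.normal(100, 15, (200, 2))           # normal transactions
sales[:5] += 200                                # injected anomalies

X = MinMaxScaler().fit_transform(sales)
flags = EllipticEnvelope(contamination=0.03).fit(X).predict(X)
print("flagged outliers:", int((flags == -1).sum()))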
DOI: https://doi.org/10.54216/FPA.180119
Vol. 18 Issue. 1 PP. 269-287, (2025)
Deepfake is a technology for making fabricated videos, manipulated using an artificial intelligence (AI) approach known as deep learning (DL). Deepfake videos typically show activities carried out by real people but with another individual's face substituted using a DL model; the technology allows humans to manipulate videos and images, and the outcomes are challenging to distinguish with the naked eye. The term is a combination of the words deep learning and fake, and it mostly denotes material created by deep neural networks (DNNs), a subclass of machine learning (ML). Deepfakes involve numerous modifications of face models and integrate innovative technologies from computer vision and DL. Deepfake detection can be cast as a binary classification procedure over original and deepfake classes: it works by extracting features from the videos or images that are then used to distinguish between original and deepfake content. Therefore, this study proposes Leveraging Pufferfish Optimization and Deep Belief Network for Enhanced Deepfake Video Detection (LPODBN-EDVD). In the presented LPODBN-EDVD technique, the data preprocessing stages include splitting the video into frames, face detection, and face cropping. For feature extraction, the EfficientNet model is exploited, and a deep belief network (DBN) classifier is executed for deepfake video detection. Finally, the pufferfish optimization algorithm (POA) is employed for the optimal hyperparameter selection of the DBN classifier. A wide range of simulations exhibits the promising results of the LPODBN-EDVD method, and the experimental analysis points out its enhanced performance compared to recent approaches.
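The preprocessing chain the abstract names (frames, face detection, face cropping, EfficientNet features) might look roughly like the sketch below; OpenCV's Haar cascade stands in for the paper's unspecified face detector, and the DBN classifier and POA tuning are omitted.

# Hedged sketch: sample video frames, crop detected faces, and extract
# EfficientNet feature vectors for a downstream deepfake classifier.
import cv2
import numpy as np
from tensorflow.keras.applications import EfficientNetB0

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
backbone = EfficientNetB0(include_top=False, pooling="avg")

def face_features(video_path, step=30):
    cap, feats, i = cv2.VideoCapture(video_path), [], 0
    ok, frame = cap.read()
    while ok:
        if i % step == 0:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            for (x, y, w, h) in detector.detectMultiScale(gray, 1.3, 5):
                face = cv2.resize(frame[y:y+h, x:x+w], (224, 224))
                feats.append(backbone.predict(
                    face[None].astype("float32"), verbose=0)[0])
        ok, frame = cap.read()
        i += 1
    cap.release()
    return np.array(feats)       # one feature vector per detected face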
DOI: https://doi.org/10.54216/FPA.180120
Vol. 18 Issue. 1 PP. 288-303, (2025)
Compiler optimization is crucial in improving program performance by increasing execution speed, reducing memory usage, and minimizing energy consumption. Nevertheless, modern compilers such as LLVM, with their numerous optimization passes, present a significant challenge in identifying the most effective sequence for optimizing a program. This study addresses the complex problem of determining optimal compiler optimization sequences within the LLVM framework, which encompasses 64 optimization passes, resulting in an immense search space of 2^64. Identifying the ideal sequence for even simple code can be an arduous task, as the interactions between passes are intricate and unpredictable. The primary objective of this research is to utilize machine learning techniques to predict effective optimization sequences that outperform the default -O2 and -O3 optimization flags. The methodology involves generating 2,000 sequences per program and picking the one that achieves the shortest execution time. Three machine learning models, K-Nearest Neighbor (KNN), Decision Tree (DT), and Feedforward Neural Network (FFNN), were employed to predict optimization sequences based on features extracted from programs during execution. The study used benchmarks from the Polybench, Shootout, and Stanford suites, each with varying problem sizes, to validate the proposed technique. The results demonstrate that the KNN model produced optimization sequences with superior performance compared to DT and FFNN; on average, KNN achieved execution times 2.5 times faster than those obtained with the -O3 optimization flag. This research contributes to the field by automating the process of selecting optimal compiler sequences, which significantly reduces execution time and eliminates the need for manual tuning. It highlights the potential of machine learning in compiler optimization, offering a robust and scalable approach to improving program performance and setting the foundation for future advancements in the domain.
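The search-then-learn procedure described above can be sketched as follows; compile_and_time() is a hypothetical placeholder for invoking the LLVM tools and timing the resulting binary, and the pass names, program features, and labels are illustrative assumptions.

# Hedged sketch: sample random pass orderings, keep the fastest, and
# fit a KNN model mapping program features to good sequences.
import random
from sklearn.neighbors import KNeighborsClassifier

PASSES = ["-mem2reg", "-loop-unroll", "-inline", "-gvn", "-licm", "-sccp"]

def compile_and_time(source, sequence):
    # Placeholder: apply `sequence`, compile, run, return seconds.
    return random.random()

def best_sequence(source, trials=2000):
    best, best_t = None, float("inf")
    for _ in range(trials):
        seq = random.sample(PASSES, k=len(PASSES))   # random ordering
        t = compile_and_time(source, seq)
        if t < best_t:
            best, best_t = seq, t
    return best

features = [[12, 3, 1], [40, 9, 4], [7, 1, 0]]   # e.g., loop/branch counts
labels = [0, 1, 0]                               # index of best sequence found
knn = KNeighborsClassifier(n_neighbors=1).fit(features, labels)
print(knn.predict([[10, 2, 1]]))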
DOI: https://doi.org/10.54216/FPA.180121
Vol. 18 Issue. 1 PP. 304-320, (2025)