The Internet of Things (IoT) has recently become unstable due to the flood of data it must handle. It is believed that IoT and cloud computing have reached their maximum thresholds, and loading them with further data will only degrade their performance. Edge computing has therefore been introduced to relieve the processing burden on IoT. To meet the security demands of edge computing, we combine blockchain with edge computing for a better solution. Accordingly, this paper proposes a novel blockchain model based on artificial neural networks and trust estimation, called the behavioral monitoring trust estimation model. Performance metrics such as accuracy, precision, recall, and F-measure are calculated under normal conditions and under attacks such as false data injection, booting, and node capture. The proposed behavioral monitoring trust classification model is compared with existing classifiers, namely Naive Bayes, K-nearest neighbor, autoencoder, random forest, and support vector machine, and is found to perform better. Additional evaluation parameters, including execution time, encryption time, storage cost, computational overhead, energy efficiency, and packet drop probability, are also calculated for the proposed model and compared with existing blockchain techniques: Bitcoin, Ethereum, Hyperledger, the direct and indirect trust model, and the mutual trust chain based blockchain model. The proposed model achieved an accuracy of 95%, a precision of 90%, a recall of 94%, and an F-measure of 94%, indicating superior performance.
DOI: https://doi.org/10.54216/FPA.170204
Vol. 17 Issue. 2 PP. 38-50, (2025)
Cloud computing has established itself as a powerful mechanism for delivering on-demand, scalable, and instant access to computing resources through the service model. Effective load balancing and resource management are therefore highly important so that the cloud system operates with optimized performance and resource utilization. This paper presents a new strategy for load balancing and virtual machine (VM) management in cloud computing based on the Crocodile Optimization Algorithm (COA). Inspired by crocodile hunting behavior, the COA-based strategy balances loads and manages VMs by matching the workload assigned to each VM to its processing power and distributing tasks accordingly. Resources are best used by dynamically distributing tasks to VMs so that response time is minimized and overall efficiency in cloud computing is enhanced. In addition, COA-based load balancing incorporates VM management techniques such as migration and scaling to adapt to changing workload conditions. This allows resource allocation to be adjusted dynamically according to current demand, ensuring optimal utilization of computational resources with high performance. The proposed approach was evaluated through simulations in CloudSim, one of the most widely adopted cloud computing simulation tools. The COA effectively divides work among the VMs, which leads to better response times for user requests and improved cloud resource utilization. This research thus represents a distinctive attempt at load balancing and VM management in cloud computing based on the Crocodile Optimization Algorithm. The approach improves cloud computing efficiency by balancing load distribution, maximizing resource utilization, and lowering response time.
DOI: https://doi.org/10.54216/FPA.170205
Vol. 17 Issue. 2 PP. 51-61, (2025)
The pharmaceutical industry encounters numerous challenges in managing medications, ensuring their authenticity, and safeguarding sensitive information within the supply chain. Maintaining the integrity of drug manufacturing processes, transaction records, and patient data against unauthorized access or tampering is crucial; any breach in security could undermine trust throughout the entire supply chain. To mitigate these concerns, a multi-layered approach is employed. Initially, data encryption using QR codes with Attribute-Based Encryption provides a foundation for securing information. This is followed by an innovative strategy that combines the Red Panda Optimization (RPO) algorithm and the Group Teaching Optimization Algorithm (GTOA) to optimize encryption key selection. Finally, Multi-Party Computation (MPC) protocols along with Shamir's Secret Sharing enhance overall security measures. These procedures ensure that only authorized individuals have access to critical information essential for identifying counterfeit products, and maintain confidentiality through secure MPC verification without compromising sensitive details.
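Since the final layer of the scheme relies on Shamir's Secret Sharing, a minimal sketch of that threshold scheme is given below; the prime modulus, the share count, and the (3, 5) threshold are illustrative assumptions, not parameters from the paper.

```python
# Minimal Shamir's Secret Sharing sketch: split a secret into n shares over a
# prime field so that any k shares reconstruct it via Lagrange interpolation.
import random

PRIME = 2**127 - 1  # illustrative Mersenne prime, large enough for small integer secrets

def make_shares(secret, k, n, prime=PRIME):
    """Split `secret` into n shares, any k of which reconstruct it."""
    coeffs = [secret] + [random.randrange(prime) for _ in range(k - 1)]
    def poly(x):
        acc = 0
        for c in reversed(coeffs):          # Horner evaluation of the random polynomial
            acc = (acc * x + c) % prime
        return acc
    return [(x, poly(x)) for x in range(1, n + 1)]

def reconstruct(shares, prime=PRIME):
    """Lagrange interpolation at x = 0 recovers the constant term (the secret)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % prime
                den = (den * (xi - xj)) % prime
        secret = (secret + yi * num * pow(den, -1, prime)) % prime
    return secret

shares = make_shares(secret=123456789, k=3, n=5)
print(reconstruct(shares[:3]))  # any 3 of the 5 shares recover 123456789
```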
DOI: https://doi.org/10.54216/FPA.170206
Vol. 17 Issue. 2 PP. 62-78, (2025)
Distributed Denial of Service (DDoS) attacks pose a significant threat to cloud computing environments, necessitating advanced detection methods. This review examines the application of Machine Learning (ML) and Deep Learning (DL) techniques for DDoS detection in cloud settings, focusing on research from 2019 to 2024. It evaluates the effectiveness of various ML and DL approaches, including traditional algorithms, ensemble methods, and advanced neural network architectures, while critically analyzing commonly used datasets for their relevance and limitations in cloud-specific scenarios. Despite improvements in detection accuracy and efficiency, challenges such as outdated datasets, scalability issues, and the need for real-time adaptive learning persist. Future research should focus on developing cloud-specific datasets, advanced feature engineering, explainable AI, and cross-layer detection approaches, with potential exploration of emerging technologies like quantum machine learning.
DOI: https://doi.org/10.54216/FPA.170207
Vol. 17 Issue. 2 PP. 79-97, (2025)
Recently, Federated Learning (FL) has rapidly gained widespread interest owing to its emphasis on user data privacy. As a privacy-preserving distributed learning algorithm, FL enables multiple parties to construct machine learning (ML) models without exposing sensitive information. The distributed computation of FL may lead to drawn-out learning and constrained communication processes, which necessitates optimizing client-server communication costs. Two hyperparameters that have a considerable effect on FL performance are the number of local training passes and the ratio of chosen clients. Owing to differing training preferences across applications, it is challenging for FL practitioners to choose these hyperparameters manually. Even though FL has resolved the problem of collaboration without compromising privacy, it incurs transmission overhead because of repeated model updates during training. Various researchers have introduced transmission-efficient FL techniques to address these issues, but sufficient solutions are still lacking in cases where parties are in charge of data features. Therefore, this study develops an Optimization of Federated Learning Communication Costs through the Implementation of the Cheetah Optimization Algorithm (OFLCC-COA) technique. The OFLCC-COA technique is mainly applied to effectively optimize the communication process in FL so as to minimize data transmission cost while guaranteeing enhanced model accuracy. The OFLCC-COA technique enhances robustness in unsteady network environments by transmitting score values instead of large weights. Besides, the OFLCC-COA technique improves network communication efficiency by transforming the form of the data that clients send to servers. The performance of the OFLCC-COA model is analyzed using different performance measures. The simulation outcomes indicate that the OFLCC-COA model obtains superior performance over other methods in terms of distinct metrics.
DOI: https://doi.org/10.54216/FPA.170208
Vol. 17 Issue. 2 PP. 98-110, (2025)
This paper investigates the use of Multi-Intelligent Reflecting Surfaces (Multi-IRS) to deliver faster, more resilient urban connectivity. It examines the application of IRS technology for electromagnetic wave control, fine-tuned to boost signal transmission and coverage across densely populated urban areas. It outlines flexible strategies for integrating Multi-IRS systems with both existing and future urban infrastructure with a view to seamless connectivity. Multi-IRS integrated with foundational smart city technologies, such as IoT, 5G networks, and AI, represents a leap toward the unparalleled data flow and connectivity essential for the modern urban ecosystem. Detailed case studies demonstrate how Multi-IRS systems can break traditional connectivity barriers, offering higher bandwidth, lower latency, and increased communication effectiveness. This development marks a serious step toward the smart city concept, in which data flows without barriers between the multifarious urban systems and services. Lastly, the paper concludes with a forward-looking view of urban connectivity underpinned by continuous innovation and research into Multi-IRS applications within the smart city landscape. The study points out that dynamic IRS implementation plays an indispensable role in the pathway to upcoming developments in smart city connectivity solutions, making a case for sustained collaborative efforts in research, policy formulation, and technological innovation to realize the full potential of IRS technology in taming the connectivity challenges of contemporary urban settings. A performance comparison between sequential beam search and the proposed model across varying Rician factors shows the proposed model's superior channel gain progression, from -57 dB at a Rician factor of 5 dB to -48 dB at 30 dB, outperforming the sequential method in environments with strong direct signals.
DOI: https://doi.org/10.54216/FPA.170209
Vol. 17 Issue. 2 PP. 111-122, (2025)
Machine- and deep-learning techniques have been used in numerous real-world applications. One of the best-known deep-learning methodologies is the deep convolutional neural network, and AlexNet is a well-known deep convolutional neural network architecture that contributes significantly to solving classification problems in different deep-learning-based applications. Therefore, it is necessary to continuously improve the model to enhance its performance. This survey study formally defines the AlexNet architecture, presents information on current improvement solutions, and reviews applications based on AlexNet improvements. The work also presents a short survey of the fusion of AlexNet with different machine-learning techniques in recent biomedical research. Across about 11 research papers covering both improvement and fusion techniques for AlexNet, the survey results show that fusion was superior, reaching 99.72%, versus 99.7% for the improved variants. The conclusion and discussion section compares the improved and fusion techniques of AlexNet and proposes future work on AlexNet development.
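As a point of reference for the surveyed fusion approaches, the sketch below loads a pretrained AlexNet from torchvision (recent versions expose the weights enum used here) and truncates its classifier to obtain 4096-dimensional features that a classical classifier could then be fused with; the dummy input batch is an assumption for illustration.

```python
# Use AlexNet as a fixed feature extractor, a common basis for fusing the network
# with classical machine-learning classifiers (e.g. an SVM on the 4096-d features).
import torch
import torchvision.models as models

alexnet = models.alexnet(weights=models.AlexNet_Weights.DEFAULT)
alexnet.classifier = alexnet.classifier[:-1]   # drop the final 1000-way ImageNet layer
alexnet.eval()

def extract_features(batch):
    """batch: (N, 3, 224, 224) tensor of preprocessed images -> (N, 4096) features."""
    with torch.no_grad():
        return alexnet(batch).numpy()

dummy = torch.randn(4, 3, 224, 224)            # stand-in for preprocessed biomedical images
print(extract_features(dummy).shape)           # (4, 4096)
# A downstream classifier (SVM, random forest, ...) would then be fit on these features.
```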
DOI: https://doi.org/10.54216/FPA.170210
Vol. 17 Issue. 2 PP. 123-146, (2025)
Remote sensing (RS) object detection is extensively applied in civilian and military fields. An important role of remote sensing is to identify objects such as planes, ships, harbours, and airports, from which position information and object classes can be obtained. It is of considerable importance to use RS images for observing densely arranged and directional objects, such as ships and cars parked in harbours and parking areas. The object detection (OD) process involves object localization and classification. Due to their wide coverage and long shooting distance, remote sensing images (RSIs) contain hundreds of small objects and dense scenes. Deep learning (DL), in particular the convolutional neural network (CNN), has revolutionized OD in different fields. CNNs are devised to automatically learn hierarchical representations of data, which makes them fit for feature extraction. Hence, this study proposes a new white shark optimizer with DL-based object detection and classification on RSI (WSODL-ODCRSI) method. The purpose of the WSODL-ODCRSI model is to detect and classify the objects present in RSIs. To accomplish this, the WSODL-ODCRSI model uses a modified single-shot multi-box detector (MSSD) for the OD process. The next stage is object classification, which takes place using the Elman Neural Network (ENN) algorithm. The WSO algorithm is exploited as a parameter-tuning model to improve the object classification results of the ENN approach. The simulation study of the WSODL-ODCRSI algorithm was conducted on a benchmark dataset, and the outcomes underline the promising performance of the WSODL-ODCRSI model in the object classification process.
DOI: https://doi.org/10.54216/FPA.170211
Vol. 17 Issue. 2 PP. 147-160, (2025)
Four kinds of smaller molecules, the ribonucleotide bases adenine (A), cytosine (C), guanine (G), and uracil (U), combine to form the linear molecule known as ribonucleic acid (RNA). Aligning multiple sequences is a fundamental task in bioinformatics. This paper studies the correlation of different objective functions applied to RNA multiple sequence alignment (MSA) fusion generated by a Harmony Search based method. Experiments are performed on the BRAliBase dataset, which contains different numbers of test groups. The correlation between the alignment score and the quality obtained is compared across COFFEE, sum-of-pairs (SP), weighted sum-of-pairs (WSP), NorMD, and MstatX. The results indicate that the COFFEE and SP objective functions achieved correlation coefficients (R²) of 0.96 and 0.92, respectively, when compared to the reference alignments, demonstrating their effectiveness in producing high-quality alignments. In addition, sum-of-pairs takes less time than the COFFEE objective function for the same number of iterations on the same RNA benchmark.
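For readers unfamiliar with the sum-of-pairs (SP) objective compared here, the following is a minimal sketch of how an aligned set of RNA sequences can be scored column by column; the match, mismatch, and gap values are illustrative, not the ones used in the paper.

```python
# Sum-of-pairs score for a multiple sequence alignment: every pair of residues in
# every column contributes a match, mismatch, or gap score.
def sum_of_pairs(alignment, match=1, mismatch=-1, gap=-2):
    """alignment: list of equal-length aligned sequences (strings containing '-')."""
    score = 0
    for col in zip(*alignment):                  # iterate over alignment columns
        for i in range(len(col)):
            for j in range(i + 1, len(col)):
                a, b = col[i], col[j]
                if a == '-' and b == '-':
                    continue                     # gap-gap pairs are typically ignored
                elif a == '-' or b == '-':
                    score += gap
                elif a == b:
                    score += match
                else:
                    score += mismatch
    return score

print(sum_of_pairs(["AUG-C", "AU-GC", "AUGGC"]))   # score of a toy 3-sequence alignment
```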
DOI: https://doi.org/10.54216/FPA.170201
Vol. 17 Issue. 2 PP. 1-10, (2025)
Glaucoma is a common disease affecting the human retina, primarily caused by elevated intraocular pressure. Early intervention is crucial to prevent damage to the affected organs, which could lead to their dysfunction. This paper focuses on enhancing the diagnostic accuracy of a system that determines whether a patient is at risk of developing glaucoma. A novel convolutional neural network (CNN) is designed specifically for the detection of glaucoma in fundus images. The architecture is optimized for the unique characteristics of fundus imagery, enhancing detection accuracy. We also compiled a large and diverse dataset of fundus images, crucial for training and validating the CNN model; the dataset includes a significant number of images with detailed annotations, ensuring robust model training. In addition, we implemented sophisticated image preprocessing methods, including noise reduction and contrast enhancement, which significantly improve the quality of the input data for the CNN. The system operates in three stages. First, it preprocesses the image by cropping, enhancing, and resizing it to a consistent 256×256 pixels. Next, it employs advanced feature extraction to analyse key features of the optic disc and optic cup in retinal images. Finally, a softmax function classifies the images, identifying those with glaucoma and distinguishing them from normal eye samples. The model's performance was thoroughly evaluated using accuracy, sensitivity, specificity, and the area under the curve: sensitivity measures the test's ability to correctly identify positive cases, specificity assesses its accuracy in identifying negative cases, and the area under the curve indicates the overall effectiveness of the test across different thresholds. The results achieved by the proposed system were thoroughly analyzed, revealing a high accuracy rate in glaucoma classification, reaching 99%.
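A minimal sketch of the three-stage preprocessing described (crop, enhance, resize to 256×256) using OpenCV follows; the border-cropping heuristic and CLAHE settings are assumptions for illustration, not the paper's exact pipeline.

```python
# Crop the dark border, enhance contrast on the luminance channel, and resize the
# fundus image to the 256x256 input expected by the CNN.
import cv2
import numpy as np

def preprocess_fundus(path, size=256):
    img = cv2.imread(path)                                   # BGR fundus image
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    ys, xs = np.where(gray > 10)                             # crop away near-black border
    img = img[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    l = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8)).apply(l)   # contrast enhancement
    img = cv2.cvtColor(cv2.merge((l, a, b)), cv2.COLOR_LAB2BGR)
    return cv2.resize(img, (size, size)) / 255.0             # normalised 256x256 network input
```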
DOI: https://doi.org/10.54216/FPA.170202
Vol. 17 Issue. 2 PP. 11-23, (2025)
Brain tumors (BTs) are a difficult and dangerous medical condition, and accurate, early analysis of these tumors is crucial for suitable treatment. Explainability in clinical image diagnosis plays a vital role in the correct analysis and treatment of tumors, supporting medical staff in understanding how image analysis results produced by deep methods are reached. Artificial intelligence (AI), in particular deep neural networks (DNNs), has attained remarkable outcomes in clinical image analysis in many applications. However, the lack of explainability of deep neural approaches has been considered a major restriction to deploying them in medical practice. Explainable AI (XAI) is a vital module in this context, as it supports medical staff and patients in understanding the AI's decision-making, enhancing trust and transparency. This leads to better patient care and outcomes while ensuring that medical staff can make informed decisions based on AI-driven insights. Therefore, this study develops a novel Computer-Aided Brain Tumor Diagnosis using Coati Optimization Algorithm with an Explainable Artificial Intelligence (CABTD-COAXAI) approach. The purpose of the CABTD-COAXAI technique is to exploit XAI and hyperparameter-tuned deep learning (DL) approaches for automated BT analysis. To accomplish this, the CABTD-COAXAI technique follows a Gaussian filtering (GF) based noise removal process. Besides, the CABTD-COAXAI technique utilizes the EfficientNetB7 model for feature extraction. Additionally, hyperparameter tuning of the EfficientNetB7 model is performed using the COA. Furthermore, classification of the BT is performed using a convolutional autoencoder (CAE). Finally, the CABTD-COAXAI system integrates the XAI method LIME to effectively explain the black-box model for automated BT diagnosis. The simulation results of the CABTD-COAXAI technique were tested on a benchmark BT database. The extensive outcomes infer that the CABTD-COAXAI method reaches superior performance over other models in terms of different measures.
DOI: https://doi.org/10.54216/FPA.170203
Vol. 17 Issue. 2 PP. 24-37, (2025)
Advanced medical imaging has become crucial in the early identification of diseases because it reveals important structural features of the human body. However, such high-resolution images are usually difficult to obtain in practice due to limits in imaging and processing equipment, environmental conditions, and human factors. This work proposes Med-GAN, an enhanced super-resolution generative adversarial network tuned for medical image enhancement. The Med-GAN generator reconstructs a high-resolution image from a low-resolution one via advanced feature extraction methods. Deconvolution algorithms with multi-scale fusion recover high-resolution representations from multiple parallel streams of lower resolutions, producing better results than traditional bilinear interpolation. Evaluated on two publicly available COVID-19 CT datasets and one private medical image dataset, the proposed Med-GAN significantly outperforms existing methods in performance comparisons. In particular, PSNR rises from 24.103 dB for the "BRaTS (FLAIR)" dataset in the initial approach to 25.496 dB in the proposed method, while SSIM increases from 0.782 to 0.812. These results suggest that the proposed Med-GAN is a realistic means of improving the quality of medical images and therefore contributes to better diagnosis of diseases.
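The two quality metrics reported, PSNR and SSIM, can be reproduced as in the sketch below (with scikit-image supplying SSIM); the random arrays stand in for a ground-truth slice and its super-resolved reconstruction.

```python
# Compute PSNR and SSIM between a reference image and a reconstruction, with
# pixel values assumed to lie in [0, 1].
import numpy as np
from skimage.metrics import structural_similarity as ssim

def psnr(reference, reconstructed, data_range=1.0):
    mse = np.mean((reference - reconstructed) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(data_range ** 2 / mse)

ref = np.random.rand(256, 256)                                  # stand-in ground-truth slice
rec = np.clip(ref + 0.02 * np.random.randn(256, 256), 0, 1)     # stand-in reconstruction
print("PSNR (dB):", psnr(ref, rec))
print("SSIM:", ssim(ref, rec, data_range=1.0))
```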
DOI: https://doi.org/10.54216/FPA.170214
Vol. 17 Issue. 2 PP. 186-196, (2025)
The current Internet era is characterized by the widespread circulation of ideas and viewpoints among users across many social media platforms, such as microblogging sites, personal blogs, and reviews. Detecting fake reviews has become a widespread problem on digital platforms, posing a major challenge for both consumers and businesses. Due to the ever-increasing number of online reviews, it is no longer possible to manually identify fraudulent reviews. Artificial intelligence (AI) is essential in addressing the problem of identifying fake reviews. Feature extraction is a crucial stage in detecting fake reviews, and successful feature engineering techniques can significantly improve the accuracy of opinion extraction. The paper compares five feature extraction methods for multiple opinion classification using Twitter data on airline and Borderland game reviews. FastText with an XGBoost classifier outperformed all other techniques, achieving 94.10% accuracy on the airline dataset and 100% accuracy on the Borderland game reviews.
DOI: https://doi.org/10.54216/FPA.170212
Vol. 17 Issue. 2 PP. 161-172, (2025)
Cardiovascular disease (CVD) mainly affects the blood vessels and heart, and includes conditions such as coronary artery disease, stroke, and heart failure. Early recognition is vital for timely intervention and improved patient outcomes. CVD is a major issue in society nowadays. Among non-invasive methods, the electrocardiogram (ECG) is the most effective approach for identifying cardiac defects. However, ECG analysis requires an experienced, highly knowledgeable person and is a time-consuming task. Developing a new technique to identify the disease at an early stage increases the quality and efficacy of medical care. State-of-the-art technologies such as machine learning (ML) and artificial intelligence (AI) are increasingly being used to improve the efficacy and accuracy of CVD recognition, allowing faster and more exact analysis and ultimately contributing to better management and prevention tactics for cardiovascular health. This research paper designs an Early Cardiovascular Disease Prediction using an Improved Beluga Whale Optimizer with Ensemble Learning (ECVDP-IBWOEL) approach via ECG signal analytics. The main intention of the ECVDP-IBWOEL system is to forecast the presence of CVD at an early stage using ECG signals. In the ECVDP-IBWOEL method, a data preprocessing phase is first implemented to convert the input data into a well-suited format. The ECVDP-IBWOEL technique then follows an ensemble learning (EL) process for CVD detection comprising three models, namely long short-term memory (LSTM), deep belief networks (DBNs), and stacked autoencoders (SAEs). Finally, an IBWO algorithm-based hyperparameter tuning process takes place, which boosts the classification results of the ensemble models. To certify the enhanced results of the ECVDP-IBWOEL system, an extensive experimental study was made. The experimental outcomes state that the ECVDP-IBWOEL system shows promising performance in the CVD prediction process.
DOI: https://doi.org/10.54216/FPA.170213
Vol. 17 Issue. 2 PP. 173-185, (2025)
Green buildings are those that use sustainable construction methods to maintain or improve the local quality of life. Decisions affecting a project's quality, safety, profitability, and timetable are made using artificial intelligence (AI) in green construction by analyzing data gathered from monitoring the construction site and using predictive analytics. For instance, more accurate weather predictions can lead to higher productivity, less waste, lower costs, and fewer greenhouse gas emissions. Green building construction is a significant source of carbon dioxide released through the breakdown of carbonates. Researchers have concluded that integrating industrial wastes is crucial in making green concrete because of benefits such as reducing the requirement for cement. When designing with concrete, its compressive strength must be considered. Due to their high predictive power, AI algorithms may be used to determine the compressive strength of concrete mixtures. Existing AI models may be evaluated for their modeling process and accuracy to inform the creation of new models that more accurately represent the comprehensive effect of setting parameters on model performance and boost accuracy. Potential sources of conflict in this anthropocentric future include climate change and the availability of renewable energy sources. Scientists think there is a connection between the increased emission of greenhouse gases like carbon dioxide (CO2) from the combustion of fossil fuels and the acceleration of climate change and global warming. Research has demonstrated that the building sector is a significant source of atmospheric carbon dioxide (CO2); construction, building activities, and subpar energy sources have all significantly increased atmospheric CO2. The proposed research set out to measure how well AI in Green Building Construction (AI-GBC) might reduce carbon emissions and utility bills. The approach uses SVM and GA models to reduce energy use and carbon dioxide emissions. Several statistical metrics, such as Mean Absolute Error (MAE), Root Mean Square Error (RMSE), and Root Mean Squared Log Error (RMSLE), are used to evaluate the AI-GBC's precision. Both machine learning (ML) models yielded positive results, with prediction accuracies above 95%. In predicting CO2, GA models were close to the mark, with an R2 of 0.95; the performance analysis reached 96% and the k-fold cross-validation analysis 97%. Cross-validation is used to ensure that the findings of the extended modeling technique are accurate and to prevent overfitting.
DOI: https://doi.org/10.54216/FPA.170215
Vol. 17 Issue. 2 PP. 197-210, (2025)
This paper proposes an improved solution for EEG-based brain language signal classification using machine learning and optimization algorithms. The aim is to improve brain signal classification for language processing tasks by achieving higher accuracy and faster processing. Feature extraction is performed using a modified Discrete Wavelet Transform (DWT), which increases the capability of capturing signal characteristics appropriately by decomposing EEG signals into significant frequency components. The Grey Wolf Optimization (GWO) algorithm is then applied to select the optimal features, achieving more accurate results by selecting impactful features with maximum relevance while minimizing redundancy. This optimization process improves the overall performance of the classification model. For classification, a hybrid Support Vector Machine (SVM) and Neural Network (NN) model is presented, combining the SVM classifier's capacity to handle high-dimensional feature spaces with the neural network's capacity for non-linear pattern learning. The model was trained and tested on an EEG dataset and achieved a classification accuracy of 97%, indicating the robustness and efficacy of our method. The results indicate that this improved classifier can be used in brain-computer interface systems and neurological evaluations. The combination of machine learning and optimization techniques establishes this paradigm as a highly effective basis for further research in EEG signal processing for brain language recognition.
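A minimal sketch of the feature-extraction and classification stages follows: EEG segments are decomposed with a discrete wavelet transform (via PyWavelets) and simple per-band statistics are fed to an SVM. The wavelet family, decomposition level, and synthetic data are illustrative assumptions, and the GWO feature-selection step is omitted.

```python
# DWT-based feature extraction from EEG segments followed by SVM classification.
import numpy as np
import pywt
from sklearn.svm import SVC

def dwt_features(signal, wavelet="db4", level=4):
    coeffs = pywt.wavedec(signal, wavelet, level=level)          # approximation + detail bands
    feats = []
    for band in coeffs:
        feats += [band.mean(), band.std(), np.sum(band ** 2)]    # simple per-band statistics
    return np.array(feats)

rng = np.random.default_rng(0)
X = np.array([dwt_features(rng.standard_normal(512)) for _ in range(200)])  # synthetic segments
y = rng.integers(0, 2, size=200)                                 # synthetic two-class labels
clf = SVC(kernel="rbf").fit(X[:150], y[:150])
print("held-out accuracy:", clf.score(X[150:], y[150:]))
```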
DOI: https://doi.org/10.54216/FPA.170216
Vol. 17 Issue. 2 PP. 211-218, (2025)
Epilepsy is a neural condition that is rather prevalent and affects a sizeable portion of the population all over the world. Throughout its history, the illness has held significant status in the field of biomedicine due to the dangers it poses to people's health. Electroencephalogram (EEG) recordings are a method that may be utilized to evaluate epilepsy, which is defined by the occurrence of seizures that recur without any apparent cause. Electroencephalography (EEG) is a method used to assess the electrical activity of the brain, and the examination of EEG data is an essential component of epilepsy research, since it allows for the early detection of epileptic episodes. On the other hand, building models that are independent of individual characteristics is a significant challenge, and extensive efforts have been directed to the creation of classifiers that are tailored to specific patients. In this work, the cross-patient viewpoint is the primary focus of investigation; nevertheless, the heterogeneity of EEG patterns among people presents a challenge. An examination of the similarities and differences of the pattern recognition algorithms applied to the diagnosis of epileptic episodes based on EEG data was undertaken. The Support Vector Machine (SVM) and K-Nearest Neighbor (KNN) were the approaches under evaluation. According to the findings of our analysis, the two approaches exhibit comparable levels of performance; however, KNN attained slightly higher accuracy in some situations.
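The comparison the study performs can be sketched with scikit-learn as below; the synthetic feature matrix stands in for the per-segment features extracted from EEG recordings.

```python
# Cross-validated comparison of SVM and KNN on EEG-derived feature vectors.
import numpy as np
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
X = rng.standard_normal((300, 20))            # 300 EEG segments, 20 features each (synthetic)
y = rng.integers(0, 2, size=300)              # seizure / non-seizure labels (synthetic)

for name, clf in [("SVM", SVC(kernel="rbf")),
                  ("KNN", KNeighborsClassifier(n_neighbors=5))]:
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f} (+/- {scores.std():.3f})")
```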
DOI: https://doi.org/10.54216/FPA.170217
Vol. 17 Issue. 2 PP. 219-231, (2025)
In the realm of education, understanding the impact of different teaching styles on student engagement and satisfaction is essential. Recent advancements in sentiment analysis provide new avenues for evaluating student feedback, particularly through informal channels such as social media. While formal student evaluations offer structured feedback on teaching styles, they may not fully capture the nuanced opinions and sentiments expressed by students in informal settings, such as social media. This research aims to address the gap by integrating sentiment analysis of social media data to evaluate teaching effectiveness across various styles and comparing it with formal evaluation results. This study employs sentiment analysis using the VADER (Valence Aware Dictionary and sEntiment Reasoner) tool to analyze student posts on social media platforms. The analysis includes the extraction of sentiment distributions, identification of common keywords, and tracking of sentiment trends over time. Additionally, formal student evaluations (Likert scale) are collected to offer a direct comparison. The teaching styles analyzed include lecture-based teaching, project-based learning, flipped classrooms, online learning, hybrid learning, and traditional exam-based learning. The findings demonstrate that student sentiment varies significantly across teaching styles. Flipped classrooms and project-based learning received the highest positive sentiment scores, while traditional exam-based teaching showed the most negative sentiment. Social media feedback tended to align with formal evaluations for certain teaching styles, such as the flipped classroom and hybrid learning but showed divergence in others, like online learning, which received higher sentiment in social media feedback. Trends over time reveal evolving sentiments, with fluctuating satisfaction as the academic semester progressed. The integration of social media sentiment analysis provides a more dynamic and real-time understanding of student experiences, offering deeper insights into teaching style effectiveness.
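A minimal sketch of the VADER scoring step follows, aggregating compound sentiment per teaching style; the example posts and style labels are invented for illustration, not drawn from the study's data.

```python
# Score social-media posts with VADER and aggregate the compound sentiment per teaching style.
from collections import defaultdict
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

posts = [
    ("flipped classroom", "Loved preparing at home and discussing in class!"),
    ("online learning", "The platform kept crashing, really frustrating."),
    ("project-based learning", "Our group project was challenging but rewarding."),
]

analyzer = SentimentIntensityAnalyzer()
by_style = defaultdict(list)
for style, text in posts:
    by_style[style].append(analyzer.polarity_scores(text)["compound"])  # -1 (negative) .. +1 (positive)

for style, scores in by_style.items():
    print(style, sum(scores) / len(scores))   # mean compound score per teaching style
```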
DOI: https://doi.org/10.54216/FPA.170218
Vol. 17 Issue. 2 PP. 232-248, (2025)
Solving the video compression problem requires a multi-faceted approach, balancing quality, efficiency, and computational demands. By leveraging advancements in technology and adapting to the evolving needs of video applications, it is possible to develop compression methods that meet the challenges of the present and future digital landscape. To address these objectives, machine learning and AI approaches can be utilized to predict and remove redundancies more effectively, optimizing compression algorithms dynamically based on content. Still, state-of-the-art neural network-based video compression models need large and diverse datasets to generalize well across different types of video content. Wavelets provide both time (spatial) and frequency localization, making them highly effective for video compression. This dual localization allows wavelet transforms to handle both rapid changes in video content and slow-moving scenes efficiently, leading to better compression ratios. Yet some wavelet coefficients may be more critical for maintaining visual quality than others, and inaccurate quantization can lead to noticeable degradation. For the first time, the suggested model combines the Quantum Wavelet Transform (QWT) and Neural Networks (NN) for video compression. This fusion model aims to achieve higher compression ratios, maintain video quality, and reduce computational complexity by utilizing QWT's efficient data representation and the NN's powerful pattern recognition and predictive capabilities. Quantum bits (qubits) can encode large amounts of information in their quantum states, enabling more efficient data representation, which is especially useful for encoding large video files. Furthermore, quantum entanglement allows for correlated data representation across qubits, which can be exploited to capture intricate details and redundancies in video data more effectively than classical methods. The experimental results reveal that QWT achieves a compression ratio of almost twice that of the traditional WT for the same video, maintaining superior visual quality due to more efficient redundancy elimination.
DOI: https://doi.org/10.54216/FPA.170219
Vol. 17 Issue. 2 PP. 249-263, (2025)
This review provides an in-depth exploration of machine learning (ML) applications in healthcare, focusing specifically on predictive models for COVID-19 transmission among vaccinated individuals. It underscores the pivotal role of ML in disease forecasting and prognosis, showcasing its potential to enhance healthcare outcomes in pandemic contexts. Key challenges of COVID-19, such as the high transmission rate of asymptomatic carriers and the effectiveness of containment strategies, are analyzed to highlight areas where ML can offer significant advantages. The study aims to develop an advanced forecasting model for COVID-19 transmission using diverse supervised ML regression techniques, including linear regression, LASSO, support vector machine, and exponential smoothing, applied to an extensive COVID-19 patient dataset. The insights generated from this review support efforts to combat COVID-19 and improve public health strategies, demonstrating ML's vital contribution to pandemic management and healthcare resilience.
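A minimal sketch of the named regression baselines (linear regression, LASSO, SVR) fitted on lagged counts, plus exponential smoothing from statsmodels, is given below; the synthetic series stands in for a COVID-19 case dataset and the lag length is an assumed choice.

```python
# Fit simple supervised regressors on lagged case counts and an exponential-smoothing
# model on the raw series, then produce short-horizon forecasts.
import numpy as np
from sklearn.linear_model import LinearRegression, Lasso
from sklearn.svm import SVR
from statsmodels.tsa.holtwinters import ExponentialSmoothing

rng = np.random.default_rng(1)
cases = np.cumsum(rng.poisson(50, size=120)).astype(float)   # synthetic cumulative case counts

lag = 7                                                      # assumed one-week lag window
X = np.array([cases[i - lag:i] for i in range(lag, len(cases))])
y = cases[lag:]

for name, model in [("linear", LinearRegression()),
                    ("lasso", Lasso(alpha=1.0)),
                    ("svr", SVR(kernel="rbf", C=100.0))]:
    model.fit(X[:-14], y[:-14])                              # hold out the last two weeks
    print(name, "next-step forecast:", model.predict(X[-1:]).round(0))

es = ExponentialSmoothing(cases[:-14], trend="add").fit()
print("exp. smoothing forecast:", es.forecast(14)[:3].round(0))
```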
DOI: https://doi.org/10.54216/FPA.170220
Vol. 17 Issue. 2 PP. 264-278, (2025)
Retinopathy is a common, progressive retinal disease that many diabetic patients develop as their condition advances; it causes blood vessels in the retina to swell and leak blood and fluid. This condition requires timely diagnosis by medical experts to prevent visual loss among patients. To make screening of large numbers of people feasible, diverse deep-learning schemes have recently been developed for diabetic retinopathy detection. In this paper, a retinopathy image detection system based on diverse deep learning schemes (VGG-19, DenseNet-121, and EfficientNet-B6) is presented. The implemented deep learning schemes with multi-label classification are trained and tested using the Asia Pacific Tele-Ophthalmology Society (APTOS-2019) dataset and the two combined datasets Indian Diabetic Retinopathy Image Dataset (IDRiD) and Messidor-2. The classification outcomes of the system are reported as sensitivity, precision, F1-score, and accuracy measurements, and the system performance is compared with recently published related systems. The attained outcomes indicate that the implemented EfficientNet-B6 network outperforms peer schemes and related systems by realizing supreme accuracy using balanced multi-class retinopathy datasets.
DOI: https://doi.org/10.54216/FPA.170221
Vol. 17 Issue. 2 PP. 279-293, (2025)
Electric vehicles (EVs) have gained significant traction due to their environmental benefits and potential to revolutionize the transportation sector. Integrating EVs into the Vehicle-to-Grid (V2G) network presents an innovative solution for optimizing energy transactions and grid stability. However, managing energy transactions during peak hours poses a challenge. This research proposes a novel approach that combines the Deep Q-Network (DQN) algorithm with blockchain technology to enhance energy transactions in the V2G network. In this study, a V2G network model is introduced consisting of EVs, charging stations, a grid control center, and a blockchain infrastructure. The blockchain ensures transparency, security, and decentralized energy transactions. The DQN algorithm learns optimal action policies based on current states and rewards, contributing to grid stability. To incentivize EV owners for peak-hour energy contributions, a blockchain-enabled rewarding mechanism is implemented. The proposed methodology is rigorously evaluated through simulations conducted in a custom environment that emulates V2G network dynamics. Performance metrics such as load shifting efficiency, peak demand reduction, and energy efficiency are employed for comprehensive assessment. The proposed method showcases superior performance compared to traditional load shifting and demand response strategies. Furthermore, comparative analyses are conducted against different state-of-the-art methods, demonstrating the effectiveness of our approach. The results underscore the potential of integrating DQN-based energy management with blockchain technology to achieve grid stability and incentivize sustainable energy behaviors. This research contributes to the advancement of smart grid technologies, paving the way for a more sustainable and efficient energy ecosystem.
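A minimal sketch of the DQN component is shown below: a small Q-network over a simplified V2G state (price, grid load, battery level) choosing among charge, idle, and discharge, with a single temporal-difference update. All dimensions, rewards, and hyperparameters are illustrative, and the blockchain transaction layer, replay buffer, and target network are not modelled here.

```python
# One DQN-style temporal-difference update for a simplified V2G charging agent.
import torch
import torch.nn as nn

STATE_DIM, N_ACTIONS, GAMMA = 3, 3, 0.99      # state: [price, grid load, battery]; actions: charge/idle/discharge

q_net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(), nn.Linear(64, N_ACTIONS))
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)

def td_update(state, action, reward, next_state):
    """Update the Q-network on a single (s, a, r, s') transition."""
    q_pred = q_net(state)[action]
    with torch.no_grad():
        q_target = reward + GAMMA * q_net(next_state).max()
    loss = nn.functional.mse_loss(q_pred, q_target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

s = torch.tensor([0.7, 0.9, 0.4])             # illustrative peak-hour state
s_next = torch.tensor([0.6, 0.8, 0.3])
print(td_update(s, action=2, reward=1.0, next_state=s_next))   # reward discharging at peak demand
```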
DOI: https://doi.org/10.54216/FPA.170222
Vol. 17 Issue. 2 PP. 294-314, (2025)
Recently, it has been observed that the tourism industry is undergoing a fundamental change due to the rapid development of virtual tour technologies, especially artificial intelligence. This paper therefore aims to provide an overview of this new development from the early 2000s to the current environment in global tourism. We present, in a historical context, the main developments and applications of virtual tours and AI through a systematic review of literature, industry reports and empirical data from different sectors of the tourism industry. Our findings suggest that the adoption of the technologies under review, enhanced by data fusion, has significantly reshaped the way tourism experiences are conceptualized, delivered, and consumed. Data fusion combines information from multiple sources, enabling richer insights and a more comprehensive understanding of traveller behaviours and preferences. While virtual tours have emerged as a powerful tool for destination marketing, cultural preservation, and accessibility, AI, combined with data fusion, has also transformed the landscape by enabling more personalized travel planning, responsive customer service, and data-driven decision-making. This integration allows tourism providers to create seamless and engaging experiences tailored to individual needs, making tourism more accessible and efficient. In each case, these innovations have raised important questions about authenticity, sustainability, and the future of traditional tourism business models. We will present a critical comparison of virtual and physical tourism experiences in different regions and market segments, providing insights into the interplay of technological innovation, economic imperatives, and socio-cultural dynamics in the digital age. We conclude by reflecting on the implications for post-pandemic recovery, responsible tourism and global cultural exchange through virtual tours and AI. The findings of the study add to the growing body of knowledge on the digitalization of tourism and provide useful insights for practitioners, policy makers and researchers interested in the rapidly changing landscape of this industry.
DOI: https://doi.org/10.54216/FPA.170223
Vol. 17 Issue. 2 PP. 315-328, (2025)
This study aimed to measure the impact of using "Box-to-Box" technology in improving the physical and technical abilities of football players under 19 years old at Najma Sinai Sports Club, North Sinai. The research highlights the global appeal of football and offers insight into how advancements in training can help improve player performance, noting that some teams tend to cling to old-school tactics that undermine progress. The study evaluated a 12-week "Box-to-Box" training program using an experimental design with pre- and post-intervention measurements for 23 players. The results showed that agility, endurance, speed, and muscle strength test scores significantly improved, while passing accuracy and dribbling efficiency were also enhanced during composite skill performance. These findings reaffirm that "Box-to-Box" training is effective for developing key competencies and improving performance in general. The study suggests including this new technology in traditional training routines, asserting that it has become essential for player assessment and improvement. It also proposes a wider perspective on the long-term use of "Box-to-Box" technology in different populations and sports, as well as new functional training for specific football positions.
DOI: https://doi.org/10.54216/FPA.170224
Vol. 17 Issue. 2 PP. 329-341, (2025)
Evaporation plays a significant role in managing water resources and is an important indicator in risk and crisis management, particularly in operating reservoirs and dams. Precise predictions of evaporation rates are crucial to effective water resource management, and various modelling methods, including AI and autoregression, have been employed to create accurate models. This makes it all the more important to use innovative technology to continuously monitor this phenomenon with accurate scientific results, allowing decision-makers to be aware of and prepare for potential drought risks and crises. In this study, we therefore propose a mechanism that includes analyzing and exploring the evaporation data used in this study and cleaning out impurities among the actual and missing values to obtain accurate data that serve as the actual inputs to the ARIMA model adopted in this study. This mechanism contributes to the performance and efficiency of the model, which uses time-series data to accurately predict future evaporation trends in the water of the Mosul Dam. Our objective is to explain the diversity of climate policies and actions using a data-based approach to analyzing integrated parameters over the years. This is complemented in depth by an examination of how different methods of extracting data behaviour are used to study the model's forecasts. This collaborative study aims to enhance future studies by using more comprehensive datasets with more learning models. The researchers believe in the power of sharing knowledge and are thus committed to sharing results on causes other than global warming that contribute to climate change.
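A minimal sketch of the described workflow, cleaning a monthly evaporation series (interpolating missing readings) and fitting an ARIMA model with statsmodels to forecast future values, follows; the synthetic series and the (p, d, q) order are illustrative choices.

```python
# Clean a monthly evaporation series and fit an ARIMA model to forecast one year ahead.
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
idx = pd.date_range("2015-01", periods=96, freq="MS")
evap = pd.Series(150 + 60 * np.sin(np.arange(96) * 2 * np.pi / 12)
                 + rng.normal(0, 10, 96), index=idx)         # synthetic monthly evaporation (mm)
evap.iloc[[10, 40, 41]] = np.nan                              # simulate missing readings

evap_clean = evap.interpolate()                               # simple gap filling
model = ARIMA(evap_clean, order=(2, 1, 2)).fit()              # illustrative (p, d, q) order
print(model.forecast(steps=12).round(1))                      # one-year-ahead forecast
```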
DOI: https://doi.org/10.54216/FPA.170225
Vol. 17 Issue. 2 PP. 342-355, (2025)
Artificial intelligence techniques, including deep learning, play a major role in all fields, in line with the advancement of technology. Handwritten digit recognition is an important problem in computer vision, with wide applications such as optical character recognition. In the current research, we describe a deep learning technique that uses a Convolutional Neural Network (CNN) framework with improved normalization algorithms and adjusted hyperparameters for better efficiency and generalization. In contrast to conventional techniques, our methodology concentrates on minimizing overfitting through the use of an adjustable dropout rate and innovative pooling procedures, resulting in greater accuracy in handwritten digit classification. Following considerable experimentation, the recommended approach obtains an outstanding classification accuracy of 99.03%, proving its ability to recognize intricate structures in handwritten digits. The approach's usefulness is reinforced by a complete review of measures including recall, accuracy, F1 score, and confusion matrix assessment, which show improvements across all digit categories. The results of the investigation highlight the innovative conceptual layout and optimization methodologies used, representing a substantial leap in the realm of digit recognition.
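A minimal sketch of the kind of CNN described, convolution and batch-normalization blocks with pooling and dropout before the softmax output, built with Keras on MNIST, is given below; the layer sizes, dropout rate, and epoch count are illustrative, not the tuned hyperparameters reported in the paper.

```python
# Small CNN for handwritten digit classification with batch normalization and dropout.
import tensorflow as tf
from tensorflow.keras import layers, models

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None] / 255.0
x_test = x_test[..., None] / 255.0

model = models.Sequential([
    layers.Conv2D(32, 3, activation="relu", input_shape=(28, 28, 1)),
    layers.BatchNormalization(),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.BatchNormalization(),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dropout(0.4),                      # adjustable dropout rate against overfitting
    layers.Dense(10, activation="softmax"),   # softmax output over the ten digit classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(x_train, y_train, epochs=3, validation_split=0.1, verbose=2)
print("test accuracy:", model.evaluate(x_test, y_test, verbose=0)[1])
```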
DOI: https://doi.org/10.54216/FPA.170226
Vol. 17 Issue. 2 PP. 356-365, (2025)
This paper presents an inventive deep learning-based method for identifying financial fraud, revolutionizing e-commerce security in the process. The research offers a state-of-the-art setup that makes use of deep learning computations in the dynamic world of online exchanges, where the possibility of fraudulent activity is a constant danger. Since frauds are known to be erratic and inconsistent, they can be challenging to spot. Fraudsters exploit the latest developments in technology and manage to evade security measures, which results in millions of dollars being lost. One method of tracking fraudulent exchanges is to use data-mining techniques to investigate and detect unusual behaviours and interactions. This paper aims to benchmark several machine-learning techniques, such as k-nearest neighbour (KNN), random forest, and support vector machines (SVM), against deep learning techniques such as autoencoders, convolutional neural networks (CNN), restricted Boltzmann machines (RBM), and deep belief networks (DBN). The three evaluation metrics employed are the Area Under the ROC Curve (AUC), the Matthews Correlation Coefficient (MCC), and the cost of failure.
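The three evaluation axes mentioned can be computed as in the sketch below, with scikit-learn supplying AUC and MCC; the labels, scores, and per-error costs are invented for illustration.

```python
# ROC-AUC, Matthews correlation coefficient, and a simple monetary cost of failure
# for a toy fraud-detection prediction set.
import numpy as np
from sklearn.metrics import roc_auc_score, matthews_corrcoef

y_true = np.array([0, 0, 1, 0, 1, 1, 0, 1])                   # 1 = fraudulent transaction
y_score = np.array([0.1, 0.3, 0.8, 0.2, 0.4, 0.9, 0.05, 0.7])  # model probabilities
y_pred = (y_score >= 0.5).astype(int)

fn_cost, fp_cost = 500.0, 10.0                                 # assumed cost per missed fraud / false alarm
false_neg = np.sum((y_true == 1) & (y_pred == 0))
false_pos = np.sum((y_true == 0) & (y_pred == 1))

print("AUC:", roc_auc_score(y_true, y_score))
print("MCC:", matthews_corrcoef(y_true, y_pred))
print("cost of failure:", false_neg * fn_cost + false_pos * fp_cost)
```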
DOI: https://doi.org/10.54216/FPA.170227
Vol. 17 Issue. 2 PP. 366-376, (2025)
Steganalysis can be defined as the science that addresses the process of identifying and detecting hidden information or data within various types of digital media. Recently, deep learning (DL) approaches have been employed to build steganalysis systems. However, the problem with steganalysis systems adopting a DL approach is their low accuracy and their need for effective datasets for training. In this paper, we introduce a DL-based steganalysis system for the detection and classification of hidden content in images. Our system, called Steg-Analysis Convolutional Neural Network (SA-CNN), relies on a Convolutional Neural Network (CNN) and uses a High Pass Filter (HPF) and extra embedded data. We also propose a preprocessing-based data hiding method to increase the accuracy of SA-CNN in detecting hidden content, which also ensures the imperceptibility of the images used for training SA-CNN. In addition, we use another CNN, called Malicious-Benign Classification CNN (MBC-CNN), that we have developed to classify the extracted hidden content into malicious or benign classes. Compared with existing systems, SA-CNN shows better performance in terms of accuracy under hiding rates ranging from 0.1 to 1.0 bpp, reaching 90%.
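A minimal sketch of a high-pass filtering stage of the kind that typically precedes a steganalysis CNN is given below, using the well-known 5×5 KV kernel; whether SA-CNN uses this exact kernel is an assumption for illustration.

```python
# High-pass residual extraction: convolve a grayscale image with the 5x5 KV kernel
# so the network sees the noise component where embedding traces live.
import numpy as np
from scipy.signal import convolve2d

KV = (1 / 12.0) * np.array([[-1,  2,  -2,  2, -1],
                            [ 2, -6,   8, -6,  2],
                            [-2,  8, -12,  8, -2],
                            [ 2, -6,   8, -6,  2],
                            [-1,  2,  -2,  2, -1]])

def highpass_residual(image):
    """image: 2-D grayscale array; returns the noise residual fed to the CNN."""
    return convolve2d(image.astype(float), KV, mode="same", boundary="symm")

img = np.random.randint(0, 256, size=(64, 64))        # stand-in grayscale cover image
print(highpass_residual(img).std())                   # residual statistics the CNN learns from
```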
DOI: https://doi.org/10.54216/FPA.170228
Vol. 17 Issue. 2 PP. 377-393, (2025)
Spam e-mail has become a pervasive nuisance in today's digital world, posing significant challenges to efficient communication and information dissemination. Spam filtering involves dealing with huge amounts of data containing irrelevant and redundant features, which leads to high dimensionality. Nowadays, with the growth of internet use, finding a secure e-mail classification system for cloud computing is a very important topic. Additionally, determining the best algorithm for choosing a subset of features has a big impact on how well automatic email classification works, making it one of the major issues. Among the candidate algorithms is Differential Evolution (DE), which is computationally costly because of its slow convergence rate and evolutionary process. To address these issues, this study offers an intelligent scheme called Opposition Differential Evolution (ODE), which combines Opposition-Based Learning (OBL) and the DE algorithm for effective automated feature subset selection. Its effectiveness is assessed using a support vector machine (SVM) when evaluating the e-mail spam classification rate. Moreover, OBL is used to accelerate and increase the convergence rate of traditional DE. The feature subsets selected by ODE are then used to determine which features contribute most to the reliability of email spam classification. To assess the effectiveness of the proposed scheme, extensive experiments are conducted on the “spambase” and “spamassassin” benchmark email datasets, comprising a diverse collection of spam and non-spam emails. The results demonstrate that the Opposition Differential Evolution (ODE) algorithm yields superior performance compared to traditional machine learning and evolutionary techniques, displaying its robustness and efficiency in identifying spam emails accurately. The ODE algorithm effectively handles high-dimensional feature spaces, enhancing the model's discriminatory power while maintaining computational efficiency. The suggested ODE-SVM technique yields an accuracy of 96.79 percent, compared with a full-feature accuracy of 93.55 percent. Additionally, empirical results demonstrate that our scheme can efficiently select the features needed to improve the accuracy of email spam classification.
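The opposition-based learning step that ODE adds to differential evolution can be sketched as below: each candidate feature-selection vector is mirrored within its bounds and the better of the pair, judged by an SVM cross-validation score on assumed data, is retained.

```python
# Opposition-based learning (OBL) applied to a population of candidate feature masks,
# with an SVM cross-validation score as the fitness function.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=30, n_informative=8, random_state=0)

def fitness(position, threshold=0.5):
    mask = position > threshold                       # continuous vector -> feature subset
    if not mask.any():
        return 0.0
    return cross_val_score(SVC(), X[:, mask], y, cv=3).mean()

rng = np.random.default_rng(0)
pop = rng.random((10, X.shape[1]))                    # candidate positions in [0, 1]
opposite = 1.0 - pop                                  # OBL mirror: x_opp = lower + upper - x

# Keep whichever of each (candidate, opposite) pair classifies better.
pop = np.array([p if fitness(p) >= fitness(o) else o for p, o in zip(pop, opposite)])
print("best subset accuracy:", round(max(fitness(p) for p in pop), 3))
```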
DOI: https://doi.org/10.54216/FPA.170229
Vol. 17 Issue. 2 PP. 394-408, (2025)
Leukemia is a form of blood cancer that targets white blood cells (WBCs) and stands as a major cause of mortality worldwide. Leukemia originates in the bone marrow at the center of human bones, where blood cells, including leukocytes (WBCs), are generated; when cells become blast cells, a fatal illness can develop. For that reason, analyzing leukemia in its initial stages aids treatment significantly and can save lives. At present, leukemia analysis is done by visual assessment of biomedical images of blood cells, which is time-consuming, tedious, and requires trained specialists. Consequently, the lack of an early, automatic, and effective leukemia recognition model is a major problem in hospitals. A few automated techniques based on deep learning (DL) and machine learning (ML) methodologies for leukemia cancer identification have been presented and offer remarkable and effective results. This study develops a Robust Multimodal Fusion of Transfer Learning Framework for Leukemia Cancer Detection and Classification (RMFTLF-LCDC) algorithm. The RMFTLF-LCDC system aims to identify and classify the existence of leukemia cancer in biomedical images. At first, the RMFTLF-LCDC model applies image preprocessing using a kernel correlation filter (KCF) to eliminate noise. For the feature extraction process, a multimodal fusion of CapsNet models, including RES-CapsNet, VGG-CapsNet, and GN-CapsNet, is implemented to improve the representation of features by providing more accurate initial information to subsequent capsule layers. In addition, a recurrent spiking neural network with a spiking convolutional block attention module (RSNN-CBAM) is used for the leukemia cancer detection process. At last, improved Harris hawk optimization (IHHO) based hyperparameter selection is executed to improve the classification outcomes of the RSNN-CBAM system. The efficiency of the RMFTLF-LCDC method has been validated by comprehensive studies using a benchmark image dataset. The numerical results show that the RMFTLF-LCDC method achieves better performance and scalability than other recent techniques.
DOI: https://doi.org/10.54216/FPA.170230
Vol. 17 Issue. 2 PP. 409-426, (2025)