This paper applies deep learning to the classification of X-ray images for osteoporosis detection. Osteoporosis, a condition of weakened bone, increases the risk of fractures; accurate early diagnosis is therefore essential for its management. We designed a hybrid method called the Fuzzy Logic Preprocessed Convolutional Neural Network (FLPCNN), in which fuzzy logic is applied at the preprocessing step to handle the uncertainty and imprecision of features extracted from X-ray images. The FLPCNN model was applied to a dataset of X-ray images, classifying them into osteoporotic and non-osteoporotic cases with a reported accuracy of 100%. Combining fuzzy logic preprocessing with Convolutional Neural Networks (CNNs) enhances the model's classification accuracy and yields more interpretable decisions. The proposed method offers a new way to reduce diagnostic errors and improve patient outcomes, opening avenues for further research into deep learning techniques in healthcare.
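The abstract does not detail the fuzzy preprocessing step. As a minimal sketch of the general idea, a ramp-shaped membership function can map raw pixel intensities to degrees in [0, 1] before the CNN sees them; the `low`/`high` cut-offs and the ramp shape here are illustrative assumptions, not the authors' design.

```python
def fuzzy_membership(pixel, low=50, high=200):
    """Ramp-shaped fuzzy membership: maps a grayscale intensity (0-255)
    to a degree in [0, 1], softening the uncertain mid-range instead of
    hard-thresholding it. low/high are illustrative, not from the paper."""
    if pixel <= low:
        return 0.0
    if pixel >= high:
        return 1.0
    return (pixel - low) / (high - low)

def fuzzify_image(image):
    # Apply the membership function to every pixel of a 2-D grayscale
    # image, producing a normalized map that a CNN could consume.
    return [[fuzzy_membership(p) for p in row] for row in image]

fuzzy = fuzzify_image([[10, 125, 230], [60, 150, 250]])
```

The appeal of such a step is that borderline intensities contribute partial membership rather than flipping across a hard threshold, which is one way to model the imprecision the abstract mentions.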
DOI: https://doi.org/10.54216/FPA.210201
Vol. 21 Issue. 2 PP. 01-21, (2026)
E-government implementation in developing countries faces obstacles and challenges that go far beyond technology. Using an enhanced PEST (Political, Economic, Social, and Technological) analysis combined with artificial intelligence algorithms, this study systematically evaluates Iraq's experience in adopting e-government services. A total of 1,081 Iraqi citizens were surveyed using mixed methods to quantify public acceptance of and willingness to use e-government services, as well as to identify the obstacles. Our investigation finds that data security (mean = 3.59–3.80), the political situation, economic distress, societal resistance to change, and shortfalls in technological infrastructure are all serious current challenges. The research used advanced statistical methods, including correlation analysis (technology-trust relationship of 0.634), regression models (R² = 0.542), factor analysis (KMO = 0.891), and a Multi-Layer Perceptron (MLP) neural network, which achieved 89.8% prediction accuracy for e-government acceptance. The AI model supported the conclusions drawn from the statistical tests, with Technology Readiness and Security Perception emerging as the two most significant predictors (23.4% and 19.8% importance, respectively). The findings also yield a novel methodological framework that integrates traditional statistical analysis with machine learning, providing concrete recommendations for policy makers in developing countries. The study's findings imply that successful e-government implementation requires a holistic approach that addresses political, economic, social, and technological aspects together. The composite PEST index score of 0.826 indicates widespread resistance on the ground, although the AI predictive model greatly facilitates forecasting for future e-government initiatives.
DOI: https://doi.org/10.54216/FPA.210202
Vol. 21 Issue. 2 PP. 22-41, (2026)
Managing fuel and energy resources (FER) efficiently is still a major challenge for energy-intensive industries like oil and gas. This paper presents a practical framework that combines mathematical models with easy-to-run algorithms to plan and control FER use in real time. Our twin goals are to cut costs and keep equipment dependable. We first outline the main parts of an energy-management system for an oil-and-gas operation, and then list the key tasks, factors, and decision criteria. The framework has two complementary paths: Path 1 relates FER use to production output via Lagrange optimization, while Path 2 fine-tunes forecasts with a simple least-squares correction based on metered data. Both paths are implemented as executable algorithms and tested on real electricity and fuel-gas datasets. The new method cuts monthly FER-planning errors by up to 80%, reducing penalties and helping equipment last longer.
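One plausible form of Path 2's least-squares correction is a linear bias fit between forecast and metered values. The abstract does not specify the exact formulation, so this sketch assumes the simple linear case, where the corrected plan is `a * forecast + b` with `a`, `b` chosen by ordinary least squares.

```python
def least_squares_correction(forecasts, metered):
    """Fit a linear correction m' = a*f + b that minimizes the squared
    error between corrected forecasts and metered consumption.
    Closed-form simple linear regression (illustrative, not the
    paper's exact algorithm)."""
    n = len(forecasts)
    mean_f = sum(forecasts) / n
    mean_m = sum(metered) / n
    cov = sum((f - mean_f) * (m - mean_m) for f, m in zip(forecasts, metered))
    var = sum((f - mean_f) ** 2 for f in forecasts)
    a = cov / var
    b = mean_m - a * mean_f
    return a, b

# Hypothetical monthly plan vs. metered values
a, b = least_squares_correction([100, 110, 120], [105, 115, 125])
corrected = a * 130 + b  # corrected forecast for a new plan value
```

With a systematic +5 offset in the toy data, the fit recovers `a = 1`, `b = 5`, so a plan of 130 is corrected to 135.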
DOI: https://doi.org/10.54216/FPA.210203
Vol. 21 Issue. 2 PP. 42-55, (2026)
The Fourth Industrial Revolution represents a shift to a more connected, digital world across all industries, including mining. Smart sensors can reduce site risks and fuel consumption, reduce equipment breakdowns, improve preventative maintenance, and improve equipment efficiency, including that of dump truck engines. Dump truck fuel efficiency is influenced by a number of real-world factors, including driver behavior, road and weather conditions, and vehicle specifications; potential engine failures and other aspects can also affect vehicle outcomes. By using dynamic on-road data to predict fuel consumption per trip, the industry can effectively minimize the expense associated with driving evaluations. Furthermore, data analysis provides valuable insight into the underlying causes of fuel consumption through examination of the input parameters. This paper proposes and evaluates novel models for predicting dump truck fuel consumption and engine failures in open-pit mining. These models combine features derived from data collected locally by dump truck sensors with their subsequent analysis. The model design consists of two separate components: a fuel consumption prediction architecture for open-pit mining trucks using an improved Long Short-Term Memory (LSTM) model, and a two-layer dense Deep Neural Network (DNN). Multi-delay Recurrent Neural Network (RNN) models were found to be efficient and accurate; the RNN architecture is applied to capture the cyclic components and complex patterns in engine consumption data. This research relied on essential factors (route, vehicle speed, engine revolutions, and engine load). The proposed model outperforms existing models, achieving MAE = 0.0210, RMSE = 0.0294, MSE = 0.0009, and R² = 0.9842, demonstrating that it can produce highly accurate predictions.
DOI: https://doi.org/10.54216/FPA.210204
Vol. 21 Issue. 2 PP. 56-69, (2026)
Generative AI has made significant strides over the past few years, and this progress has accelerated the development of deepfake techniques, which can unfortunately be used for harmful purposes, so it is essential to keep pace with this advancement. In this paper, we present an explainable weighted-average-fusion deepfake detection system that combines a Vision Transformer (ViT) and InceptionResNetV1 to improve classification accuracy. We also employ LIME and Grad-CAM++ to provide interpretability for the model's decisions. The ViT uses self-attention modules to extract features, whereas InceptionResNetV1 employs convolutional layers to extract spatial features. Grad-CAM++ highlights the regions that most influence classification, and LIME examines the regional contributions; together, these tools offer a deeper understanding of the model's decision-making process and improve transparency and reliability. Our fusion technique combines the outputs of both models by assigning specific weights that users can adjust interactively through the user interface. The performance of the fusion strategy is evaluated with accuracy, precision, recall, and F1-score. Our proposed model achieves a classification accuracy of 99.19%, surpassing both ViT and InceptionResNetV1 evaluated individually. To the best of our knowledge, this is the first deepfake detection model that combines a Vision Transformer and InceptionResNetV1 using weighted-average fusion with dual explainability techniques.
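The weighted-average fusion of the two branches reduces to a convex combination of their class probabilities. A minimal sketch, assuming each branch outputs softmax probabilities for [real, fake] and that `w_vit` is the user-adjustable weight from the interface:

```python
def weighted_fusion(p_vit, p_incep, w_vit=0.5):
    """Weighted-average fusion of two models' class probabilities.
    w_vit is the (user-adjustable) weight on the ViT branch; the
    remainder goes to InceptionResNetV1. Names are illustrative."""
    w_incep = 1.0 - w_vit
    return [w_vit * a + w_incep * b for a, b in zip(p_vit, p_incep)]

# Hypothetical per-branch probabilities for [real, fake]
fused = weighted_fusion([0.2, 0.8], [0.4, 0.6], w_vit=0.5)
label = "fake" if fused[1] > fused[0] else "real"
```

Because the weights sum to one, the fused vector remains a valid probability distribution, so the usual argmax decision rule still applies.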
DOI: https://doi.org/10.54216/FPA.210205
Vol. 21 Issue. 2 PP. 70-92, (2026)
Automated detection (AD) techniques are essential for early recognition of skin cancer. Hybrid models using feature fusion, which combine pre-trained CNNs with customized models, have shown superiority in real-time skin cancer pathology classification. This study combines VGG19 feature maps with a novel learning-network-based framework called AD_Net to enhance classification accuracy. VGG19 provides robust low-level feature extraction, while AD_Net extracts specialized patterns. This strategy yields a flexible and fast architecture suitable for real-time medical applications, and was applied to the classification of three of the most lethal skin cancer types. The model was trained and validated on the publicly available ISIC2019 dataset. To improve the interpretability of the model's predictions, explainable artificial intelligence (XAI) techniques, particularly Grad-CAM, were applied. Four baseline models (EfficientNetB0, MobileNetV2, Inception-v3, and VGG16) were used to assess the proposal's efficacy. The suggested model outperformed all four baselines with 99.18% accuracy, 99.0% precision, 99.0% recall, and 99.0% F1-score. Dermatologists and other medical professionals can use this method to detect skin cancer early.
DOI: https://doi.org/10.54216/FPA.210206
Vol. 21 Issue. 2 PP. 93-103, (2026)
Cloud communication faces numerous disruptive cybersecurity threats. Various issues related to such disruption have been the subject of previous research, but attack detection in blade servers (BS) in the cloud has not been studied. This paper therefore proposes an efficient intrusion detection system (IDS) framework for BS in the cloud. The framework uses Kerberos authentication-based exponential Mestre-Brainstrass curve cryptography, Sechsoftwave, and a sparsely centric gated recurrent unit (SSGRU). In this framework, cloud users are first registered to the network, and incoming data are then encrypted. The BS is used to balance the incoming loads, and the IDS is applied to detect attacks on the BS: the data are first pre-processed and the big data are handled within the IDS; features are then extracted, from which optimal features are selected. Attacked and normal blades are classified by the SSGRU classifier and then differentiated by generating a Sankey diagram. The attacked blades are isolated, and the normal blades are used for load balancing on the cloud. Results indicate that this model achieved 99.43% accuracy, demonstrating superior performance relative to other models.
DOI: https://doi.org/10.54216/FPA.210207
Vol. 21 Issue. 2 PP. 104-118, (2026)
The generation of cryptographic keys from biometric traits presents an opportunity to replace traditional password-based systems with mechanisms grounded in individual physiology. Nonetheless, reliably deriving secure and reproducible keys from modalities such as fingerprints and irises remains a significant challenge, particularly under varying input conditions and constraints on entropy. In this work, we present a hybrid dual-path deep learning architecture that combines Gated Linear Units (GLUs) with Squeeze-and-Excitation (SE) modules to extract rich, multimodal embeddings from iris and fingerprint images. The model, trained on an augmented cross-modal dataset, achieved a test accuracy of 99.92% and consistently high F1-scores across 50 subjects. To derive the cryptographic key, we apply a multi-stage pipeline that blends principal component projections, distance-based feature encoding, chaotic sequence modeling based on Lorenz-like dynamics, and a lightweight error-correcting routine. These representations are fused via a custom mixing function, producing a 512-bit binary vector subsequently refined using a SHA-256-based HKDF. Evaluation of the generated keys indicates near-ideal entropy, high inter-user separation, and strong avalanche characteristics. The system also passed multiple NIST statistical randomness tests and achieved a near-zero false acceptance rate. These results support the feasibility of the proposed method for secure and repeatable biometric key generation.
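The SHA-256-based HKDF refinement step mentioned above can be sketched with Python's standard library. The salt/info labels and the all-zero placeholder for the 512-bit fused vector are illustrative assumptions; the expand loop follows the standard RFC 5869 extract-then-expand construction.

```python
import hashlib
import hmac

def hkdf_sha256(ikm, salt=b"", info=b"", length=32):
    """RFC 5869 HKDF with SHA-256: extract a pseudorandom key from the
    input keying material (here, the fused biometric vector), then
    expand it to `length` bytes."""
    # Extract: PRK = HMAC-SHA256(salt, IKM); empty salt defaults to zeros.
    prk = hmac.new(salt or b"\x00" * 32, ikm, hashlib.sha256).digest()
    # Expand: T(i) = HMAC-SHA256(PRK, T(i-1) || info || i)
    okm, block, counter = b"", b"", 1
    while len(okm) < length:
        block = hmac.new(prk, block + info + bytes([counter]),
                         hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

raw = bytes(64)  # placeholder for the 512-bit fused biometric vector
key = hkdf_sha256(raw, salt=b"per-user-salt", info=b"biokey", length=32)
```

Deriving the key with a per-user salt and a context-specific `info` string keeps keys for different purposes cryptographically separated even when the underlying biometric vector is the same.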
DOI: https://doi.org/10.54216/FPA.210208
Vol. 21 Issue. 2 PP. 119-148, (2026)
Early and accurate diagnosis of Autism Spectrum Disorder (ASD) using neuroimaging has become increasingly viable with the advent of deep learning (DL) technologies. Current clinical diagnostic processes for ASD are largely subjective and time-intensive, creating an urgent need for objective diagnostic tools. This study presents a comprehensive comparison of three prominent functional Magnetic Resonance Imaging (fMRI) feature extraction methods, ALFF (Amplitude of Low-Frequency Fluctuations), fALFF (fractional ALFF), and ReHo (Regional Homogeneity), alongside structural Magnetic Resonance Imaging (sMRI) data, to evaluate their effectiveness in classifying ASD using various deep learning architectures. Preprocessed data from the ABIDE dataset were utilized, with uniform preprocessing pipelines applied, followed by feature extraction using the AAL (Automated Anatomical Labeling) atlas. Synthetic data augmentation was performed using Generative Adversarial Networks (GANs) to mitigate class imbalance. We trained and tuned multiple models, including 1-dimensional Convolutional Neural Networks (1D CNNs) with multi-head attention, Long Short-Term Memory (LSTM), and Vision Transformers (ViTs), with and without hyperparameter optimization. The findings indicate that the highest classification performance was attained using ALFF features with a hyperparameter-optimized CNN enhanced by attention mechanisms, achieving an accuracy of 0.83. Similarly, ReHo features yielded an equal accuracy of 0.83 when analyzed using a Vision Transformer (ViT) model. Across all experiments, functional neuroimaging features consistently outperformed structural features in classifying ASD. Notably, systematic hyperparameter tuning led to substantial improvements, particularly for ALFF-based models, where accuracy increased markedly from 59% to 83% using the CNN+Attention architecture. 
This study presents a comprehensive evaluation of feature types and model architectures across neuroimaging modalities, offering critical insights into their relative diagnostic value for ASD. The achieved accuracy of 83% using both ALFF and ReHo features marks a meaningful advancement in the field, setting realistic benchmarks for future research while adhering to stringent methodological rigor.
DOI: https://doi.org/10.54216/FPA.210209
Vol. 21 Issue. 2 PP. 149-158, (2026)
Vitreoretinal surgery depends heavily on good visualization of fragile retinal surfaces for accurate and safe operation. However, the image quality of current 3D heads-up display systems is often suboptimal, with low contrast or inadequate sharpness, which can reduce surgical accuracy and prolong operation time. Improving intraoperative image quality therefore remains a challenge for advancing surgical outcomes. In this paper, we propose a deep learning-based solution for optimal imaging-parameter guidance in 3D heads-up vitreoretinal surgery, seeking to improve vitreoretinal surface visibility during the operation. A hybrid model combining U-Net-based image enhancement with a Vision Transformer (ViT) for feature refinement was trained on 212 manually optimized still frames extracted from epiretinal membrane (ERM) surgical video. The algorithm's performance was quantitatively assessed using peak signal-to-noise ratio (PSNR) and the structural similarity index measure (SSIM), and qualitatively evaluated in terms of improvements in sharpness, brightness, and contrast. Moreover, the intraoperative usability of the optimized images was investigated through a survey. For in-vitro validation, 121 anonymized high-resolution ERM fundus images were analyzed with a 3D display coupled with the algorithm. The model's PSNR and SSIM were 36.45±4.90 and 0.91±0.05, respectively, indicating considerable improvements in image sharpness, brightness, and contrast. Visible ERM size and color contrast ratio were significantly enhanced in the optimized images in the in-vitro studies. The results demonstrate that the developed algorithm performs digital image enhancement effectively and shows promise for real-time application during 3D heads-up vitreoretinal surgery.
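For reference, PSNR on a pair of grayscale images is computed from the mean squared error between them; the following is a standard textbook definition (the toy 2×2 images are illustrative):

```python
import math

def psnr(original, enhanced, max_val=255.0):
    """Peak signal-to-noise ratio (dB) between two same-sized grayscale
    images given as 2-D lists of pixel values."""
    n = len(original) * len(original[0])
    mse = sum((o - e) ** 2
              for ro, re in zip(original, enhanced)
              for o, e in zip(ro, re)) / n
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)

a = [[100, 100], [100, 100]]
b = [[100, 102], [98, 100]]
# mse = (0 + 4 + 4 + 0) / 4 = 2, so psnr(a, b) = 10*log10(255**2 / 2)
```

A PSNR in the mid-30s dB range, as the model achieves, corresponds to a small per-pixel MSE relative to the 0–255 intensity scale.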
DOI: https://doi.org/10.54216/FPA.210210
Vol. 21 Issue. 2 PP. 159-169, (2026)
This paper examines the use of cryptography in block ciphers and assesses their security, with a focus on the Advanced Encryption Standard (AES). The study reviews key cryptanalytic techniques, including differential cryptanalysis (8.3%), linear cryptanalysis (4.2%), and integral cryptanalysis (4.2%); the percentages indicate each technique's relative frequency in the cryptanalysis research literature from 2015 to 2024, according to the literature survey. Side-channel attacks showed the highest practical success rates, with some studies reporting up to 50.0% effectiveness. Additionally, the study examines more sophisticated attack techniques such as meet-in-the-middle attacks, quantum-related threats, and biclique cryptanalysis (16.0%). Full-round AES resists a wide range of attack techniques thanks to its strong diffusion and confusion mechanisms and reliable key schedule. The study concludes that cryptanalysis is essential for strengthening encryption schemes against emerging threats, particularly those arising from quantum computing.
DOI: https://doi.org/10.54216/FPA.210211
Vol. 21 Issue. 2 PP. 170-187, (2026)
Wireless Sensor Networks (WSNs) play a crucial role in monitoring and data collection for various real-time applications, including environmental surveillance, industrial automation, and smart cities. However, achieving energy efficiency and timely data delivery remains a critical challenge, especially in time-sensitive scenarios. This research presents the development of an efficient cluster-based hybrid routing protocol that combines the strengths of Low-Energy Adaptive Clustering Hierarchy (LEACH) and Threshold-sensitive Energy Efficient Network (TEEN) protocols to address these challenges. The proposed Hybrid LEACH-TEEN protocol dynamically adapts to both periodic and event-driven data transmission needs by integrating LEACH’s randomized cluster-head selection and TEEN’s threshold-based data transmission mechanism. This hybrid approach significantly reduces redundant transmissions and optimizes energy consumption across the network. Extensive simulations were conducted to evaluate the protocol’s performance in terms of network lifetime, stability period, energy consumption, and the number of alive nodes over time. Results demonstrate that the Hybrid protocol outperforms traditional LEACH and TEEN protocols, particularly in time-critical applications, by ensuring prompt response to critical events while maintaining energy-efficient operation. This work contributes to the design of intelligent and adaptive routing strategies for next-generation WSNs.
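TEEN's threshold-based transmission rule, which the hybrid protocol inherits, can be illustrated with a small sketch: a node reports only when the sensed value crosses a hard threshold and has changed by at least a soft threshold since the last report. The specific threshold values and the `should_transmit` helper below are illustrative, not from the paper.

```python
def should_transmit(value, last_sent, hard_threshold, soft_threshold):
    """TEEN-style rule: transmit only when the sensed value exceeds the
    hard threshold AND has changed by at least the soft threshold since
    the last report, suppressing redundant transmissions."""
    if value < hard_threshold:
        return False
    if last_sent is None:
        return True  # first crossing of the hard threshold
    return abs(value - last_sent) >= soft_threshold

# Temperature readings; hard threshold 40, soft threshold 2 (illustrative)
readings = [38, 41, 41.5, 44]
last, sent = None, []
for r in readings:
    if should_transmit(r, last, 40, 2):
        sent.append(r)
        last = r
# 38 is below the hard threshold; 41.5 changed by less than 2 -> suppressed
```

This is what cuts redundant traffic: of four readings, only two trigger a radio transmission.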
DOI: https://doi.org/10.54216/FPA.210212
Vol. 21 Issue. 2 PP. 188-198, (2026)
To facilitate the practical deployment of robotics, efficient path planning is essential to ensure that robotic movement is accurate, safe, and goal-oriented. This study explores new approaches to map adaptation and path optimization for robot navigation between specified locations. The initial phase of the research involves designing an environment that enables the safe operation of robots. Subsequently, the collected data is processed to construct a graph on which Dijkstra’s algorithm is employed to determine the shortest path between key points. When multiple paths are available, the algorithm selects the most efficient one, while ensuring safety in point-to-point transitions and when navigating around obstacles. In addition, an enhanced method is introduced to improve the safety of path planning: it expands the original trajectory to incorporate a safety buffer equal to half of the robot’s safety radius, thus maintaining a safe distance along the traveled route. The key contribution of this work lies in the development of novel maps featuring secure pathways, which can be utilized by optimization algorithms to improve navigation in unfamiliar terrains. Experimental results using PRM* and RRT* validate the accuracy of these maps, especially in complex, maze-like environments.
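Dijkstra's algorithm over such a graph of key points can be sketched as follows; the toy four-node graph is illustrative, standing in for the waypoint graph built from the environment data.

```python
import heapq

def dijkstra(graph, start, goal):
    """Shortest path on a weighted digraph {node: [(neighbor, cost), ...]}
    using a binary-heap priority queue."""
    dist = {start: 0}
    prev = {}
    pq = [(0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    # Reconstruct the path by walking predecessors back from the goal.
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1], dist[goal]

waypoints = {"A": [("B", 1), ("C", 4)], "B": [("C", 1), ("D", 5)],
             "C": [("D", 1)], "D": []}
path, cost = dijkstra(waypoints, "A", "D")
# path -> ["A", "B", "C", "D"], cost -> 3
```

The safety-buffer step described above would then operate on the returned `path`, inflating each segment by half the robot's safety radius before execution.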
DOI: https://doi.org/10.54216/FPA.210213
Vol. 21 Issue. 2 PP. 199-210, (2026)
Retinopathy of prematurity (ROP) remains the leading cause of blindness in children. Detection and treatment of this disease depend mainly on subjective evaluation of the features of retinal blood vessels, a method that is not only time-consuming but also prone to error. The increasing number of cases creates an urgent need for automated models that improve the accuracy and efficiency of diagnosis and treatment. This paper presents a method for early detection of ROP using the Swin Transformer, a hierarchical vision transformer architecture. The work focuses solely on the screening stages of ROP, as documented between 2015 and 2020, based on a dataset of 3,720 retinal images from preterm infants kindly made available by the Al-Amal Eye Center in Baghdad, Iraq. The proposed model achieved a classification accuracy of 98.67% on this clinical ROP dataset. The results highlight the value of recent deep learning methods in enhancing early detection techniques, ultimately leading to improved clinical outcomes for at-risk infants.
DOI: https://doi.org/10.54216/FPA.210215
Vol. 21 Issue. 2 PP. 228-240, (2026)
This study investigates the influence of age on second language acquisition by comparing language learning outcomes between young learners (aged 8–12) and adult learners (aged 25–40). Drawing on both cognitive and sociolinguistic perspectives, and leveraging data fusion techniques that integrate test results, classroom observations, and learner interviews, the research examines differences in pronunciation, grammar acquisition, vocabulary retention, and communicative competence. The fusion of multiple data modalities ensures a more holistic view of learner performance. Findings indicate that young learners exhibit greater native-like pronunciation and long-term retention, while adult learners outperform in grammatical accuracy and metalinguistic awareness. Motivational factors and learning environments also played significant roles. The study concludes that while age affects specific aspects of language learning, no age group holds a universal advantage. Data fusion-based insights highlight the need for age-sensitive instructional strategies that cater to the cognitive and emotional needs of learners at different stages.
DOI: https://doi.org/10.54216/FPA.210214
Vol. 21 Issue. 2 PP. 211-227, (2026)
The complex nature, non-linear dynamics, and inherent volatility of stock markets make accurate prediction difficult. Recent developments in the area have shown the efficiency of machine learning methodologies in predicting financial stock prices. However, emerging markets such as Iraq face additional challenges due to the lack of fundamental data needed to support predictive analysis. In this study, we present a novel framework that addresses this issue by predicting the next-day closing price of the Iraq Stock Exchange (ISX) main index, using only the available historical closing prices to engineer 12 technical indicators. The goal is to compensate for the missing Open, High, and Low price data while improving prediction accuracy. We used four machine-learning algorithms, namely Random Forest (RF), Support Vector Machine (SVM), Artificial Neural Network (ANN), and K-Nearest Neighbor (KNN), optimized via grid-search hyperparameter tuning. Model performance was evaluated using Root Mean Square Error (RMSE), Mean Absolute Percentage Error (MAPE), and the Coefficient of Determination (R²). The comparative analysis showed that the SVM with a linear kernel yielded the best performance (RMSE = 16.25, MAPE = 1.15, R² = 0.989), followed closely by the ANN (RMSE = 18.25), RF (RMSE = 26.76), and then KNN (RMSE = 55.77). The study makes two main contributions: (1) demonstrating the feasibility of using engineered features to achieve reliable predictions in markets with incomplete data, and (2) highlighting the critical role of hyperparameter optimization in enhancing model accuracy. The proposed framework provides a practical model for predicting stock prices in resource-constrained emerging markets.
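The paper's exact 12 indicators are not listed in the abstract; as an illustration of how technical indicators can be engineered from closing prices alone, here are three common close-only features (simple moving average, momentum, and RSI), which are assumptions about the kind of feature used, not the authors' precise set.

```python
def sma(closes, n):
    """Simple moving average over the last n closing prices."""
    return sum(closes[-n:]) / n

def momentum(closes, n):
    """n-day momentum: today's close minus the close n days ago."""
    return closes[-1] - closes[-1 - n]

def rsi(closes, n=14):
    """Relative Strength Index computed from closing prices alone."""
    gains, losses = [], []
    for prev, cur in zip(closes[-n - 1:-1], closes[-n:]):
        change = cur - prev
        gains.append(max(change, 0.0))
        losses.append(max(-change, 0.0))
    avg_gain, avg_loss = sum(gains) / n, sum(losses) / n
    if avg_loss == 0:
        return 100.0  # all moves were gains
    rs = avg_gain / avg_loss
    return 100.0 - 100.0 / (1.0 + rs)

closes = [100, 101, 103, 102, 105, 107, 106]
features = {"sma_3": sma(closes, 3), "mom_2": momentum(closes, 2),
            "rsi_6": rsi(closes, 6)}
```

Each trading day's feature vector would then be fed to the RF/SVM/ANN/KNN regressors in place of the unavailable Open/High/Low inputs.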
DOI: https://doi.org/10.54216/FPA.210216
Vol. 21 Issue. 2 PP. 241-258, (2026)
This paper addresses the challenge of predicting and analyzing electricity consumption patterns in Tetouan, Morocco, using time-series data. The dataset consists of 52,416 observations with 9 features, collected from the SCADA system of electricity consumption across three zones. The primary goal is to enhance forecasting accuracy and optimize prediction models through machine learning (ML) algorithms, including both time-series models and advanced optimization techniques. We compare the performance of several baseline ML models, such as BiLSTM and Continuous Time Stochastic Modelling (CTSM), with their optimized versions, utilizing optimization algorithms like Greylag Goose Optimization (GGO), Bat Algorithm (BA), and Whale Optimization Algorithm (WOA). The results show that the optimized CTSM model, using GGO, achieved substantial improvements, including the lowest Mean Squared Error (MSE) of 7.09E-07 and the highest R² of 0.990, demonstrating superior accuracy and stability. The contributions of this work include (i) benchmarking various ML models for time-series forecasting, (ii) introducing the use of optimized CTSM with meta-heuristics, and (iii) evaluating model performance using a comprehensive set of statistical metrics.
DOI: https://doi.org/10.54216/FPA.210217
Vol. 21 Issue. 2 PP. 259-282, (2026)
This study addresses the challenge of smart-home energy forecasting across multiple appliances under varying temperature and seasonal regimes, aiming to improve demand planning and household energy efficiency. The analysis leverages a 100,000-row dataset from Kaggle, encompassing appliance type, time of consumption, outdoor temperature, season, and household size. The study benchmarks several recurrent neural network models, including Long Short-Term Memory (LSTM), Bidirectional LSTM (BiLSTM), Gated Recurrent Unit (GRU), and Bidirectional RNN (BiRNN), as well as a feedforward Artificial Neural Network (ANN). A novel enhancement, the Evolutionary Attention-based LSTM (EALSTM), is introduced, and its hyperparameters are optimized using the Greylag Goose Optimization (GGO) algorithm. The performance of GGO-optimized EALSTM is compared to other metaheuristics, such as Differential Evolution (DE), Genetic Algorithm (GA), Quantum-Inspired Optimization (QIO), JAYA, Bat Algorithm (BA), and Stochastic Fractal Search (SFS). The results indicate that GGO-optimized EALSTM outperforms all other models, achieving superior accuracy across multiple metrics, including MSE, RMSE, MAE, r, R², RRMSE, NSE, and WI. Key contributions of the paper include (i) the establishment of an appliance- and season-aware forecasting benchmark, (ii) a comprehensive optimizer comparison for EALSTM using GGO, and (iii) the provision of actionable visual analytics to enhance the understanding of energy demand patterns and model errors.
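GGO, DE, GA, and the other metaheuristics named above differ in their update rules, but all follow the same population-improve loop over candidate hyperparameter vectors. The skeleton below is a generic illustration of that loop, not any specific algorithm from the paper; the Gaussian move-toward-best rule, the toy objective, and the (learning-rate, dropout) bounds are all invented for the example.

```python
import random

def metaheuristic_search(fitness, bounds, pop_size=10, iters=30, seed=0):
    """Generic population-based metaheuristic skeleton: keep a population
    of candidate hyperparameter vectors, nudge each toward the
    best-so-far with Gaussian perturbations, and accept only moves that
    lower the fitness (validation error)."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    best = min(pop, key=fitness)
    for _ in range(iters):
        for i, cand in enumerate(pop):
            # Move the candidate a random fraction of the way toward `best`.
            trial = [c + rng.gauss(0.5, 0.25) * (b - c)
                     for c, b in zip(cand, best)]
            # Clip back into the search bounds.
            trial = [min(max(t, lo), hi)
                     for t, (lo, hi) in zip(trial, bounds)]
            if fitness(trial) < fitness(cand):
                pop[i] = trial
        best = min(pop + [best], key=fitness)
    return best

# Toy objective standing in for validation error over (learning-rate, dropout)
obj = lambda x: (x[0] - 0.01) ** 2 + (x[1] - 0.3) ** 2
best = metaheuristic_search(obj, [(0.0001, 0.1), (0.0, 0.9)])
```

In the paper's setting, `fitness` would train an EALSTM with the candidate hyperparameters and return its validation MSE, which is why these searches are expensive and why optimizer choice matters.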
DOI: https://doi.org/10.54216/FPA.210218
Vol. 21 Issue. 2 PP. 283-305, (2026)
Forecasting the energy consumption of heating, ventilation, and air conditioning (HVAC) chillers is vital for enhancing building efficiency, reducing operating costs, and supporting sustainability goals. However, the task remains challenging due to nonlinear system dynamics, strong dependence on weather conditions, and the scarcity of high-quality real-world datasets. In this work, we employ the Chiller Energy Data from Kaggle, which contains 13,561 cleaned records collected between August 2019 and June 2020, incorporating ten operational and meteorological features. Six baseline models, namely the Evolutionary Attention-based Long Short-Term Memory (EALSTM), Bidirectional LSTM (BILSTM), standard LSTM, Gated Recurrent Unit (GRU), Temporal Convolutional Network (TCN), and Artificial Neural Network (ANN), are first benchmarked to assess their forecasting capability. To further improve predictive accuracy, we integrate EALSTM with ten meta-heuristic optimization algorithms, focusing on the Greylag Goose Optimization Algorithm (GGO) and comparing it with alternatives such as Harris Hawks Optimization (HHO), Artificial Physics Optimization (APO), Simulated Annealing Optimization (SAO), Grey Wolf Optimizer (GWO), and others. The optimized GGO+EALSTM framework achieves state-of-the-art performance with a mean squared error of 6.83×10⁻⁶ and an R² value of 0.98, reflecting a 96% reduction in error relative to simple feedforward models and significant improvements over other recurrent networks and optimizer-enhanced variants. The main contributions of this study include a structured benchmarking of neural architectures for chiller forecasting, the first systematic comparison of ten meta-heuristic optimizers applied to deep learning in this domain, and a visualization-based error analysis that strengthens interpretability and supports practical deployment.
These results establish optimization-enhanced EALSTM as a robust and generalizable framework for HVAC energy forecasting, paving the way toward more efficient, reliable, and sustainable building energy management.
DOI: https://doi.org/10.54216/FPA.210219
Vol. 21 Issue. 2 PP. 306-326, (2026)
In recent years, EEG-based recognition and characterization of brain states has received much interest due to advances in deep learning and machine learning methods. EEG is non-invasive and inexpensive, and provides detailed information about a patient's brain activity and condition. A series of studies has explored artificial intelligence (AI) models, including convolutional neural networks (CNNs), long short-term memory networks (LSTMs), and ensemble approaches, that recognize mental states accurately in a large number of cases. The literature focuses on introducing robust, explainable models as well as on multimodal data to boost classification accuracy and reliability. In this work, a 1D CNN and an LSTM network were trained separately and in a hybrid architecture (CNN-LSTM) to classify EEG signals. The models were evaluated using accuracy, precision, recall, F1-score, and confusion matrix analysis.
DOI: https://doi.org/10.54216/FPA.210220
Vol. 21 Issue. 2 PP. 327-335, (2026)
Floods are among the most devastating natural disasters, causing widespread damage to infrastructure, homes, and human lives. Rapid assessment of flood severity is critical for effective disaster response and resource allocation. This study explores several deep learning approaches for flood water level classification using UAV imagery. A curated dataset of 2,000 UAV images from diverse regions, including India, the United States, and Brazil, was developed and augmented to improve generalization. Multiple architectures were evaluated, including pre-trained CNNs, ResNet50v2, MobileNetv2, Vision Transformers, and Swin Transformers, with and without the Convolutional Block Attention Module (CBAM) and adaptive learning strategies. Experimental results reveal that integrating Vision Transformers with CBAM achieves the highest classification accuracy of 90.6%, while a hybrid CNN–Vision Transformer model further improves performance to 92.3%. These findings highlight the potential of attention-based hybrid models for precise flood severity mapping. The proposed framework can aid rescue teams and disaster management authorities by prioritizing high-risk areas, enabling faster response and optimized allocation of resources during emergency operations.
DOI: https://doi.org/10.54216/FPA.210221
Vol. 21 Issue. 2 PP. 336-352, (2026)
Health monitoring systems are an increasingly significant issue and research subject. Several application areas, such as military, home care, hospital, athletic training, and emergency control systems, have been established for health monitoring research. Athletes' lives require a great deal of activity and exercise for fitness and health. The ability to monitor an athlete's vital signs, which reflect the individual's physical and physiological state, particularly during training, is essential for both the athlete and the coach in order to avoid overtraining, injuries, and illness, or to adjust intensity and duration according to the measured data; this is achieved with wearable monitoring devices connected through wireless communication technologies. In this model, the use of wireless technologies means that devices worn by athletes exchange data with other wireless nodes in real time, forming a small communication network. The use of wireless sensor communication, and the need for sensors to communicate with one another, has led to the creation of wireless sensor networks (WSNs) and wireless body area networks (WBANs). This paper presents a wireless sensor network-based athlete health monitoring (WSN-AHM) method and concentrates on its development phases. As a remote and mobile health monitoring solution, it can provide healthcare service providers with a useful remote monitoring tool that reduces the cost of their services. WSNs and their communication technologies and standards can be used in these monitoring applications, with an emphasis on sporting activities, through a complete and comparative presentation of well-known communication protocols.
DOI: https://doi.org/10.54216/FPA.210222
Vol. 21 Issue. 2 PP. 353-368, (2026)
Recently, irrigation management has been considered one of the most significant areas of research in smart vertical farming. Given the scarcity of freshwater sources, it is essential to optimize freshwater usage in smart vertical farming management. Soil moisture and temperature data need to be appropriately examined and analyzed to predict the irrigation water level on a smart farming platform. Hence, in this work, Internet of Things (IoT) sensors have been utilized to collect and monitor soil moisture, ambient temperature, and humidity data effectively. The collected sensor information has been analyzed to predict the optimum level of freshwater usage using Grey Wolf Optimizer-integrated recurrent network models. This approach successfully analyzes the sensor data and predicts the required irrigation level based on motor ON and OFF conditions. The generated sensor data were evaluated using a Keras model implemented in Python, and performance was assessed in terms of accuracy. The model obtained a maximum accuracy of 0.995 (99.5%) in forecasting the optimum irrigation level. The proposed system utilizes lower voltage to reduce power consumption by up to 35% in the irrigation process while maintaining 99.5% forecasting accuracy.
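The Grey Wolf Optimizer referenced above is a population-based metaheuristic in which candidate solutions move toward the three best "wolves" (alpha, beta, delta) while an exploration coefficient decays from 2 to 0. A generic minimal implementation is sketched below (not the authors' integration with the recurrent network; the fitness function, population size, and iteration count are illustrative):

```python
import numpy as np

def grey_wolf_optimizer(fitness, dim, bounds, n_wolves=20, n_iter=100, seed=0):
    """Minimize `fitness` over `dim` dimensions within `bounds` = (lo, hi)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    wolves = rng.uniform(lo, hi, size=(n_wolves, dim))
    for t in range(n_iter):
        scores = np.apply_along_axis(fitness, 1, wolves)
        order = np.argsort(scores)
        alpha, beta, delta = wolves[order[:3]]    # three best solutions
        a = 2.0 - 2.0 * t / n_iter                # decays linearly from 2 to 0
        for i in range(n_wolves):
            new_pos = np.zeros(dim)
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A = 2 * a * r1 - a                # large |A|: explore; small: exploit
                C = 2 * r2
                D = np.abs(C * leader - wolves[i])
                new_pos += leader - A * D
            wolves[i] = np.clip(new_pos / 3.0, lo, hi)
    scores = np.apply_along_axis(fitness, 1, wolves)
    return wolves[np.argmin(scores)]

# Example: minimize the sphere function; the optimum is the origin
best = grey_wolf_optimizer(lambda x: np.sum(x ** 2), dim=5, bounds=(-10, 10))
```

In a setting like the one described above, `fitness` would instead score a recurrent network's validation error for a given set of hyperparameters.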
DOI: https://doi.org/10.54216/FPA.210223
Vol. 21 Issue. 2 PP. 369-382, (2026)
This study proposes an Intelligent Tutoring System (ITS) to enhance hand-knitting skills among Home Economics students through AI-driven personalized learning, addressing the limitations of traditional generic methods. The system integrates computer vision, adaptive algorithms, and interactive tutorials to provide real-time feedback and track progress. A study involving 60 students (30 control, 30 experimental) showed the ITS group achieved significantly higher post-test scores, confirming improved proficiency and engagement. Results reveal that the ITS effectively accelerates skill acquisition and deepens understanding compared to conventional instruction.
DOI: https://doi.org/10.54216/FPA.210224
Vol. 21 Issue. 2 PP. 383-395, (2026)
Sign language is a vital communication medium for hearing-impaired individuals, combining manual gestures with non-manual signs such as facial expressions and body movements, often requiring both hands and sequential actions. Recently, automatic Sign Language Recognition (SLR) has gained increasing attention, with machine learning and deep learning systems achieving competitive performance. While convolutional neural networks have been widely employed owing to their effectiveness in image-based recognition tasks, existing methods often struggle with efficiency, adaptability, and real-time deployment. This paper proposes an Internet of Things-integrated deep learning model for real-time SLR to enhance communication between hearing-impaired individuals and non-signers. The framework employs IoT-based wearable sensors to capture hand and finger movements, followed by Sobel filtering for noise reduction. MobileNetV3 is applied for lightweight feature extraction, while a Variational AutoEncoder enables robust sign detection. To further improve performance, an Improved Sparrow Search Algorithm is introduced for hyperparameter tuning, constituting the novelty of this work. Experimental results show that the proposed framework achieves an outstanding accuracy of 99.05% compared with state-of-the-art systems, validating its robustness and effectiveness for real-time SLR applications. Future work will explore large-scale deployment and multi-language adaptability.
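The Sobel filtering step named above can be illustrated with a minimal NumPy implementation (a generic sketch of the standard 3x3 Sobel operator; the paper's exact preprocessing pipeline is not specified beyond the filter name):

```python
import numpy as np

def sobel_magnitude(img):
    """Gradient magnitude of a 2-D grayscale image via 3x3 Sobel kernels."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T                                 # vertical-gradient kernel
    h, w = img.shape
    padded = np.pad(img.astype(float), 1, mode="edge")
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            patch = padded[i:i + 3, j:j + 3]  # 3x3 neighborhood
            gx[i, j] = np.sum(kx * patch)
            gy[i, j] = np.sum(ky * patch)
    return np.hypot(gx, gy)                   # combined gradient magnitude

# A vertical step edge: the filter responds at the boundary columns only
img = np.zeros((5, 6))
img[:, 3:] = 1.0
mag = sobel_magnitude(img)
```

Strictly speaking, Sobel is an edge-enhancement operator; its smoothing component (the [1, 2, 1] weighting) is what gives it the mild noise-suppression effect the abstract refers to.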
DOI: https://doi.org/10.54216/FPA.210225
Vol. 21 Issue. 2 PP. 396-412, (2026)