This study delves into the relationship between cyber-physical systems (CPS) and economic security, with particular emphasis on how networking technologies facilitate more efficient data integration. It investigates how CPS adoption is reshaping national economies by influencing productivity levels, altering labor market structures, and introducing new cybersecurity challenges. Employing a hybrid research design that merges cross-sectional data evaluation with expert consultations, the research offers a comprehensive view of the implications of CPS implementation for sectoral productivity, employment trends, and macroeconomic resilience. CPS are positioned in the study as strategic innovations powered by data intelligence, underlining both their promising opportunities and associated threats. The findings support the development of informed policy measures that aim to enhance benefits while reducing potential risks. Ultimately, the work contributes to the evolving discourse on CPS by offering a balanced analysis of their socio-economic impacts and outlining actionable recommendations for decision-makers and industry stakeholders to capitalize on CPS innovations effectively.
Doi: https://doi.org/10.54216/FPA.200101
Vol. 20 Issue. 1 PP. 01-11, (2025)
Data compression technologies play a major role in areas where efficient data storage and transmission are essential. Data compression is the science of reducing redundant data to a compact form that can be stored and transmitted safely. Unicode, in turn, is the global standard for representing text and symbols in computers. Its basic elements are code points, each of which represents a specific symbol; Unicode provides a unified way to map and manage these points to ensure consistent representation and interpretation of text data across different systems, platforms, and languages. This paper proposes a method to compress Arabic text based on Unicode ligatures, which join characters together. The method replaces two or more Arabic Unicode characters with a single Arabic ligature code point wherever the sequence appears in the text file, eliminating the need for a separate coding or decoding step. The sizes of the original and output text files are compared to obtain the compression percentage. The selected dataset, collected from Kaggle, comprises Modern Standard Arabic text (Arabic news) and Classical Arabic text (Arabic Holy and Honorific texts). The compression percentage depends on the frequency of ligature characters in the Arabic documents. Unfortunately, the results were not promising: the method compressed the files by only a small percentage (6.71% and 12.82% for the Arabic news and Arabic Holy texts, respectively). We believe the proposed method can be improved in future work by using a hybrid text-compression technique and by considering other properties of Arabic Unicode.
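As a rough illustration of the substitution idea, the Python sketch below replaces a few multi-character Arabic sequences with their single-code-point presentation-form ligatures and measures the byte-level saving; the mapping table is a small assumed sample, not the paper's full ligature set.

# Minimal sketch of ligature substitution; the mapping below is an assumed
# sample of Unicode Arabic presentation forms, not the paper's full table.
LIGATURES = {
    "\u0644\u0627": "\uFEFB",  # LAM + ALEF       -> ARABIC LIGATURE LAM WITH ALEF
    "\u0644\u0623": "\uFEF7",  # LAM + ALEF-HAMZA -> LAM WITH ALEF WITH HAMZA ABOVE
    "\u0644\u0622": "\uFEF5",  # LAM + ALEF-MADDA -> LAM WITH ALEF WITH MADDA ABOVE
}

def compress(text: str) -> str:
    # Replace each multi-character sequence with its single-code-point ligature.
    for seq, lig in LIGATURES.items():
        text = text.replace(seq, lig)
    return text

def compression_percentage(original: str, compressed: str) -> float:
    # Size reduction measured on the UTF-8 encoded byte lengths.
    o, c = len(original.encode("utf-8")), len(compressed.encode("utf-8"))
    return 100.0 * (o - c) / o

Each two-character sequence occupies 4 bytes in UTF-8 while the ligature occupies 3, so the achievable saving is bounded by how often the mapped sequences occur, consistent with the modest percentages reported above.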
Doi: https://doi.org/10.54216/FPA.200102
Vol. 20 Issue. 1 PP. 12-23, (2025)
The hospitality industry is rapidly evolving, with intense competition among organizations striving to attract and retain customers. One of the key factors influencing customer satisfaction and loyalty is the emotional intelligence of employees. Higher emotional intelligence fosters positive behavior, which enhances customer experience and engagement. This study aims to identify and prioritize the most critical factors and sub-factors of emotional intelligence in the private hospitality sector. Data for this research has been collected from hospitality businesses in the Lucknow region. The prioritization process is carried out using the Analytical Hierarchy Process (AHP), a widely used multi-criteria decision-making (MCDM) technique. The rankings derived from AHP provide valuable insights into the key attributes of emotional intelligence that employees should focus on for professional growth. By understanding these priorities, hospitality employees can enhance their emotional intelligence, leading to improved customer interactions, better teamwork, and overall organizational success.
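To make the AHP step concrete, here is a minimal Python sketch of deriving priority weights and a consistency ratio from a pairwise-comparison matrix; the 3x3 matrix and the factor names are illustrative assumptions, not the paper's actual judgments.

# Minimal AHP sketch over an assumed 3x3 pairwise-comparison matrix of
# hypothetical emotional-intelligence factors.
import numpy as np

A = np.array([   # A[i, j] = how strongly factor i is preferred over factor j
    [1.0, 3.0, 5.0],   # e.g. self-awareness vs. empathy vs. self-regulation
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

# Priority vector via the geometric-mean (row) approximation.
w = np.prod(A, axis=1) ** (1 / A.shape[0])
w /= w.sum()

# Consistency ratio: Saaty's random index RI = 0.58 for n = 3.
lam = float(np.mean((A @ w) / w))
CI = (lam - A.shape[0]) / (A.shape[0] - 1)
CR = CI / 0.58
print(w, CR)  # weights sum to 1; CR < 0.10 indicates acceptable consistency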
Doi: https://doi.org/10.54216/FPA.200103
Vol. 20 Issue. 1 PP. 24-33, (2025)
Visible light communication (VLC) integrated with non-orthogonal multiple access (NOMA) is a promising technique to meet the increasing demand for high-capacity, energy-efficient communication in forthcoming 6G networks. This work thoroughly evaluates VLC-NOMA systems and emphasizes the incorporation of machine learning (ML) approaches to improve spectral efficiency, bit error rate, and resource allocation. A search protocol based on the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) produced 244 records, of which 45 were selected for comprehensive study. The review identified obstacles including scalability, computational complexity, and insufficient experimental validation. A comparative examination elucidated the strengths and limits of ML methodologies, including classical machine learning, deep neural networks, and federated learning, in addressing these difficulties. The study identified key research gaps, proposed future directions, and emphasized the need for hybrid optimization techniques, lightweight ML models, and real-world implementations. The findings contribute to the development of robust, scalable VLC-NOMA systems for 6G applications.
Doi: https://doi.org/10.54216/FPA.200104
Vol. 20 Issue. 1 PP. 34-54, (2025)
Business executives and scholars maintain that Artificial Intelligence (AI) ranks alongside pivotal human inventions and advancements such as fire, electricity, and the incandescent light bulb. By harnessing AI technologies, academic institutions can augment pedagogical approaches, elevate the caliber of education, and furnish learners with novel avenues to cultivate their proficiencies and competencies. At the same time, the implementation of AI in higher education has provoked deliberation over whether institutions ought to prohibit its use entirely or promote its integration to enhance educational outcomes. Despite the escalating acknowledgment of AI's importance in the educational sphere, its adoption and impacts remain insufficiently explored. To fill this gap, data were collected from 300 respondents, building on the Unified Theory of Acceptance and Use of Technology (UTAUT) model. We contribute empirically to the existing literature by clarifying the fundamental factors that affect the adoption of AI within higher education and by scrutinizing the consequences of AI for knowledge acquisition. Moreover, we elucidate the moderating effects of workload and temporal limitations. The findings provide substantial insights into the incorporation of AI for knowledge acquisition in higher education and are anticipated to provoke further scholarly discussion.
Doi: https://doi.org/10.54216/FPA.200105
Vol. 20 Issue. 1 PP. 55-67, (2025)
This study introduces a trainable object detection model that can be taught to detect an object of a given class within an unconstrained scene. The system is applied to the detection of relic images and relies on the calculation of Local 3-bit Binary Patterns (3bit-LBP). Key highlights of the work include the integration of Multi-Support Vector Machine Classification (MSVMC) and integral-image computation. The experimental outcomes indicate that the 3bit-LBP method is superior to other methods in accuracy and stability, especially on images with varying illumination and object rotation. A comparative performance evaluation further shows that the presented system achieves better detection rates than conventional strategies, demonstrating its efficiency in real-world applications. The implications of the results extend beyond relic detection: the work advances understanding of how to further improve object recognition algorithms in image recognition systems.
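For orientation, the sketch below computes the classic 8-bit local binary pattern that the paper's 3bit-LBP variant builds on; the exact 3-bit reduction used by the authors is not specified here, so only the standard form is shown.

# Minimal sketch of the classic 8-neighbor LBP; the paper's 3-bit variant
# presumably quantizes these codes further (assumption).
import numpy as np

def lbp_8bit(img: np.ndarray) -> np.ndarray:
    # img: 2-D grayscale array; returns one LBP code per interior pixel.
    c = img[1:-1, 1:-1]
    neighbors = [img[0:-2, 0:-2], img[0:-2, 1:-1], img[0:-2, 2:],
                 img[1:-1, 2:],   img[2:,  2:],    img[2:,  1:-1],
                 img[2:,  0:-2],  img[1:-1, 0:-2]]
    codes = np.zeros_like(c, dtype=np.uint8)
    for bit, n in enumerate(neighbors):
        # Set the bit when the neighbor is at least as bright as the center.
        codes |= (n >= c).astype(np.uint8) << bit
    return codes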
Doi: https://doi.org/10.54216/FPA.200106
Vol. 20 Issue. 1 PP. 68-76, (2025)
Genetic diseases are caused by anomalies in a person's DNA. These abnormalities may be larger-scale chromosomal mutations or irregularities in a particular gene. Such diseases significantly influence body functions and systems, and they may be hereditary or arise spontaneously. Traditional approaches such as genetic testing and karyotyping might fail to identify complex or rare variants, necessitating more detailed techniques such as whole-genome sequencing (WGS). Despite important technological advances in recent decades, rare genetic diseases continue to pose problems, with a significant portion of patients (50–66%) remaining undiagnosed on the basis of clinical presentation alone. An accurate diagnosis is important for providing adequate support to patients and their families, even in the absence of specific therapeutic interventions. Machine learning (ML), and in particular its deep learning (DL) subspecialty, has been used to build clinically relevant prediction tools in other medical areas; for mental disorders, ML methods have shown major promise in predicting diagnosis or prognosis. In this manuscript, we design and develop a Hybrid Deep Learning and Metaheuristic Optimization Algorithm for Detecting Genetic Disorders (HDLMOA-DGD) model. The main goal of the proposed HDLMOA-DGD algorithm is to detect and classify genetic disorders using an advanced deep-learning model. First, Z-score normalization is employed in the data pre-processing phase to convert the input data into a uniform format. The HDLMOA-DGD model then applies a hybrid deep learning model combining a temporal convolutional network, a bidirectional long short-term memory network, and a self-attention mechanism (TCN-BiLSTM-SA) for the classification process. Finally, a modified gannet optimization algorithm (MGOA)-based hyperparameter selection process is performed to optimize the detection and classification results of the TCN-BiLSTM-SA system. The HDLMOA-DGD model is validated experimentally on a benchmark dataset, and the results are assessed with several measures. The experimental outcomes underline the improvement the HDLMOA-DGD model brings to genetic disorder detection.
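A minimal Keras sketch of the TCN-BiLSTM-SA hybrid named in the abstract follows, assuming illustrative layer sizes and a generic sequence input; the authors' exact architecture and the MGOA hyperparameter search are not shown.

# Minimal sketch of a TCN-BiLSTM-SA stack; layer widths, depths, and the
# dilation schedule are assumptions, not the paper's tuned configuration.
import tensorflow as tf
from tensorflow.keras import layers

def build_tcn_bilstm_sa(timesteps: int, features: int, n_classes: int) -> tf.keras.Model:
    inp = tf.keras.Input(shape=(timesteps, features))
    x = inp
    # TCN stage: stacked causal, dilated 1-D convolutions widen the receptive field.
    for d in (1, 2, 4):
        x = layers.Conv1D(64, 3, padding="causal", dilation_rate=d, activation="relu")(x)
    # BiLSTM stage: bidirectional recurrence over the convolutional features.
    x = layers.Bidirectional(layers.LSTM(64, return_sequences=True))(x)
    # Self-attention stage: dot-product attention of the sequence with itself.
    x = layers.Attention()([x, x])
    x = layers.GlobalAveragePooling1D()(x)
    out = layers.Dense(n_classes, activation="softmax")(x)
    return tf.keras.Model(inp, out)

Inputs would be Z-score normalized beforehand, matching the pre-processing step described above.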
Doi: https://doi.org/10.54216/FPA.200107
Vol. 20 Issue. 1 PP. 77-89, (2025)
The proposed method develops an advanced Deep Residual Convolutional Neural Network (DR-CNN) for finger vein pattern recognition, enhancing both the accuracy and the computational efficiency of the system. The framework uses the DR-CNN to handle dimensionality reduction together with feature extraction while mitigating the overfitting issues of traditional CNN models. The research utilizes 6,000 images from the VERA, PLUSVein-FV3, MMCBNU_6000, and UTFV databases, split into 80% training and 20% testing data. The network comprises 13 convolutional layers, 4 pooling layers, and 4 fully connected layers. The DR-CNN classifier achieves its authentication performance by extracting features with Gray Level Co-occurrence Matrices (GLCM) and the Scale-Invariant Feature Transform (SIFT). A performance assessment based on accuracy, sensitivity, specificity, F1-score, false acceptance rate (FAR), and false rejection rate (FRR) shows that the DR-CNN surpasses traditional techniques. On a 5,000-image evaluation, the proposed model demonstrates better accuracy (94.39%) than CNN (92.45%), RNN (88.99%), and DNN (85.91%). Tests show that the system processes 25,000 images within 2.43 milliseconds, establishing fast computation speeds, and the DR-CNN achieves robustness with a minimum mean absolute error of 19.34. The model delivers a 97.8% recognition rate together with a 0.83% error rate, which proves its effectiveness for biometric security applications.
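As a pointer to the texture-feature step, here is a minimal scikit-image sketch of GLCM feature extraction; the distances, angles, and properties chosen are illustrative, not the paper's configuration, and the SIFT branch is omitted.

# Minimal GLCM feature sketch (scikit-image >= 0.19 API); parameter choices
# below are assumptions for illustration.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(img: np.ndarray) -> np.ndarray:
    # img: 2-D uint8 grayscale finger-vein image.
    glcm = graycomatrix(img, distances=[1, 2], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ("contrast", "homogeneity", "energy", "correlation")
    # One value per (property, distance, angle) combination.
    return np.concatenate([graycoprops(glcm, p).ravel() for p in props])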
Doi: https://doi.org/10.54216/FPA.200108
Vol. 20 Issue. 1 PP. 90-113, (2025)
In military applications, Wireless Sensor Military Networks (WSMNs) serve a critical function by deploying a distributed group of sensor nodes. Such sensor networks lift the overall effectiveness of military activities by improving situational awareness and permitting instantaneous decision-making. Their deployment also raises noteworthy challenges, namely scalability, energy efficiency, and security vulnerabilities. Ensuring the availability, trustworthiness, and confidentiality of the data sensed by sensor nodes is a challenge of prime importance, since a failure could lead to disastrous consequences on the battlefield. Addressing this shortfall, ongoing research is mainly targeted at advanced solutions to such challenges, including secure and energy-efficient routing algorithms. One of the considerable remaining challenges in WSNs, however, is anomaly detection and the presence of false alarms, which can affect the dependability and effectiveness of the system. This paper examines the state of WSMNs, mainly their applications, challenges, and future directions. The authors propose an adaptive, hybrid Machine Learning (ML) approach to reduce false alarms and detect anomalies while also considering a mutual authentication system. ML approaches offer reliable solutions by improving data classification accuracy and anomaly detection; such algorithms are better able to distinguish between normal and abnormal events, which ultimately reduces false triggers. The authors propose a hybrid of k-Nearest Neighbors (KNN) and Decision Tree (DT), which results in a powerful method for improved classification accuracy and robustness in WSNs. The strategy combines the effectiveness of KNN in local decision-making with the clear interpretability of Decision Trees in handling feature interactions to increase overall performance, as sketched below.
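One simple way to realize such a KNN + DT hybrid is a soft-voting ensemble, sketched below with scikit-learn; the fusion rule, hyperparameters, and WSMN feature vectors are assumptions, as the authors' exact combination scheme is not detailed in the abstract.

# Minimal sketch: soft-voting hybrid of KNN and a Decision Tree; the
# feature matrix X and labels y (normal vs. anomalous) are assumed inputs.
from sklearn.ensemble import VotingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

hybrid = VotingClassifier(
    estimators=[("knn", KNeighborsClassifier(n_neighbors=5)),
                ("dt", DecisionTreeClassifier(max_depth=8))],
    voting="soft",  # average the two models' class probabilities
)
# hybrid.fit(X_train, y_train)
# y_pred = hybrid.predict(X_test)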
Doi: https://doi.org/10.54216/FPA.200109
Vol. 20 Issue. 1 PP. 114-130, (2025)
In recent years, the tourism industry has increasingly embraced advanced technologies to deliver highly personalized travel experiences. This paper proposes the development of an AI-powered Personalized Tourism Recommendation System (PTRS), to be piloted in Samarkand, Uzbekistan—a city renowned for its rich cultural and historical heritage. The system leverages artificial intelligence techniques alongside multi-source data fusion to generate dynamic and context-aware travel recommendations. By integrating diverse data sources—including user preferences, weather conditions, seasonal trends, and geographic factors—the system provides adaptive recommendations tailored to individual tourist profiles. A combination of recommendation algorithms, such as cosine similarity, Pearson correlation, and matrix factorization, is employed to optimize the accuracy and relevance of suggestions. Performance evaluation is conducted using standard metrics, including Root Mean Square Error (RMSE), Mean Absolute Error (MAE), Coefficient of Determination (R²), and Mean Squared Error (MSE). The results underscore the effectiveness of incorporating AI and data fusion in enhancing smart tourism systems, paving the way for more intelligent and user-centric travel experiences in culturally rich destinations like Samarkand.
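To illustrate the cosine-similarity component mentioned above, here is a minimal user-based collaborative-filtering sketch over a toy rating matrix; the attractions, ratings, and weighting scheme are invented for illustration and do not reflect the PTRS data-fusion pipeline.

# Minimal cosine-similarity recommender; R is an assumed toy rating matrix.
import numpy as np

R = np.array([          # rows: users, columns: attractions (0 = unrated),
    [5, 4, 0, 1],       # e.g. Registan, Gur-e-Amir, Shah-i-Zinda, Siab Bazaar
    [4, 5, 1, 0],
    [1, 0, 5, 4],
], dtype=float)

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def recommend(user: int) -> int:
    # Score unrated items by similarity-weighted ratings of the other users.
    sims = np.array([cosine(R[user], R[other]) for other in range(len(R))])
    sims[user] = 0.0
    scores = sims @ R / (sims.sum() + 1e-12)
    scores[R[user] > 0] = -np.inf  # never re-recommend already-rated items
    return int(np.argmax(scores))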
Doi: https://doi.org/10.54216/FPA.200110
Vol. 20 Issue. 1 PP. 131-140, (2025)
The nature of images can differ in texture, contrast, illumination, noise levels, and structural patterns, so a descriptor suitable for one image may not be optimal for another. This paper therefore proposes a new hybrid diagnostic model that combines multi-descriptor feature extraction with a Deep Belief Network (DBN) and applies it to classifying Acute Lymphoblastic Leukaemia. The proposed model consists of two phases: feature extraction and classification. Three descriptors, Histogram of Oriented Gradients, Scale-Invariant Feature Transform, and a Convolutional Neural Network, are employed in the feature extraction phase; each captures different aspects of the image using distinct computational techniques. A Deep Belief Network was trained on each group of features individually, producing three trained DBNs, one per descriptor. A membership function between the training set and the test data determines which DBN is selected. The model was tested and evaluated on the 10,661 images of the C-NMC_Leukaemia dataset, which consists of two classes: 7,272 images of leukaemia cells and 3,389 benign images. Experimental results showed that the proposed model achieves an accuracy outperforming several recent methods: it reaches 96.87%, against a best accuracy of 94.91% among recent works.
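For concreteness, the sketch below shows one branch of such a multi-descriptor front end (HOG) with scikit-image; the cell and block sizes are illustrative, and the SIFT and CNN branches plus the DBN classifiers are omitted.

# Minimal HOG branch; resizing to a fixed shape keeps the feature-vector
# length constant across images (parameter values are assumptions).
import numpy as np
from skimage.feature import hog
from skimage.transform import resize

def hog_features(img: np.ndarray) -> np.ndarray:
    # img: 2-D grayscale blood-smear image.
    img = resize(img, (128, 128), anti_aliasing=True)
    return hog(img, orientations=9, pixels_per_cell=(16, 16),
               cells_per_block=(2, 2), feature_vector=True)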
Doi: https://doi.org/10.54216/FPA.200111
Vol. 20 Issue. 1 PP. 141-154, (2025)
The significant increase in the volume of recently released records and multimedia news presents fresh issues for pattern recognition and machine learning, particularly for the longstanding problem of recognizing handwritten digits. Handwriting recognition is a captivating area of research due to the uniqueness of each individual's handwriting style; it concerns a computer's ability to automatically identify and interpret handwritten digits or characters. Hyperparameters play a crucial role in the performance of machine learning algorithms, directly influencing the training process and significantly affecting the resulting model's performance. This work introduces general automated hyperparameter tuning mechanisms, grid search, random search, and Bayesian optimization, to optimize the parameters of a random forest applied to the pre-processed MNIST digit database. The proposed methods successfully identify optimal hyperparameters across a wide variety of ML models while taking the time cost of the search into account. The work shows the effectiveness and efficiency of the techniques used, which is crucial for real-world applications. The results show an accuracy rate of 99.3% for grid search, 98.8% for random search, and 96.0% for Bayesian optimization on the random forest algorithm.
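A minimal scikit-learn sketch of the grid-search variant follows, using the small built-in digits set as a stand-in for MNIST; the parameter grid is illustrative, not the paper's exact search space.

# Minimal grid-search sketch for random forest tuning; the grid and the
# cross-validation setting are assumptions for illustration.
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = load_digits(return_X_y=True)
grid = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [100, 300], "max_depth": [10, None],
                "max_features": ["sqrt", "log2"]},
    cv=3, n_jobs=-1,
)
grid.fit(X, y)
print(grid.best_params_, round(grid.best_score_, 3))

Swapping GridSearchCV for RandomizedSearchCV gives the random-search variant with the same interface; Bayesian optimization requires an external library such as scikit-optimize.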
Doi: https://doi.org/10.54216/FPA.200112
Vol. 20 Issue. 1 PP. 155-165, (2025)
This study presents a deep autoencoder-based framework for anomaly detection in multispectral satellite imagery, addressing vital challenges in environmental monitoring and disaster response. Utilizing datasets from Sentinel-2, Landsat-8, and MODIS, the model employs a hybrid loss function (MSE + MS-SSIM) and spatial attention mechanisms to discover and localize anomalies such as wildfires, floods, and urban encroachment. Experimental results show superior performance (F1-Score: 0.84, AUC-ROC: 0.93) compared to PCA and Isolation Forest baselines, with precise anomaly localization demonstrated through error heatmaps and IoU metrics. The framework's integration with early-warning systems highlights its potential for real-time applications, although limitations in handling seasonal variations and low-resolution data underscore the need for future work on multi-modal fusion and semi-supervised learning. This study advances scalable solutions for sustainable land management and emergency response, leveraging open-source satellite data for global accessibility.
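Here is a minimal TensorFlow sketch of the hybrid MSE + MS-SSIM reconstruction loss named in the abstract; the weighting alpha and the image value range are assumptions, since the paper's weighting is not given here.

# Minimal hybrid-loss sketch; alpha and max_val are assumed values.
import tensorflow as tf

def hybrid_loss(y_true, y_pred, alpha=0.5, max_val=1.0):
    # y_true, y_pred: (batch, H, W, C) float tensors; H and W must be large
    # enough (roughly >= 176 px) for the five default MS-SSIM scales.
    mse = tf.reduce_mean(tf.square(y_true - y_pred))
    # MS-SSIM is a similarity in [0, 1], so 1 - MS-SSIM serves as a loss term.
    ms_ssim = tf.reduce_mean(tf.image.ssim_multiscale(y_true, y_pred, max_val))
    return alpha * mse + (1.0 - alpha) * (1.0 - ms_ssim)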
Doi: https://doi.org/10.54216/FPA.200113
Vol. 20 Issue. 1 PP. 166-178, (2025)
Remarkable advances in Artificial Intelligence and Natural Language Processing have enabled innovations that fulfill various vertical requirements. News summarization has become a popular topic, with systems that extract valuable semantic content and generate shorter abstracts from the original content. News readers benefit from a quick grasp of essential details, because an informative summary gives them the important points without requiring them to read the whole article. This article covers the essential NLP news summarization methods, abstractive, extractive, and hybrid summarization, together with recent datasets, evaluation metrics, applications, and open challenges. The work serves researchers by providing complete information about contemporary summarization developments, helping them select suitable summarization models during application development.
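As a small illustration of the extractive family surveyed above, the sketch below scores sentences by word frequency and keeps the top ones; this is a classical baseline written for illustration, not a method proposed by the article.

# Minimal frequency-based extractive summarizer (illustrative baseline).
import re
from collections import Counter

def extractive_summary(text: str, n_sentences: int = 2) -> str:
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"[a-z']+", text.lower()))

    def score(s: str) -> float:
        # Mean corpus frequency of the sentence's words.
        toks = re.findall(r"[a-z']+", s.lower())
        return sum(freq[t] for t in toks) / (len(toks) or 1)

    top = sorted(sentences, key=score, reverse=True)[:n_sentences]
    return " ".join(sorted(top, key=sentences.index))  # keep original order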
Doi: https://doi.org/10.54216/FPA.200114
Vol. 20 Issue. 1 PP. 179-192, (2025)