This study examines the relationship between cyber-physical systems (CPS) and economic security, with particular emphasis on how networking technologies enable more efficient data integration. It investigates how CPS adoption is reshaping national economies by influencing productivity levels, altering labor market structures, and introducing new cybersecurity challenges. Employing a hybrid research design that combines cross-sectional data evaluation with expert consultations, the research offers a comprehensive view of the implications of CPS implementation for sectoral productivity, employment trends, and macroeconomic resilience. CPS are positioned in the study as strategic, data-driven innovations, underlining both their promising opportunities and associated threats. The findings support the development of informed policy measures that aim to enhance benefits while reducing potential risks. Ultimately, the work contributes to the evolving discourse on CPS by offering a balanced analysis of their socio-economic impacts and outlining actionable recommendations for decision-makers and industry stakeholders to capitalize on CPS innovations effectively.
DOI: https://doi.org/10.54216/FPA.200101
Vol. 20 Issue. 1 PP. 01-11, (2025)
Data compression technologies play a significant role in areas where efficient data storage and transmission are essential. Data compression is the science of reducing redundant data to a compact form that can be used to store files or information efficiently and safely. Unicode, in turn, is a global standard for the representation of text and symbols in computers. The basic elements of the Unicode standard are code points, each of which represents a specific symbol. Unicode provides a unified way to map and manage these code points to ensure consistent representation and interpretation of text data across different systems, platforms, and languages. This paper proposes a method to compress Arabic texts based on Unicode ligatures, which typically join characters together. The method replaces two or more Unicode Arabic ligature characters with a single Unicode Arabic ligature based on their appearance in the Arabic text file, eliminating the need for coding or decoding; a minimal illustrative sketch is given after this entry. The sizes of the original and output text files were compared to determine the compression percentage. The selected dataset comprises Modern Standard Arabic text (Arabic news) and Classical Arabic text (Arabic Holy and Honorific texts) collected from Kaggle. The percentage of compression depends on the frequency of ligature characters in Arabic documents. Unfortunately, the results were not promising, as the method was only able to compress the files by a very small percentage (6.71% and 12.82%, respectively, for Arabic news and Arabic Holy text). We believe that the proposed method can be improved in the future by using a hybrid text compression technique and by considering other properties of Arabic Unicode.
DOI: https://doi.org/10.54216/FPA.200102
Vol. 20 Issue. 1 PP. 12-23, (2025)
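To make the ligature-substitution idea above concrete, the following Python sketch (an illustration, not the authors' implementation) derives a mapping from multi-character Arabic sequences to single presentation-form ligature code points using Unicode compatibility decompositions, applies it to a text, and reports the byte-level size reduction. The chosen block ranges, the sample string, and the UTF-8 size comparison are assumptions made for this example.

```python
# Minimal sketch of ligature-based Arabic text "compression": multi-character
# sequences are replaced by a single Unicode Arabic presentation-form ligature,
# and the byte sizes of the original and output strings are compared.
import unicodedata

def build_ligature_map():
    """Map multi-character Arabic sequences to single presentation-form ligatures.

    The mapping is derived from the compatibility decompositions of the
    Arabic Presentation Forms blocks (U+FB50..U+FDFF and U+FE70..U+FEFF).
    Only isolated-form ligatures that decompose into two or more characters
    are kept, so every substitution shortens the text.
    """
    mapping = {}
    for block in (range(0xFB50, 0xFE00), range(0xFE70, 0xFF00)):
        for cp in block:
            decomp = unicodedata.decomposition(chr(cp))
            if not decomp.startswith("<isolated>"):
                continue
            parts = decomp.split()[1:]                    # drop the "<isolated>" tag
            source = "".join(chr(int(p, 16)) for p in parts)
            if len(source) >= 2:                          # ligature replaces 2+ characters
                mapping[source] = chr(cp)
    return mapping

def compress(text, mapping):
    # Replace longer source sequences first so overlapping ligatures
    # (e.g. the lam+alef variants) are handled consistently.
    for source in sorted(mapping, key=len, reverse=True):
        text = text.replace(source, mapping[source])
    return text

if __name__ == "__main__":
    lig_map = build_ligature_map()
    original = "\u0644\u0627 \u0644\u0627"                # two occurrences of lam+alef (sample only)
    packed = compress(original, lig_map)
    orig_bytes = len(original.encode("utf-8"))
    new_bytes = len(packed.encode("utf-8"))
    print(f"compression: {100 * (1 - new_bytes / orig_bytes):.2f}%")
```

As in the paper, the achievable reduction depends entirely on how often such ligature sequences occur in the input text.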
The hospitality industry is rapidly evolving, with intense competition among organizations striving to attract and retain customers. One of the key factors influencing customer satisfaction and loyalty is the emotional intelligence of employees. Higher emotional intelligence fosters positive behavior, which enhances customer experience and engagement. This study aims to identify and prioritize the most critical factors and sub-factors of emotional intelligence in the private hospitality sector. Data for this research were collected from hospitality businesses in the Lucknow region. The prioritization process is carried out using the Analytical Hierarchy Process (AHP), a widely used multi-criteria decision-making (MCDM) technique; a brief illustrative sketch of the AHP calculation follows this entry. The rankings derived from AHP provide valuable insights into the key attributes of emotional intelligence that employees should focus on for professional growth. By understanding these priorities, hospitality employees can enhance their emotional intelligence, leading to improved customer interactions, better teamwork, and overall organizational success.
DOI: https://doi.org/10.54216/FPA.200103
Vol. 20 Issue. 1 PP. 24-33, (2025)
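As a brief illustration of the AHP step mentioned above, the sketch below computes priority weights and a consistency ratio from a pairwise comparison matrix. The three factor labels and the comparison values are hypothetical examples on Saaty's 1-9 scale; they are not the study's data.

```python
# Illustrative AHP calculation: priority weights from the principal eigenvector
# of a reciprocal pairwise comparison matrix, plus a consistency check.
import numpy as np

FACTORS = ["Self-awareness", "Empathy", "Relationship management"]  # assumed labels

# A[i][j] = how strongly factor i is preferred over factor j (reciprocal matrix).
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

# Priority weights = normalized principal eigenvector of A.
eigvals, eigvecs = np.linalg.eig(A)
principal = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, principal].real)
weights /= weights.sum()

# Consistency ratio CR = CI / RI; judgments are usually accepted when CR < 0.10.
n = A.shape[0]
lambda_max = eigvals.real[principal]
CI = (lambda_max - n) / (n - 1)
RI = 0.58                       # Saaty's random index for n = 3
CR = CI / RI

for factor, w in sorted(zip(FACTORS, weights), key=lambda pair: -pair[1]):
    print(f"{factor}: {w:.3f}")
print(f"consistency ratio: {CR:.3f}")
```

In the full method, such pairwise matrices would be elicited from the collected expert judgments and aggregated before the final ranking is derived.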
Visible light communication (VLC) integrated with non-orthogonal multiple access (NOMA) is a promising technique to meet the increasing demand for high-capacity, energy-efficient communication in forthcoming 6G networks. This work thoroughly evaluates VLC-NOMA systems and emphasizes the incorporation of machine learning (ML) approaches to improve spectrum efficiency, bit error rate, and resource allocation. A search methodology based on the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines produced 244 records, of which 45 were selected for comprehensive study. The review identified obstacles including scalability, computational complexity, and insufficient experimental validation. A comparative examination elucidated the strengths and limitations of ML methodologies, including conventional machine learning models, deep neural networks, and federated learning, in addressing these difficulties. The study identified key research gaps, proposed future directions, and emphasized the need for hybrid optimization techniques, lightweight ML models, and real-world implementations. The findings contribute to the development of robust, scalable VLC-NOMA systems for 6G applications.
DOI: https://doi.org/10.54216/FPA.200104
Vol. 20 Issue. 1 PP. 34-54, (2025)
Business executives and scholars maintain that Artificial Intelligence (AI) ranks alongside pivotal human inventions and advancements such as fire, electricity, and the incandescent light bulb. By harnessing AI technologies, academic institutions can augment pedagogical approaches, elevate the caliber of education, and furnish learners with novel avenues to cultivate their proficiencies and competencies. However, the implementation of AI in higher education has provoked debate over whether institutions ought to prohibit its use entirely or promote its integration to enhance educational outcomes. Despite the escalating acknowledgment of AI's importance in the educational sphere, its adoption and impacts remain insufficiently explored. To fill this gap, data were collected from 300 respondents, building on the Unified Theory of Acceptance and Use of Technology (UTAUT) model. We empirically contribute to the existing literature by clarifying the fundamental factors that affect the adoption of AI within higher education and by scrutinizing the consequences of AI for knowledge acquisition. Moreover, we elucidate the moderating effects of workload and temporal limitations. The findings provide substantial insights relevant to the incorporation of AI for knowledge acquisition in higher education and are anticipated to provoke further scholarly discussion.
DOI: https://doi.org/10.54216/FPA.200105
Vol. 20 Issue. 1 PP. 55-67, (2025)
The current study introduces a trainable object detection model that can be taught to detect an object of a given class within an unconstrained scene. The researchers apply this system to the detection of relic images, which involves the calculation of Local 3bit Binary Patterns (3bit-LBP). The key contributions of the work include the integration and analysis of Multi-Support Vector Machine Classification (MSVMC) together with integral image computation. The experimental outcomes indicate that the 3bit-LBP method is superior to other methods in accuracy and stability, especially on images with varying illumination and object rotation. A comparative performance evaluation further shows that the presented system achieves better detection rates than conventional strategies, demonstrating its efficiency in real-world applications; an illustrative sketch of the underlying LBP and integral-image operations follows this entry. Finally, it is important to note that the implications of the results extend beyond relic detection. In conclusion, the current work advances the understanding of how to further improve object recognition algorithms in the context of image recognition systems.
DOI: https://doi.org/10.54216/FPA.200106
Vol. 20 Issue. 1 PP. 68-76, (2025)
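The paper's 3bit-LBP descriptor and MSVMC classifier are specific to that work and are not reproduced here; as a general illustration of the operations it builds on, the sketch below computes a conventional 8-neighbour Local Binary Pattern map and an integral image (summed-area table) with constant-time box sums in NumPy.

```python
# Illustrative building blocks: a standard 8-neighbour LBP and an integral image.
import numpy as np

def lbp_8neighbour(gray):
    """Standard LBP: threshold the 8 neighbours of each pixel against its value."""
    g = gray.astype(np.int32)
    center = g[1:-1, 1:-1]
    # Neighbour offsets, clockwise from top-left; each contributes one bit.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(center)
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        code |= (neighbour >= center).astype(np.int32) << bit
    return code  # one 8-bit texture code per interior pixel

def integral_image(gray):
    """Summed-area table: ii[y, x] = sum of all pixels above and to the left."""
    return gray.astype(np.int64).cumsum(axis=0).cumsum(axis=1)

def box_sum(ii, y0, x0, y1, x1):
    """Sum of gray[y0:y1, x0:x1] in O(1) using the integral image."""
    total = ii[y1 - 1, x1 - 1]
    if y0 > 0:
        total -= ii[y0 - 1, x1 - 1]
    if x0 > 0:
        total -= ii[y1 - 1, x0 - 1]
    if y0 > 0 and x0 > 0:
        total += ii[y0 - 1, x0 - 1]
    return total

if __name__ == "__main__":
    img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)   # stand-in image
    codes = lbp_8neighbour(img)
    ii = integral_image(img)
    print(codes.shape, box_sum(ii, 0, 0, 8, 8) == img[:8, :8].sum())
```

In a detection pipeline, such LBP codes are typically summarized as histograms over sliding windows, with the integral image making repeated window sums cheap.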
Genetic diseases are caused by anomalies in a person's DNA. These abnormalities may be larger-scale chromosomal mutations or irregularities in a particular gene. Such diseases significantly influence certain body functions and systems and may be hereditary or arise spontaneously. Traditional approaches such as genetic testing and karyotyping may fail to identify complex or rare modifications, requiring more detailed techniques such as whole-genome sequencing (WGS). In recent decades, despite important technological advances, rare genetic diseases continue to pose problems, with a significant portion of patients (50–66%) remaining undiagnosed on the basis of clinical presentation alone. An accurate diagnosis is important to provide adequate support to patients and their families, even when specific therapeutic interventions are not available. Recently, machine learning (ML), and in particular its deep learning (DL) subfields, has been utilized to build clinically relevant prediction tools in other medical areas; for mental disorders, ML methods have shown major promise in forecasting both diagnosis and prognosis. In this manuscript, we design and develop a Hybrid Deep Learning and Metaheuristic Optimization Algorithm for Detecting Genetic Disorders (HDLMOA-DGD) model. The main goal of the proposed HDLMOA-DGD algorithm is to detect and classify genetic disorders using an advanced deep-learning model. First, Z-score normalization is employed in the data pre-processing phase to convert the input data into a uniform format. The HDLMOA-DGD model then applies a hybrid deep learning architecture combining a temporal convolutional network, a bidirectional long short-term memory network, and a self-attention mechanism (TCN-BiLSTM-SA) for the classification process; a simplified sketch of this architecture is given after this entry. Finally, a modified gannet optimization algorithm (MGOA)-based hyperparameter selection process is performed to optimize the detection and classification results of the TCN-BiLSTM-SA system. The experimental validation of the HDLMOA-DGD model is carried out on a benchmark dataset, and the results are evaluated with respect to several measures. The experimental outcomes underline the improvements achieved by the HDLMOA-DGD model in the genetic disorder detection process.
DOI: https://doi.org/10.54216/FPA.200107
Vol. 20 Issue. 1 PP. 77-89, (2025)
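The sketch below is one possible Keras interpretation of the TCN-BiLSTM-SA classifier described above, preceded by the Z-score normalization step. It is not the authors' exact HDLMOA-DGD implementation, and the MGOA-based hyperparameter selection is not reproduced; layer sizes, the number of classes, and the dummy data are assumptions.

```python
# Sketch of a hybrid TCN + BiLSTM + self-attention classifier in Keras,
# with z-score normalization of the inputs.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

def zscore(x, axis=0, eps=1e-8):
    """Z-score normalization used in the pre-processing step."""
    return (x - x.mean(axis=axis, keepdims=True)) / (x.std(axis=axis, keepdims=True) + eps)

def build_tcn_bilstm_sa(timesteps, features, n_classes):
    inputs = layers.Input(shape=(timesteps, features))
    # TCN-style block: stacked causal convolutions with growing dilation.
    x = inputs
    for dilation in (1, 2, 4):
        x = layers.Conv1D(64, kernel_size=3, padding="causal",
                          dilation_rate=dilation, activation="relu")(x)
    # Bidirectional LSTM over the convolutional features.
    x = layers.Bidirectional(layers.LSTM(64, return_sequences=True))(x)
    # Self-attention: the sequence attends to itself.
    x = layers.MultiHeadAttention(num_heads=4, key_dim=32)(x, x)
    x = layers.GlobalAveragePooling1D()(x)
    outputs = layers.Dense(n_classes, activation="softmax")(x)
    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

if __name__ == "__main__":
    X = zscore(np.random.rand(32, 20, 8).astype("float32"), axis=1)   # dummy samples
    y = np.random.randint(0, 3, size=32)                              # 3 assumed classes
    model = build_tcn_bilstm_sa(timesteps=20, features=8, n_classes=3)
    model.fit(X, y, epochs=1, batch_size=8, verbose=0)
```

In the full HDLMOA-DGD pipeline, choices such as filter counts, LSTM units, and attention heads would be tuned by the MGOA metaheuristic rather than fixed by hand as here.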
The proposed method develops an advanced Deep Residual Convolutional Neural Network (DR-CNN) for finger vein pattern recognition to enhance both the accuracy and the computational efficiency of the system. The framework uses the DR-CNN to handle dimensionality reduction together with feature extraction while addressing the overfitting issues of traditional CNN models. This research utilizes 6,000 images from the VERA, PLUSVein FV3, MMCBNU_6000, and UTFV databases, split into 80% training data and 20% testing data. The ImageNet-based training uses a network with 13 convolutional layers, 4 pooling layers, and 4 fully connected layers; a simplified residual-block sketch is given after this entry. The DR-CNN classifier achieves strong authentication performance through its use of Gray Level Co-occurrence Matrices (GLCM) and the Scale-Invariant Feature Transform (SIFT) for feature extraction. A performance assessment based on accuracy, sensitivity, specificity, F1-score, false acceptance rate (FAR), and false rejection rate (FRR) shows that the DR-CNN surpasses traditional techniques. On a set of 5,000 images, the proposed model demonstrates better accuracy (94.39%) than CNN (92.45%), RNN (88.99%), and DNN (85.91%). Tests show that the system processes 25,000 images within 2.43 milliseconds, establishing fast computation speeds. The DR-CNN achieves robustness with a minimum mean absolute error of 19.34. The proposed DR-CNN model delivers a 97.8% recognition rate together with a 0.83% error rate, which demonstrates its effectiveness for biometric security applications.
DOI: https://doi.org/10.54216/FPA.200108
Vol. 20 Issue. 1 PP. 90-113, (2025)
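As a simplified illustration of the residual architecture described above, the Keras sketch below stacks residual blocks with identity shortcuts followed by fully connected layers. It is not the paper's exact 13-convolution DR-CNN, and the GLCM/SIFT feature-extraction stage is omitted; the input size and the number of identity classes are assumptions.

```python
# Sketch of a small residual CNN for image-based identity classification.
import tensorflow as tf
from tensorflow.keras import layers, models

def residual_block(x, filters):
    """Two 3x3 convolutions with a skip connection (identity shortcut)."""
    shortcut = x
    y = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    y = layers.Conv2D(filters, 3, padding="same")(y)
    if shortcut.shape[-1] != filters:              # match channel count if needed
        shortcut = layers.Conv2D(filters, 1, padding="same")(shortcut)
    y = layers.Add()([y, shortcut])
    return layers.Activation("relu")(y)

def build_dr_cnn(input_shape=(128, 128, 1), n_subjects=100):
    inputs = layers.Input(shape=input_shape)
    x = layers.Conv2D(32, 3, padding="same", activation="relu")(inputs)
    for filters in (32, 64, 128, 256):             # four stages, each ending in pooling
        x = residual_block(x, filters)
        x = layers.MaxPooling2D()(x)
    x = layers.Flatten()(x)
    for units in (512, 256, 128):                  # fully connected layers
        x = layers.Dense(units, activation="relu")(x)
    outputs = layers.Dense(n_subjects, activation="softmax")(x)
    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

if __name__ == "__main__":
    model = build_dr_cnn()
    model.summary()
```

The skip connections are what distinguish a residual CNN from a plain stack of convolutions: they let gradients flow past each block, which helps deeper networks train without the overfitting and degradation issues noted in the abstract.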