Online social networks continue to evolve, serving purposes such as sharing educational content, chatting, making friends and followers, sharing news, and playing online games. However, the widespread flow of unwanted messages poses significant problems, including reduced user interaction time, the spread of extremist views, and degraded information quality, especially in the educational field. Coordinated automated accounts, or bots, are a common tactic for spreading unwanted messages, rumors, fake news, and false testimonies to mass or targeted audiences. Since users, especially in the educational field, receive many messages through social media, they often fail to recognize unwanted content, which may contain harmful links, malware, fake accounts, false reports, and misleading opinions. It is therefore vital to detect and classify disturbing texts to enhance the security of social media. This study builds an Arabic disturbing-message dataset of 14,250 tweets extracted from Twitter. Our methodology applies a new labeling technique to the collected tweets, then uses prevailing machine learning algorithms to build a classifier for Arabic disturbing messages, with effective parameter tuning to obtain the most suitable settings for each algorithm. In addition, we use particle swarm optimization to select the most relevant features and improve classification performance. The results show a clear improvement in classification performance from 0.9822 to 0.98875, with a 50% reduction in the feature set. The key areas of investigation are Arabic spam messages, spam classification, parameter tuning, and feature selection.
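The PSO-based feature selection step can be illustrated with a minimal binary PSO over synthetic data. Everything here is an illustrative assumption — the Fisher-style fitness, the sigmoid transfer function, and the synthetic features stand in for the paper's unpublished setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: 6 features, only the first 3 carry class signal.
n, d = 200, 6
y = rng.integers(0, 2, n)
X = rng.normal(size=(n, d))
X[:, :3] += y[:, None] * 2.0          # informative features

def fitness(mask):
    """Fisher-style separability of the selected features (higher is better),
    with a small penalty for larger subsets."""
    if mask.sum() == 0:
        return -np.inf
    Xs = X[:, mask.astype(bool)]
    mu0, mu1 = Xs[y == 0].mean(0), Xs[y == 1].mean(0)
    s = Xs[y == 0].var(0) + Xs[y == 1].var(0) + 1e-9
    return ((mu0 - mu1) ** 2 / s).mean() - 0.01 * mask.sum()

def binary_pso(n_particles=20, iters=40, w=0.7, c1=1.5, c2=1.5):
    pos = rng.integers(0, 2, (n_particles, d)).astype(float)
    vel = rng.normal(scale=0.1, size=(n_particles, d))
    pbest = pos.copy()
    pbest_f = np.array([fitness(p) for p in pos])
    gbest = pbest[pbest_f.argmax()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, d))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        prob = 1.0 / (1.0 + np.exp(-vel))      # sigmoid transfer function
        pos = (rng.random((n_particles, d)) < prob).astype(float)
        f = np.array([fitness(p) for p in pos])
        improved = f > pbest_f
        pbest[improved], pbest_f[improved] = pos[improved], f[improved]
        gbest = pbest[pbest_f.argmax()].copy()
    return gbest

mask = binary_pso()
print(mask)
```

On this toy problem the swarm reliably keeps the informative features and tends to drop the noise columns, mirroring the paper's roughly halved feature set.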
DOI: https://doi.org/10.54216/FPA.160101
Vol. 16 Issue. 1 PP. 08-22, (2024)
Gesture recognition for visually challenged people plays a vital role in improving their convenience and their interaction with digital devices and environments. It involves developing systems that let them operate digital devices using hand actions or gestures. To improve user-friendliness, these systems favor intuitive, easily learnable gestures, often integrating wearable devices equipped with sensors for precise detection. Incorporating auditory or haptic feedback offers real-time cues confirming that a gesture has been recognized. Machine learning (ML) and deep learning (DL) methods are useful tools for accurate gesture detection, with customization options to accommodate individual preferences. In this view, this article concentrates on the design and development of an Automated Gesture Recognition using Zebra Optimization Algorithm with Deep Learning (AGR-ZOADL) model for visually challenged people. The AGR-ZOADL technique aims to recognize gestures to aid visually challenged people. At the primary level, data pre-processing is performed by median filtering (MF). Next, the AGR-ZOADL technique applies the NASNet model to learn complex features from the preprocessed data. To enhance the performance of NASNet, a ZOA-based hyperparameter tuning procedure is performed. For the gesture recognition process, a stacked long short-term memory (SLSTM) model is applied. The performance of the AGR-ZOADL technique is validated on a benchmark dataset. The experimental values show that the AGR-ZOADL methodology attains significant performance gains over other existing approaches.
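The median-filtering pre-processing stage can be sketched in a few lines of NumPy. This is a hand-rolled filter for illustration only; a real pipeline would likely call an optimized library routine:

```python
import numpy as np

def median_filter(img, k=3):
    """Median filtering: replace each pixel with the median of its k x k
    neighbourhood, suppressing salt-and-pepper noise while keeping edges."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = np.median(padded[i:i + k, j:j + k])
    return out

# A flat patch with one salt-noise pixel: the filter removes the outlier.
img = np.full((5, 5), 10.0)
img[2, 2] = 255.0
clean = median_filter(img)
print(clean[2, 2])   # 10.0
```

Unlike a mean filter, the median is insensitive to a single extreme value, which is why it is the usual choice before feature learning on noisy sensor imagery.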
DOI: https://doi.org/10.54216/FPA.160102
Vol. 16 Issue. 1 PP. 23-36, (2024)
A brain-computer interface (BCI) connects the central nervous system to an external device. In the past few years, BCI has commonly been conducted via electroencephalography (EEG). By linking EEG with other neuroimaging technologies such as functional near-infrared spectroscopy (fNIRS), promising outcomes have been attained. An important stage of BCI is identifying the brain state from verified signal properties. Classifying EEG signals for motor imagery (MI) is a common task in BCI systems. Motor imagery involves imagining the movement of certain body parts without executing the physical movement. Deep artificial neural networks (DNNs) have achieved unprecedented results on such complex classification tasks, enabled by effective learning algorithms, improved computational power, recurrent or feedback neuron connections, and suitable activation functions. This study therefore develops a Gazelle Optimization Algorithm with Deep Learning based Motor-Imagery Classification (GOADL-MIC) technique for EEG-based BCI. The GOADL-MIC technique exploits a hyperparameter-tuned DL model for the recognition and identification of MI signals. To achieve this, the GOADL-MIC model first converts one-dimensional EEG signals into 2D time-frequency amplitude representations. The EfficientNet-B3 system is then applied for effective feature derivation, with its hyperparameters selected using the GOA. Finally, MI classification is performed using a bi-directional long short-term memory (Bi-LSTM) network. The GOADL-MIC method is verified on a BCI dataset, and the results demonstrate its promise over counterpart techniques.
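The 1-D-to-2-D conversion step is typically a short-time Fourier transform. A hedged NumPy sketch on a synthetic 10 Hz rhythm (the window length, hop, and 256 Hz sampling rate are illustrative choices, not the paper's):

```python
import numpy as np

def stft_magnitude(x, win=128, hop=64):
    """Short-time Fourier transform magnitude: turns a 1-D EEG trace into a
    2-D time-frequency image suitable for a 2-D CNN such as EfficientNet."""
    window = np.hanning(win)
    frames = [x[s:s + win] * window
              for s in range(0, len(x) - win + 1, hop)]
    spec = np.abs(np.fft.rfft(frames, axis=1))   # (time, freq)
    return spec.T                                # (freq, time)

fs = 256                                  # assumed sampling rate (Hz)
t = np.arange(2 * fs) / fs
eeg = np.sin(2 * np.pi * 10 * t)          # synthetic 10 Hz mu-band rhythm
spec = stft_magnitude(eeg)
peak_bin = spec.mean(axis=1).argmax()
print(peak_bin * fs / 128)                # 10.0
```

The dominant row of the resulting image sits at the 10 Hz bin, which is exactly the kind of spatial structure a 2-D CNN can exploit that a raw 1-D trace hides.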
DOI: https://doi.org/10.54216/FPA.160103
Vol. 16 Issue. 1 PP. 37-51, (2024)
In this work, a statistical model is constructed to forecast the likelihood that lung nodules will grow in the future. The study segments all potential lung nodule candidates using the Multi-scale 3D UNet (M-3D-UNet) method. CT scan series from 34 patients yielded an average of approximately 600 nodule candidates larger than 3 mm, which were then segmented. After removing arteries and non-nodules and applying 3D shape-variation analysis, 34 actual nodules remained. For these nodules, the nodule growth rate (NGR) was calculated in terms of 3D volume change. Three of the 34 actual nodules had NGR values greater than one, indicating that they were malignant. Compactness, tissue deficit, tissue excess, isotropic factor, and edge gradient were used to develop the nodule growth predictive measure.
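The NGR criterion can be illustrated with a tiny computation. The exact formula is not given in the abstract, so this uses one plausible reading — relative 3-D volume change, with values above 1 (volume more than doubled) flagged as malignant — and hypothetical volumes:

```python
def nodule_growth_rate(v_t0, v_t1):
    """Relative 3-D volume change between two CT time points; values above 1
    (volume more than doubled) are treated as a malignancy flag here —
    an assumed formulation, since the paper's exact NGR formula is not given."""
    return (v_t1 - v_t0) / v_t0

# Hypothetical volumes (mm^3) segmented by M-3D-UNet at two scan dates.
ngr = nodule_growth_rate(v_t0=120.0, v_t1=300.0)
print(ngr, ngr > 1.0)   # 1.5 True
```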
DOI: https://doi.org/10.54216/FPA.160104
Vol. 16 Issue. 1 PP. 52-66, (2024)
Smart grids, pivotal in modern energy distribution, confront a mounting cybersecurity threat landscape due to their increased connectivity. This study introduces a novel hybrid deep learning approach designed for robust intrusion detection, addressing the imperative to fortify the security of these critical infrastructures. Titled "Intrusion Detection for Smart Grid Using a Hybrid Deep Learning Approach," the study combines Conv1D for spatial feature extraction, MaxPooling1D for dimensionality reduction, and GRU for modeling temporal dependencies. The research leverages the Edge-IIoTset cyber security dataset, which encompasses diverse layers of emerging technologies within smart grids and facilitates a nuanced understanding of intrusion patterns. More than 10 types of IoT devices and 14 attack categories contribute to the dataset's richness, enhancing model training and evaluation. The hybrid model's architecture is detailed, emphasizing the synergy of convolutional and recurrent neural networks in addressing complex intrusion scenarios. This research not only contributes to the evolving field of intrusion detection in smart grids but also sets the stage for adaptive security systems. The convergence of a hybrid deep learning approach with a comprehensive cyber security dataset marks a significant stride toward fortifying smart grids against evolving threats. The proposed model achieves a performance of 98.20%.
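The Conv1D and MaxPooling1D stages can be illustrated with a bare NumPy forward pass of one filter on toy data; the real model stacks many filters and feeds the pooled features into a GRU. The edge-detector kernel and traffic values below are illustrative assumptions:

```python
import numpy as np

def conv1d(x, kernel):
    """Valid 1-D convolution (cross-correlation, as in Conv1D layers)."""
    k = len(kernel)
    return np.array([x[i:i + k] @ kernel for i in range(len(x) - k + 1)])

def maxpool1d(x, pool=2):
    """Non-overlapping max pooling for dimensionality reduction."""
    n = len(x) // pool
    return x[:n * pool].reshape(n, pool).max(axis=1)

# Toy traffic-feature sequence; a difference kernel highlights the abrupt
# jump, the kind of local pattern the Conv1D stage extracts for the GRU.
seq = np.array([1.0, 1.0, 1.0, 9.0, 9.0, 9.0])
feat = conv1d(seq, np.array([-1.0, 1.0]))   # [0. 0. 8. 0. 0.]
pooled = maxpool1d(feat, pool=2)            # [0. 8.]
print(pooled)
```

Pooling halves the sequence length while keeping the strongest activation, which is why it precedes the recurrent layer in architectures like this one.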
DOI: https://doi.org/10.54216/FPA.160105
Vol. 16 Issue. 1 PP. 67-76, (2024)
This article explores the application of the linguistic 2-tuple computational model in decision-making processes, focusing on its efficiency in managing ambiguous and imprecise linguistic information, which is vital in complex decision-making environments. The main objective is to demonstrate the use of the Weighted Power Mean (WPM) operator for hierarchical aggregation, highlighting its adaptability in reflecting the priority structures of specific problems and preserving the integrity of expert opinions. The model enhances user interaction by minimizing the need for complex numerical conversions, facilitating more intuitive decision-making. The study introduces the methodology of the linguistic 2-tuples, emphasizing their practical application in various decision-making contexts through detailed case studies. It elaborates on the hierarchical aggregation model, discussing the flexibility and potential of the WPM operator to adjust the influence of individual criteria based on their importance. The article also examines potential improvements in aggregation operators to increase their effectiveness and applicability across different scenarios. This comprehensive analysis not only underscores the capabilities of linguistic computational models in modern decision-making environments but also proposes future directions for advancing these techniques to handle increasingly complex information landscapes.
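The core 2-tuple machinery — the Δ translation onto a term set and a weighted power mean aggregation — can be sketched directly. The five-term scale, expert scores, weights, and exponent below are illustrative, not the article's case-study values:

```python
def to_two_tuple(beta, terms):
    """Delta operator: map beta in [0, g] onto the 2-tuple (s_i, alpha),
    where i = round(beta) and alpha = beta - i is the symbolic translation."""
    i = round(beta)
    return terms[i], beta - i

def weighted_power_mean(values, weights, p):
    """WPM(x; w, p) = (sum_i w_i * x_i^p)^(1/p); p tunes conjunctive vs.
    disjunctive (and/or-like) aggregation behaviour."""
    return sum(w * v ** p for v, w in zip(values, weights)) ** (1.0 / p)

terms = ["none", "low", "medium", "high", "perfect"]   # g = 4
scores = [3.2, 2.6, 3.9]      # expert assessments on the [0, 4] scale
weights = [0.5, 0.3, 0.2]     # criterion priorities (sum to 1)
agg = weighted_power_mean(scores, weights, p=2)
label, alpha = to_two_tuple(agg, terms)
print(label, round(alpha, 3))   # high 0.192
```

The result "(high, +0.192)" keeps the symbolic label a decision maker can read while the α term preserves the information that plain rounding would discard — the model's central selling point.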
DOI: https://doi.org/10.54216/FPA.160106
Vol. 16 Issue. 1 PP. 67-84, (2024)
IoT devices have transformed smart cities and healthcare, but their expanding use creates major security threats, leaving critical systems vulnerable to sophisticated and persistent attacks. Our hybrid IoT security approach employs homomorphic encryption and an improved MobileNet to protect data and simplify feature extraction. Extensive testing and assessment show that the proposed framework makes IoT settings more resistant to advanced persistent attacks, outperforming existing methodologies on F1 score, accuracy, precision, and recall. Homomorphic encryption is incorporated to ensure data privacy and security during analysis and transmission. An ablation study lays out each framework component's contribution, highlighting secure data processing, real-time analytical optimization, lightweight feature extraction, and privacy-preserving computation. A scalability study indicates that the framework can grow with IoT installations while maintaining peak performance and resource efficiency. Overall, the hybrid architecture provides a full and effective security solution for IoT infrastructure, from which lawmakers, industry experts, and students can learn about practical IoT security systems.
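The additively homomorphic property such frameworks rely on can be demonstrated with textbook Paillier encryption. The toy primes are for illustration only, and the paper does not name its scheme — this is just the standard construction of the property described:

```python
from math import gcd
import random

# Textbook Paillier with toy primes: multiplying ciphertexts adds plaintexts,
# so an untrusted party can aggregate encrypted readings without seeing them.
# Real deployments use 2048-bit moduli, not p = 11, q = 13.
p, q = 11, 13
n, n2 = p * q, (p * q) ** 2
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p-1, q-1)
g = n + 1

def L(u):
    return (u - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)

def encrypt(m):
    r = random.randrange(2, n)
    while gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

a, b = 5, 7
c = (encrypt(a) * encrypt(b)) % n2    # multiply ciphertexts...
print(decrypt(c))                     # ...to add plaintexts: 12
```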
DOI: https://doi.org/10.54216/FPA.160107
Vol. 16 Issue. 1 PP. 85-100, (2024)
As IoT devices proliferate, accuracy and data security become increasingly crucial. This research proposes a powerful threat detection system that accelerates message responses to improve IoT security. The recommended strategy detects threats using many data sources; its deep learning backbone is DenseNet, which classifies images effectively. We demonstrate the approach through real-world experiments: it produces few false positives and negatives and is effective at recognizing threats. Through an ablation study, we examine how design and component choices impact performance, clarifying the method's fundamentals. The research reveals that feature selection, fusion, and the DenseNet design each improve the technique. We discuss the need to fine-tune hyperparameters to improve the approach further and extend its coverage. The strategy makes IoT communities safer and more robust by laying the groundwork for threat detection and response, and it addresses message transmission delays, making the IoT safer. These findings may benefit security specialists by improving and speeding up IoT security.
DOI: https://doi.org/10.54216/FPA.160108
Vol. 16 Issue. 1 PP. 101-117, (2024)
Identifying and recognizing a criminal is a time-consuming and difficult task. There are several ways to identify culprits at a crime scene, including fingerprinting, DNA matching, and eyewitness testimony. The criminal face identification system is built on an existing criminal database. This study presents a method for identifying a human face using features extrapolated from an image. Developing a computer model for recognizing the human face is quite difficult, since a face is a complicated multidimensional visual representation. In the proposed process, video captured by the camera is translated into frames. To increase detection accuracy, we propose a Binary Gradient Alignment (BGA) algorithm, a descriptive texture-classification technique. When a facial feature is detected in an image frame, it undergoes pre-processing to eliminate unnecessary data and reduce unwanted distortions. The processed real-time image is compared against trained images previously saved in the database. If the surveillance camera detects a criminal, the system sends an automatic email notification to police officials.
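BGA itself is not publicly specified, so as a stand-in, here is the classic local binary pattern — the binary texture-descriptor family that gradient-based face texture methods build on. The 3x3 patch values are illustrative:

```python
import numpy as np

def lbp_code(patch):
    """8-neighbour local binary pattern for the centre pixel of a 3x3 patch:
    each neighbour contributes a bit (1 if >= centre), and the bits form an
    8-bit texture code. Shown as a generic stand-in for binary texture
    descriptors like the paper's (non-public) BGA."""
    center = patch[1, 1]
    # Clockwise from top-left.
    neighbours = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
                  patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    bits = [1 if v >= center else 0 for v in neighbours]
    return sum(b << i for i, b in enumerate(bits))

patch = np.array([[9, 9, 9],
                  [1, 5, 1],
                  [9, 9, 9]])
print(lbp_code(patch))   # 119
```

Histograms of such codes over face regions give an illumination-robust signature that can be matched against the stored database images.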
DOI: https://doi.org/10.54216/FPA.160109
Vol. 16 Issue. 1 PP. 118-132, (2024)
Pneumonia is a medical condition affecting 100 million people globally, and rates are predicted to reach epidemic levels within the next several decades. As the air sacs in one or both lungs become inflamed, the patient may experience fever, chills, and trouble breathing; coughs with pus may also occur. Various organisms can cause pneumonia, including bacteria, viruses, and fungi. Early detection of pneumonia can reduce the severity of the purulent material. The ability of computer-aided detection techniques to reliably diagnose pneumonia has made them popular among scientists. In this study, we used a pre-trained Inception V3Net with a Squeeze-Excitation-based deep Convolutional Neural Network (SE-CNN), trained on the Kermany dataset and the RSNA Pneumonia Detection Challenge dataset. In early-stage detection, the suggested technique beat previous state-of-the-art networks, achieving 91% precision in severity rating. Furthermore, our network's accuracy, recall, f1-score, and quadratic weighted kappa were reported to be 91.56%, 91%, and 90%, respectively. In terms of processing time and space, our suggested framework is simple, precise, and effective.
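The quadratic weighted kappa reported above can be computed as follows. This is the standard formulation of the metric, not code from the paper; the example labels are illustrative:

```python
import numpy as np

def quadratic_weighted_kappa(y_true, y_pred, n_classes):
    """Quadratic weighted kappa: chance-corrected agreement between rated
    severity levels, penalising disagreements by the squared class distance."""
    O = np.zeros((n_classes, n_classes))
    for t, p in zip(y_true, y_pred):
        O[t, p] += 1
    w = np.array([[(i - j) ** 2 for j in range(n_classes)]
                  for i in range(n_classes)], dtype=float)
    w /= (n_classes - 1) ** 2
    hist_t = O.sum(axis=1)
    hist_p = O.sum(axis=0)
    E = np.outer(hist_t, hist_p) / O.sum()   # expected agreement by chance
    return 1.0 - (w * O).sum() / (w * E).sum()

# Perfect agreement on four severity levels scores 1.0.
print(quadratic_weighted_kappa([0, 1, 2, 3], [0, 1, 2, 3], 4))   # 1.0
```

Unlike plain accuracy, QWK charges a severity-3 case rated 0 far more than one rated 2, which is why it is the metric of choice for ordinal severity grading.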
DOI: https://doi.org/10.54216/FPA.160110
Vol. 16 Issue. 1 PP. 133-151, (2024)
The Internet of Medical Things (IoMT) has paved the way for innovative approaches to collecting and managing medical data. With large volumes of sensitive medical data being processed, strong identity management and privacy protection become necessary. This paper proposes a comprehensive method, PriMedGuard, that aims to protect personal medical information. The first stage is data collection from devices and sensors, followed by data cleaning to transform the data into the required format. The system registers and authenticates authorized entities, and an Enhanced Tasmanian Devil Optimization (ETDO) algorithm is used to generate asymmetric cryptographic keys. The data is encrypted using the Secure Bit-Count Transmutation (SBCT) data encryption algorithm and stored at locations provided by the InterPlanetary File System (IPFS), a decentralized and distributed storage system. A secure smart contract on the blockchain makes data retrieval safe, and MedSecEnsemble Detection is proposed as an intrusion detection technique for the IoMT network. This method keeps data available while ensuring integrity, confidentiality, and protection against vulnerabilities, securing the IoMT ecosystem from unauthorized access and potential security threats.
DOI: https://doi.org/10.54216/FPA.160111
Vol. 16 Issue. 1 PP. 152-170, (2024)
Intrusion detection in the Internet of Medical Things (IoMT) is the process of monitoring and discovering unauthorized or malicious actions in medical devices and networks. Its benefits include early detection of potential threats, prevention of data breaches, and protection of patient privacy. Alongside these benefits come difficulties such as alarm fatigue due to false positives, the complexity of standardizing detection across different devices, and resource limits that hinder high-quality implementations, leaving vulnerabilities in healthcare infrastructure. This paper proposes a new efficient intrusion detection model based on Correlation-based Feature Selection and an OptCNN-LSTM model to address these problems. The methodology comprises five key phases: (i) data acquisition, (ii) pre-processing, (iii) feature extraction, (iv) feature selection, and (v) OptCNN-LSTM-based intrusion detection. The raw data is first gathered and then preprocessed using z-score normalization and data cleaning. Features are then extracted using central tendency, degree of dispersion, and correlation measures. A hybrid IHHO-PSO approach with the Correlation-based Feature Selection (CFS) framework is employed to choose the best features among those collected. Finally, the OptCNN-LSTM model detects intrusions in the IoMT from the selected features; the CNN is tuned using Levy Flight Optimization (LF) and combined with the LSTM to obtain the expected results. The model is implemented in Python, and its performance is measured in terms of accuracy, precision, f-measure, and the Receiver Operating Characteristic (ROC) curve. Compared to current models, the proposed model achieves the highest accuracies, 97.6% and 96.5%, for learning percentages of 70 and 80, respectively.
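The z-score normalization used in the pre-processing phase is a one-liner worth spelling out; the feature matrix below is illustrative:

```python
import numpy as np

def z_score(X):
    """Z-score normalisation: rescale each feature (column) to zero mean and
    unit variance so that traffic features on very different scales
    contribute comparably to the downstream model."""
    mu = X.mean(axis=0)
    sigma = X.std(axis=0)
    sigma = np.where(sigma == 0, 1.0, sigma)   # guard constant features
    return (X - mu) / sigma

# Toy feature matrix: packet counts and byte counts on different scales.
X = np.array([[10.0, 200.0],
              [20.0, 400.0],
              [30.0, 600.0]])
Z = z_score(X)
print(Z.mean(axis=0), Z.std(axis=0))   # ~[0 0] and [1 1]
```

Without this step, the byte-count column would dominate any distance- or gradient-based learning simply because of its magnitude.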
DOI: https://doi.org/10.54216/FPA.160112
Vol. 16 Issue. 1 PP. , (2024)
The present study aimed to identify the effect of a digital storytelling-based electronic program on developing the reading comprehension skills of pupils with learning difficulties. The study adopted a quasi-experimental pre-post design with two groups of 30 pupils each. The experimental group was taught using the digital storytelling-based electronic program, whereas the control group was taught through the traditional method. The researchers prepared a reading comprehension skills test as the data collection instrument. The results showed that the pupils of the experimental group achieved better results than those of the control group, revealing the effectiveness of the digital storytelling-based electronic program in developing fifth-grade primary pupils' reading comprehension skills.
DOI: https://doi.org/10.54216/FPA.160113
Vol. 16 Issue. 1 PP. 195-208, (2024)
The education landscape is shifting towards automation and digitalization to meet the increasing demand for personalized learning experiences and more efficient teaching methods. In response to this trend, we propose an integrated educational automation fusion platform that aims to overhaul teaching and learning practices across various educational sectors. The integration of cutting-edge language models such as GPT-3.5 and Gemini Pro into information retrieval and conversational AI has opened fresh opportunities within education. LangChain, a powerful framework for working with large language models, enables seamless integration of AI-driven functionalities, including document analysis, question generation, and chatbot interaction, revolutionizing the educational landscape. By harnessing the vast resources of the OpenAI API, our platform also empowers educators and learners to engage in dynamic conversations with educational materials, generate personalized assessments, and gain deeper insights from complex datasets within a single forum. This single platform disseminates information on all facets of research and development in the educational domain on the grounds of fusion practices and applications, successfully combining multiple models into one intelligent system. In evaluation, our system delivered solid performance compared to existing systems, even though those are singular modules. Overall, our platform aims to empower educators, students, and institutions to embrace the digital era of learning and unlock new avenues for fusion-based knowledge acquisition and innovation.
DOI: https://doi.org/10.54216/FPA.160114
Vol. 16 Issue. 1 PP. 209-222, (2024)
The difficulty of automatically modifying and updating operations within deep learning (DL) frameworks can slow down the processing of deep neural networks (DNNs). This research presents a novel approach to software optimization that leverages dynamically collected profile data: a unique online auto-tuning system for DNNs that enhances both the training and inference phases. Python Distributed Training of Neural Networks (PyDTNN), a lightweight toolkit designed for distributed DNN training and inference, is used to evaluate the VGG19 model on two distinct multi-core architectures. In testing, our auto-tuning system performs comparably to, if not better than, a static selection strategy. Each PyDTNN variant that employs static selection remains consistently fast throughout execution, whereas the auto-tuned version starts at a baseline level and progressively improves as more feasible choices become available. While both variants yield similar results in training, the auto-tuned selection outperforms all other inference options by autonomously determining the best strategy for each layer in VGG19. The new online implementation-selection tool chooses the best-performing option from numerous alternatives while the program is running; its key features include constructing layered judgments and thoroughly examining 35 implementation possibilities. These advanced systems offer an effective, efficient, and timely option for monitoring sustainable environmental systems.
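The online implementation-selection idea can be sketched as a tiny auto-tuner that times candidate kernels on live inputs and routes later calls to the fastest. The two matrix-multiply variants are illustrative stand-ins for PyDTNN's real per-layer implementation choices:

```python
import time

def autotune(candidates, args, trials=3):
    """Online implementation selection: time each candidate on live inputs
    and return the fastest — the per-layer decision the auto-tuner makes
    across its catalogue of implementation choices."""
    timings = {}
    for name, fn in candidates.items():
        start = time.perf_counter()
        for _ in range(trials):
            fn(*args)
        timings[name] = time.perf_counter() - start
    best = min(timings, key=timings.get)
    return best, candidates[best]

def matmul_naive(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def matmul_cached(a, b):
    # ikj loop order with hoisted row lookups — same result, different speed.
    n = len(a)
    out = [[0.0] * n for _ in range(n)]
    for i in range(n):
        ai = a[i]
        for k in range(n):
            aik, bk = ai[k], b[k]
            for j in range(n):
                out[i][j] += aik * bk[j]
    return out

m = [[float(i + j) for j in range(40)] for i in range(40)]
best, fn = autotune({"naive": matmul_naive, "cached": matmul_cached}, (m, m))
print(best)
```

Which variant wins depends on the machine — which is precisely the argument for deciding online, from measured profile data, rather than statically.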
DOI: https://doi.org/10.54216/FPA.160115
Vol. 16 Issue. 1 PP. 223-232, (2024)
Wireless sensor networks (WSNs) are widely used in numerous areas owing to their accessibility for data collection, processing, and transmission; the strength and reliability of that processing depend on the accuracy of the sensor node (SN) positions in the WSN. Sink node location estimation is a vital task that determines the geographical position of the sink node within the network's coverage area. This procedure normally uses localization techniques that rely on data such as received signal strength, time of arrival, time difference of arrival, or angle of arrival from adjacent SNs. The accuracy of sink node localization directly influences the efficiency of data aggregation, routing procedures, and overall network performance in tasks like environmental monitoring, target tracking, and event recognition. As WSNs are frequently deployed in remote environments where physical intervention is impractical, an efficient and accurate sink node localization model plays a vital part in ensuring the network's longevity and reliability. This study develops an Efficient Sink Node Position Estimation using the Harris Hawks Optimization (SNPE-HHO) algorithm for WSNs. The main intention of the SNPE-HHO technique is to find the optimal position of the sink node in the network. To achieve this, it employs the HHO algorithm, which draws inspiration from the hunting tactics of the Harris hawk, and computes a fitness function that drives the search direction of the HHO algorithm and enhances position estimation performance. The performance of the SNPE-HHO method is verified using distinct metrics, and the experimental values confirm its improved estimation performance over existing methods.
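The fitness-driven search can be illustrated without the full HHO dynamics: assume (our assumption, as the paper does not give its fitness) that the sink should minimize the total squared distance to all sensor nodes, and let a simple random search play the role of the metaheuristic:

```python
import random

# Hedged sketch: total squared distance from the sink to all sensor nodes as
# the fitness, minimised by plain random search standing in for the full
# Harris Hawks exploration/exploitation dynamics.
random.seed(1)
nodes = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(30)]

def fitness(pos):
    x, y = pos
    return sum((x - nx) ** 2 + (y - ny) ** 2 for nx, ny in nodes)

best = (50.0, 50.0)
for _ in range(5000):
    cand = (random.uniform(0, 100), random.uniform(0, 100))
    if fitness(cand) < fitness(best):
        best = cand

# For squared distances the true optimum is the centroid of the nodes,
# which gives a sanity check on whatever search algorithm is plugged in.
cx = sum(n[0] for n in nodes) / len(nodes)
cy = sum(n[1] for n in nodes) / len(nodes)
print(best, (cx, cy))
```

A population-based method like HHO earns its keep on rugged, multi-modal fitness surfaces (obstacles, connectivity constraints); this convex toy case is only meant to show the fitness-then-search structure.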
DOI: https://doi.org/10.54216/FPA.160116
Vol. 16 Issue. 1 PP. 233-243, (2024)
The COVID-19-induced state of emergency in Ecuador necessitated compulsory isolation for most people. During this period, the use of technical equipment rose as individuals had to work remotely from their homes. This study assessed the use of technology resources during the quarantine, for which a survey was created; specific indicators were selected and standardized for processing. The data processing techniques employed were the Analytic Hierarchy Process and Logic Scoring of Preference. The key findings indicate that the indicators "Modes of Use," "Use Preferences," "Daily Usage Frequency," and "Monthly Expenditure" are crucial for measuring the composite indicator "use of technological tools."
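The AHP step — deriving indicator weights from a pairwise comparison matrix via its principal eigenvector — can be sketched with power iteration. The judgment values below are illustrative, not the survey's actual comparisons:

```python
import numpy as np

# AHP weight derivation: a reciprocal pairwise comparison matrix (entry A[i,j]
# says how much more important indicator i is than j on Saaty's 1-9 scale);
# the normalised principal eigenvector gives the indicator weights.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 3.0],
              [1/5, 1/3, 1.0]])

w = np.ones(3) / 3
for _ in range(100):          # power iteration towards the principal eigenvector
    w = A @ w
    w /= w.sum()

print(np.round(w, 3))   # weights for the three indicators, summing to 1
```

The resulting vector ranks the first indicator highest, consistent with its dominant pairwise judgments; the same weights then feed the composite-indicator aggregation.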
DOI: https://doi.org/10.54216/FPA.160117
Vol. 16 Issue. 1 PP. 244-252, (2024)
Agricultural systems such as greenhouses control environmental factors such as temperature and humidity to increase output, traditionally through conventional automation techniques. Advances in modeling have made it possible to use mathematical models to understand the behavior of data by analyzing its variability. The objective of this project is to validate a method for controlling temperature and humidity in controlled experimental environments using artificial intelligence and neutrosophy. Transfer functions obtained from temperature and humidity readings gathered via a SCADA system are utilized. Neutrosophic numbers adjust the temperature and humidity values based on the experimental conditions of the greenhouse, indicating the optimal, important, and sensitive ranges. The control system employs NARMA-L2 neural networks, which belong to the multilayer perceptron category. This facilitates efficient system administration and shows outstanding performance in simulations conducted across several temperature and humidity scenarios. The observed errors consistently remain below 5%, and any instances exceeding this threshold are insignificant.
DOI: https://doi.org/10.54216/FPA.160118
Vol. 16 Issue. 1 PP. 253-263, (2024)
This study combines diesel fuel with kerosene to enhance engine efficiency and mitigate detrimental pollutants. Using a meticulous approach derived from the ISO 8178 standard, blends containing different ratios of kerosene are investigated. The aim of the research is to conduct a multivariate analysis that provides insights into the rheology of diesel and kerosene mixes, thereby enhancing our understanding of the fuel's properties and performance. The researchers conducted experimental trials utilizing diesel blends with varying proportions of kerosene: 5%, 10%, 15%, 20%, 25%, and 30%. A descriptive and multivariate analysis measured parameters such as opacity, NOx, CO, and HC emissions and fuel efficiency under different load conditions. The study identified key elements that determine fuel characteristics and emissions, including density, viscosity, calorific value, and sulfur content, emphasizing that the addition of kerosene had a significant impact on these crucial factors. Two separate categories were created based on fuel composition: blends containing a lower amount of kerosene (up to 20%) formed a cluster that exhibited an ideal equilibrium between performance and emissions. The groupings of factors are interconnected, with substantial correlations between the physical qualities of the fuel and emissions, highlighting the direct impact of fuel composition on the engine's environmental performance.
DOI: https://doi.org/10.54216/FPA.160119
Vol. 16 Issue. 1 PP. 264-274, (2024)