Fusion: Practice and Applications

Journal DOI

https://doi.org/10.54216/FPA

ISSN (Online): 2692-4048 | ISSN (Print): 2770-0070

Improving Arabic Spam Classification in Social Media Using Hyperparameter Tuning and Particle Swarm Optimization

Amr Mohamed El Koshiry, Entesar H. Ibraheem Eliwa, Ahmed Omar

Online social networks continue to evolve, serving a variety of purposes such as sharing educational content, chatting, making friends and followers, sharing news, and playing online games. However, the widespread flow of unwanted messages poses significant problems, including reduced user interaction time, the spread of extremist views, and degraded information quality, especially in the educational field. The use of coordinated automated accounts, or bots, on social networking sites is a common tactic for spreading spam, rumors, fake news, and false testimonies to mass audiences or targeted users. Because users (especially in the educational field) receive many messages through social media, they often fail to recognize spam content, which may contain harmful links, malware, fake accounts, false reports, and misleading opinions. It is therefore vital to detect and classify such messages to enhance the security of social media. This study builds an Arabic spam dataset of 14,250 tweets extracted from Twitter. Our proposed methodology applies a new labeling technique to the collected tweets and then uses prevailing machine learning algorithms to build a model for classifying Arabic spam messages, with effective hyperparameter tuning to obtain the most suitable parameters for each algorithm. In addition, we use particle swarm optimization to select the most relevant features and improve classification performance. The results indicate a clear improvement in classification performance, from 0.9822 to 0.98875, with a 50% reduction in the feature set. The key areas of investigation are Arabic spam messages, spam classification, hyperparameter tuning, and feature selection.
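The particle swarm feature-selection step described in the abstract can be illustrated with a minimal, self-contained sketch of binary PSO. The objective below, with its hypothetical per-feature relevance weights and size penalty, is a stand-in for a classifier's cross-validated score; it is not the paper's actual fitness function or parameter settings.

```python
import math
import random

def binary_pso_select(fitness, n_features, n_particles=10, n_iters=30, seed=42):
    """Binary PSO sketch: each particle is a 0/1 mask over features."""
    rng = random.Random(seed)
    pos = [[rng.randint(0, 1) for _ in range(n_features)] for _ in range(n_particles)]
    vel = [[0.0] * n_features for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_score = [fitness(tuple(p)) for p in pos]
    g = max(range(n_particles), key=lambda i: pbest_score[i])
    gbest, gbest_score = pbest[g][:], pbest_score[g]
    for _ in range(n_iters):
        for i in range(n_particles):
            for d in range(n_features):
                r1, r2 = rng.random(), rng.random()
                # standard velocity update: inertia + cognitive + social terms
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.4 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.4 * r2 * (gbest[d] - pos[i][d]))
                # sigmoid transfer maps velocity to a bit-setting probability
                pos[i][d] = 1 if rng.random() < 1.0 / (1.0 + math.exp(-vel[i][d])) else 0
            score = fitness(tuple(pos[i]))
            if score > pbest_score[i]:
                pbest[i], pbest_score[i] = pos[i][:], score
                if score > gbest_score:
                    gbest, gbest_score = pos[i][:], score
    return gbest, gbest_score

# hypothetical per-feature relevance scores (NOT from the paper)
WEIGHTS = [0.9, 0.1, 0.8, 0.05, 0.7, 0.02]

def fitness(mask):
    # reward summed relevance, penalise subset size to favour fewer features
    return sum(w for w, m in zip(WEIGHTS, mask) if m) - 0.3 * sum(mask)

mask, score = binary_pso_select(fitness, n_features=len(WEIGHTS))
```

With this toy objective the swarm tends toward masks that keep the high-relevance features while the penalty term discourages large subsets, mirroring the paper's goal of better performance with a smaller feature set.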

DOI: https://doi.org/10.54216/FPA.160101

Vol. 16, Issue 1, pp. 08-22 (2024)

Automated Gesture Recognition Using Zebra Optimization Algorithm with Deep Learning Model for Visually Challenged People

Mohammed Basheri

Gesture recognition for visually challenged people plays a vital role in improving their convenience and interaction with digital devices and environments. It involves developing systems that let them interact with digital devices through hand actions or gestures. To improve user-friendliness, these systems favor intuitive, easily learnable gestures, often integrating wearable devices equipped with sensors for precise detection. Incorporating auditory or haptic feedback offers real-time cues about the completion of recognized gestures. Machine learning (ML) and deep learning (DL) methods are useful tools for accurate gesture detection, with customization options to accommodate individual preferences. In this view, this article concentrates on the design and development of an Automated Gesture Recognition using Zebra Optimization Algorithm with Deep Learning (AGR-ZOADL) model for visually challenged people. The AGR-ZOADL technique aims to recognize gestures to aid visually challenged people. In the AGR-ZOADL technique, data is first pre-processed by median filtering (MF). The AGR-ZOADL technique then applies the NASNet model to learn complex features from the preprocessed data. To enhance the performance of the NASNet model, a ZOA-based hyperparameter tuning procedure is performed. For the gesture recognition process, a stacked long short-term memory (SLSTM) model is applied. The performance validation of the AGR-ZOADL technique is carried out using a benchmark dataset. The experimental values show that the AGR-ZOADL methodology achieves significant performance gains over other existing approaches.
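The first stage of the pipeline, median filtering, can be sketched in a few lines. The 1-D version below illustrates the idea on a signal with an impulsive spike; the paper presumably applies the filter to 2-D sensor or image data, and the NASNet, ZOA, and SLSTM stages are beyond the scope of a short sketch.

```python
def median_filter_1d(x, k=3):
    """Sliding-window median with edge padding; removes impulsive spikes
    while preserving step edges better than a moving average."""
    r = k // 2
    padded = [x[0]] * r + list(x) + [x[-1]] * r
    return [sorted(padded[i:i + k])[r] for i in range(len(x))]

# a single outlier (9) is replaced by the local median
filtered = median_filter_1d([1, 9, 1, 1, 1])
```

Note that a monotone ramp passes through unchanged, which is why median filtering is preferred for pre-processing when edges must be preserved.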

DOI: https://doi.org/10.54216/FPA.160102

Vol. 16, Issue 1, pp. 23-36 (2024)

EEG-Based Brain-Computer Interfaces Using Gazelle Optimization Algorithm with Deep Learning for Motor-Imagery Classification

P. Radhakrishnan, Abullaıs Nehal Ahmed, K. Kalaiarasi, Koppisetti Giridhar, S. Thenappan

A brain-computer interface (BCI) connects the central nervous system to an external device. In recent years, BCI has commonly been conducted via electroencephalography (EEG), and by linking EEG with other neuroimaging technologies such as functional near-infrared spectroscopy (fNIRS), promising outcomes have been attained. An important stage of BCI is identifying brain states from measured signal properties. Classifying EEG signals for motor imagery (MI), in which the movement of certain body parts is imagined without executing the physical movement, is a common use of the BCI system. Deep artificial neural networks (DNNs) have achieved unprecedented classification results on such complex tasks, enabled by effective learning algorithms, improved computational power, restricted or back-fed neuron connections, and suitable activation functions. Therefore, this study develops a Gazelle Optimization Algorithm with Deep Learning based Motor-Imagery Classification (GOADL-MIC) technique for EEG-based BCI. The GOADL-MIC technique exploits a hyperparameter-tuned DL model for the recognition and identification of MI signals. To achieve this, the GOADL-MIC model first converts one-dimensional EEG signals into 2D time-frequency amplitude representations. The EfficientNet-B3 model is then applied to derive feature vectors, with its hyperparameters selected using GOA. Finally, MI classification is performed using a bidirectional long short-term memory (Bi-LSTM) network. The experimental analysis of the GOADL-MIC method is verified using the BCI dataset, and the results demonstrate the promise of the GOADL-MIC algorithm over its counterpart techniques.
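Converting 1-D EEG into a 2-D time-frequency representation is commonly done with a short-time Fourier transform; the naive sketch below illustrates the idea. The window length, hop size, and demo sine are illustrative choices, not the paper's settings.

```python
import cmath
import math

def stft_magnitude(signal, win=8, hop=4):
    """Naive STFT: slide a window over the signal and take DFT magnitudes
    per frame, yielding a (time x frequency) amplitude grid."""
    frames = []
    for start in range(0, len(signal) - win + 1, hop):
        seg = signal[start:start + win]
        mags = []
        for k in range(win // 2 + 1):  # non-negative frequency bins only
            coeff = sum(seg[n] * cmath.exp(-2j * math.pi * k * n / win)
                        for n in range(win))
            mags.append(abs(coeff))
        frames.append(mags)
    return frames

# demo: a sine completing exactly 2 cycles per 8-sample window
signal = [math.sin(2 * math.pi * 2 * n / 8) for n in range(16)]
frames = stft_magnitude(signal)
```

For this bin-aligned sine, each frame concentrates its energy in frequency bin 2 with magnitude win/2 = 4, which is the kind of structure a 2-D CNN such as EfficientNet-B3 can then exploit.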

DOI: https://doi.org/10.54216/FPA.160103

Vol. 16, Issue 1, pp. 37-51 (2024)

Lung Nodule Growth Measurement and Prediction Using Multi-scale 3D-UNet Segmentation and Shape Variance Analysis

Sathyamoorthy K., Ravikumar S.

In this work, a statistical model is constructed to forecast the likelihood that lung nodules will grow in the future. The study segments all potential lung nodule candidates using the Multi-scale 3D UNet (M-3D-UNet) method. CT scan series from 34 patients yielded an average of approximately 600 nodule candidates larger than 3 mm, which were then segmented. After removing arteries and non-nodules and applying 3D shape-variation analysis, 34 actual nodules remained. For these, the nodule growth rate (NGR) was calculated in terms of 3D volume change; three of the 34 actual nodules had NGR values greater than one, indicating malignancy. Compactness, tissue deficit, tissue excess, isotropic factor, and edge gradient were used to develop the nodule growth predictive measure.
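The volume-based growth measure can be illustrated with a toy sketch: count the voxels in a binary segmentation mask, scale by the per-voxel volume, and take the relative change between scans. The growth-rate formula, voxel size, and tiny mask below are hypothetical illustrations, not the authors' exact definition of NGR.

```python
def mask_volume(mask, voxel_mm3):
    """Volume of a binary 3-D segmentation mask: voxel count x voxel volume."""
    return sum(v for plane in mask for row in plane for v in row) * voxel_mm3

def growth_rate(vol_prev, vol_curr):
    """Hypothetical growth measure: relative 3-D volume change between scans."""
    return (vol_curr - vol_prev) / vol_prev

# tiny 2x2x2 binary mask with 4 nodule voxels at 0.5 mm^3 per voxel (illustrative)
baseline = mask_volume([[[1, 1], [0, 1]], [[1, 0], [0, 0]]], voxel_mm3=0.5)
ngr = growth_rate(baseline, 5.0)  # follow-up volume of 5.0 mm^3 (illustrative)
```

Under a threshold like the abstract's "greater than one", this example nodule (volume rising from 2.0 mm^3 to 5.0 mm^3) would be flagged as growing.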

DOI: https://doi.org/10.54216/FPA.160104

Vol. 16, Issue 1, pp. 52-66 (2024)

Fusing Deep Learning Techniques for Intrusion Detection in Smart Grids

Rahul R., Sindhu P., G. Naveen Sundar, R. Venkatesan

Smart grids, pivotal in modern energy distribution, confront a mounting cybersecurity threat landscape due to their increased connectivity. This study introduces a novel hybrid deep learning approach designed for robust intrusion detection, addressing the imperative to fortify the security of these critical infrastructures. Titled "Intrusion Detection for Smart Grid Using a Hybrid Deep Learning Approach," the study amalgamates Conv1D for spatial feature extraction, MaxPooling1D for dimensionality reduction, and GRU for modeling temporal dependencies. The research leverages the Edge-IIoTset cybersecurity dataset, which encompasses diverse layers of emerging technologies within smart grids and facilitates a nuanced understanding of intrusion patterns. More than 10 types of IoT devices and 14 attack categories contribute to the dataset's richness, enhancing the model's training and evaluation. The proposed hybrid model's architecture is detailed, emphasizing the synergy of convolutional and recurrent neural networks in addressing complex intrusion scenarios. This research not only contributes to the evolving field of intrusion detection in smart grids but also sets the stage for adaptive security systems. The convergence of a hybrid deep learning approach with a comprehensive cybersecurity dataset marks a significant stride toward fortifying smart grids against evolving cybersecurity threats. The proposed model achieves a performance of 98.20%.
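The three building blocks of the hybrid model can each be sketched in scalar form: a valid 1-D convolution for spatial features, non-overlapping max pooling for dimensionality reduction, and a single GRU update for temporal dependencies. These are toy single-channel versions with arbitrary weights, not the trained layers from the paper.

```python
import math

def conv1d(x, kernel, bias=0.0):
    """Valid 1-D convolution (cross-correlation, as in Conv1D layers)."""
    k = len(kernel)
    return [sum(x[i + j] * kernel[j] for j in range(k)) + bias
            for i in range(len(x) - k + 1)]

def max_pool1d(x, pool=2):
    """Non-overlapping max pooling: keeps the strongest activation per window."""
    return [max(x[i:i + pool]) for i in range(0, len(x) - pool + 1, pool)]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def gru_step(h, x, wz, wr, wh):
    """One scalar GRU update: gates decide how much history to keep."""
    z = sigmoid(wz[0] * x + wz[1] * h)                # update gate
    r = sigmoid(wr[0] * x + wr[1] * h)                # reset gate
    h_cand = math.tanh(wh[0] * x + wh[1] * (r * h))   # candidate state
    return (1 - z) * h + z * h_cand
```

In the full model, pooled convolutional feature maps would be fed through GRU steps over time before a final classification layer; the sketch only shows each stage's arithmetic.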

DOI: https://doi.org/10.54216/FPA.160105

Vol. 16, Issue 1, pp. 67-76 (2024)