Fusion: Practice and Applications

Journal DOI

https://doi.org/10.54216/FPA

ISSN (Online): 2692-4048, ISSN (Print): 2770-0070

Fusion Data Management and Modeling Techniques in Power Quality Compensation Using SAPF

Jessica N. Castillo , Guido G. Carrillo , Luigi O. Freire , Javier Culqui

Modern transportation encompasses a broad range of technological applications that regularly present new challenges requiring solutions. This article analyzes the problem of compensating electric power quality in an electric-train traction system using a shunt (parallel-connected) active power filter (SAPF). Building on a review of several studies, a test distribution network for an electric train system with a variable load and harmonic content is analyzed. The estimation and control technique used in the SAPF to compensate the harmonic content and reduce the reactive power at the output of a traction substation is described. A data fusion management strategy is employed in the analyses, demonstrating the system's effectiveness.
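The abstract does not specify the SAPF control technique; one common scheme extracts the harmonic content of the load current and injects its negative as the compensation reference. A minimal sketch, assuming an FFT-based extraction with hypothetical signal parameters:

```python
import numpy as np

def compensation_reference(load_current, fs, f1=50.0):
    """Sketch: derive a SAPF compensation reference by removing the
    fundamental component from the measured load current via FFT.
    The filter injects the negative of the harmonic residual."""
    n = len(load_current)
    spectrum = np.fft.rfft(load_current)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    # Keep only the fundamental bin; everything else is harmonic content.
    fundamental = np.zeros_like(spectrum)
    k = np.argmin(np.abs(freqs - f1))
    fundamental[k] = spectrum[k]
    i_fund = np.fft.irfft(fundamental, n)
    i_harm = load_current - i_fund   # harmonic / distortion residual
    return -i_harm                   # reference current for the SAPF

# Usage: a 50 Hz wave polluted with a 5th harmonic, sampled over one second
fs = 5000
t = np.arange(fs) / fs
i_load = np.sin(2 * np.pi * 50 * t) + 0.2 * np.sin(2 * np.pi * 250 * t)
ref = compensation_reference(i_load, fs)
```

Injecting `ref` at the point of common coupling leaves (ideally) only the fundamental drawn from the substation.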

Doi: https://doi.org/10.54216/FPA.160201

Vol. 16 Issue. 2 PP. 08-21, (2024)

Integrating Machine Learning Models for Enhanced Soil Organic Carbon Estimation: A Multi-Model Fusion Approach

Bryan Barragán-Pazmiño , Angel Ordóñez Echeverría , Magdy Echeverría Guadalupe , Theofilos Toulkeridis

Machine learning approaches are utilized to identify patterns in behavior and generate predictions across various applications. The objective of this work is to create a highly efficient model for accurately measuring and analyzing the levels of soil organic carbon (SOC) in the Chambo river sub-basin, which is situated in the province of Chimborazo. The model evaluation entails the application of diverse machine learning algorithms and approaches to determine the most efficient regression model. Regression models are improved using techniques such as Artificial Neural Networks, Support Vector Machines, and Decision Trees. The Resilient Backpropagation method yields the most precise model, as it accounts for a greater proportion of the variability in SOC content for the test data. This aligns with the findings from the training data, demonstrating a relatively low mean absolute error and a processing time that is approximately 400 times faster than that of the Multilayer Perceptron algorithm. The evaluation of estimation models is an objective procedure that considers not only the findings and precise metrics derived from the model's design, but also other relevant elements. The effectiveness of the Random Forest approach, specifically the quantile regression forests technique, has been established for estimating SOC contents in the Chambo river sub-basin data.
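The model-comparison procedure described above (fit several regressors, compare test-set error and run time) can be sketched generically; the models below are toy stand-ins under stated assumptions, not the paper's ANN/SVM/tree implementations:

```python
import time
import numpy as np

def mae(y_true, y_pred):
    """Mean absolute error, the comparison metric used above."""
    return float(np.mean(np.abs(y_true - y_pred)))

def evaluate(models, X_tr, y_tr, X_te, y_te):
    """Rank candidate SOC regressors by test MAE and fit time."""
    report = {}
    for name, (fit, predict) in models.items():
        t0 = time.perf_counter()
        params = fit(X_tr, y_tr)
        report[name] = {"mae": mae(y_te, predict(params, X_te)),
                        "fit_seconds": time.perf_counter() - t0}
    return report

# Two toy baselines standing in for the paper's candidate models:
linear = (lambda X, y: np.linalg.lstsq(np.c_[X, np.ones(len(X))], y, rcond=None)[0],
          lambda w, X: np.c_[X, np.ones(len(X))] @ w)
mean_b = (lambda X, y: y.mean(),
          lambda m, X: np.full(len(X), m))

# Hypothetical SOC-style regression data: linear should beat the mean baseline
rng = np.random.default_rng(1)
X = rng.standard_normal((80, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 3.0
report = evaluate({"linear": linear, "mean": mean_b},
                  X[:60], y[:60], X[60:], y[60:])
```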

Doi: https://doi.org/10.54216/FPA.160202

Vol. 16 Issue. 2 PP. 22-31, (2024)

Fusion of Forensic Analysis of Mobile Devices: Integrating Multi-Criteria Decision Methods and Case Study Insights

Jorge B. Rubio Peñaherrera , Kevin Mauricio T. Diaz , Adam Marks

This study employed a Multi-Criteria Decision-Making (MCDM) approach, utilizing the DEMATEL and TOPSIS methodologies, to assess the effectiveness of forensic tools designed for mobile devices, with a specific emphasis on Android and iOS platforms. The investigation evaluated technologies used for collecting, retrieving, and validating data in the Cyber Forensic Field Triage paradigm, with a focus on rapidly identifying and interpreting digital evidence. The study incorporated several factors and expert preferences, concluding that the Android Triage and Andriller tools were the most efficient.
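TOPSIS itself is standard: normalize the decision matrix, weight it, and rank alternatives by closeness to the ideal solution. A minimal sketch with hypothetical tool scores (not the paper's data):

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Minimal TOPSIS ranking sketch.
    matrix: alternatives x criteria scores; weights sum to 1;
    benefit[j] is True when higher is better for criterion j."""
    M = np.asarray(matrix, dtype=float)
    # Vector normalization, then weighting
    V = M / np.linalg.norm(M, axis=0) * np.asarray(weights)
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti, axis=1)
    return d_neg / (d_pos + d_neg)   # closeness: higher is better

# Usage: 3 hypothetical forensic tools scored on recovery rate (benefit)
# and run time in minutes (cost)
scores = topsis([[0.9, 30], [0.7, 10], [0.8, 20]],
                weights=[0.6, 0.4], benefit=[True, False])
```

Expert preferences enter through the weight vector, which DEMATEL can supply from the criteria influence structure.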

Doi: https://doi.org/10.54216/FPA.160203

Vol. 16 Issue. 2 PP. 32-42, (2024)

SOM and Hybrid Filtering: Pioneering Next-Gen Movie Recommendations in the Entertainment Industry

Saurabh Sharma , Ghanshyam Prasad Dubey , Harish Kumar Shakya , Aditi Sharma

In an age where digital connectivity is increasingly shaping entertainment content, personalized movie recommendations play a pivotal role in enhancing user satisfaction and engagement. This research introduces an innovative approach utilizing Enhanced Self-Organizing Maps (SOMs) to streamline movie selection processes. SOMs, a type of unsupervised neural network architecture, are particularly adept at discerning intricate data patterns, making them valuable assets in recommendation systems. The methodology outlined in this paper commences with gathering user-movie interaction data, including user feedback and movie characteristics, which is standardized to ensure consistency before model training. Leveraging its adaptable learning rate and neighborhood function, the Enhanced SOM effectively identifies subtle data nuances. Personalized movie suggestions are then generated by exploiting the Enhanced SOM's capacity to identify similar users and films. Integration of hybrid filtering techniques enriches recommendation quality, blending collaborative filtering algorithms, which leverage user-item interactions, with content-based filtering, which utilizes movie attributes such as genres and descriptions. This amalgamation results in suggestions that harmoniously combine diverse filtering methodologies. The proposed solution's efficacy is rigorously evaluated by comparing suggestion accuracy and user satisfaction against predefined benchmarks. Extensive real-world dataset testing corroborates the effectiveness of the Enhanced SOM-based movie recommendation approach. Furthermore, the system offers flexibility through options for parameter adjustment, grid size variations, and neighborhood function modifications to further refine recommendation accuracy. Collectively, these elements underscore the efficacy of the proposed method in furnishing tailored movie recommendations. When coupled with hybrid filtering techniques, the implementation of Enhanced SOMs emerges as a reliable model for content platforms seeking to enhance user experiences by delivering precise movie recommendations, coupled with scalability and adaptability.
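A SOM with a decaying learning rate and Gaussian neighborhood, the core mechanism described above, can be sketched in a few lines; the grid size, rates, and data here are illustrative assumptions only:

```python
import numpy as np

def train_som(data, grid=(4, 4), epochs=200, lr0=0.5, sigma0=2.0, seed=0):
    """Tiny SOM sketch with a decaying learning rate and Gaussian
    neighborhood (a simplified stand-in for the 'Enhanced SOM')."""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.random((h * w, data.shape[1]))
    coords = np.array([(i, j) for i in range(h) for j in range(w)], float)
    for t in range(epochs):
        lr = lr0 * np.exp(-t / epochs)         # decaying learning rate
        sigma = sigma0 * np.exp(-t / epochs)   # shrinking neighborhood
        x = data[rng.integers(len(data))]
        bmu = np.argmin(((weights - x) ** 2).sum(axis=1))
        dist2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
        influence = np.exp(-dist2 / (2 * sigma ** 2))[:, None]
        weights += lr * influence * (x - weights)
    return weights

def bmu_of(weights, x):
    """Best-matching unit index for a user/movie vector."""
    return int(np.argmin(((weights - x) ** 2).sum(axis=1)))

# Toy user-preference vectors in [0, 1]; users sharing a BMU are 'similar',
# and their watched movies become candidate recommendations.
rng = np.random.default_rng(1)
users = rng.random((30, 5))
W = train_som(users)
similar = [u for u in range(len(users))
           if bmu_of(W, users[u]) == bmu_of(W, users[0])]
```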

Doi: https://doi.org/10.54216/FPA.160204

Vol. 16 Issue. 2 PP. 43-62, (2024)

Employing Deep Learning Techniques for the Identification and Assessment of Skin Cancer

Sowmya Koneru , Pappula Madhavi , Krishna Kishore Thota , Janjhyam V. Naga Ramesh , Venkata Nagaraju Thatha , S. Phani Praveen

Skin cancer is today a prominent cause of death. It is the name given to the abnormal development of skin cells, typically in sun-exposed skin, though it can appear anywhere on the human body. The majority of malignancies are treatable in their early stages, so early detection of skin cancer is essential to preserving patients' lives, and cutting-edge technology makes such early detection possible. Here, we present a novel framework for the recognition of dermoscopy images that combines a local descriptor encoding method with deep learning. Specifically, deep representations of a rescaled dermoscopy image are first extracted with a very deep residual neural network trained on a large dataset of natural images. The local deep descriptors are then aggregated into orderless visual-statistics features using Fisher vector encoding to create a global image representation. Lastly, a convolutional neural network (CNN) classifies melanoma images using the Fisher-vector-encoded representations. The proposed technique provides more discriminative features that handle the large variation within melanoma classes and the small variation between melanoma and non-melanoma classes with minimal training data.
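The Fisher vector encoding step can be illustrated for its mean-gradient component, assuming a diagonal-covariance GMM with given parameters; the descriptors and GMM below are hypothetical, not the paper's:

```python
import numpy as np

def fisher_vector_mu(X, w, mu, sigma):
    """Mean-gradient part of a Fisher vector for local descriptors X (N x D)
    under a diagonal-covariance GMM (weights w, means mu, std sigma, K x D),
    with the usual power and L2 normalization."""
    N = len(X)
    # Log-likelihood of each descriptor under each component (N x K)
    log_p = (-0.5 * ((((X[:, None, :] - mu) / sigma) ** 2)
                     + np.log(2 * np.pi * sigma ** 2)).sum(-1)
             + np.log(w))
    log_p -= log_p.max(axis=1, keepdims=True)
    gamma = np.exp(log_p)
    gamma /= gamma.sum(axis=1, keepdims=True)   # responsibilities
    G = (gamma[:, :, None] * ((X[:, None, :] - mu) / sigma)).sum(0)
    G /= N * np.sqrt(w)[:, None]
    fv = G.ravel()
    fv = np.sign(fv) * np.sqrt(np.abs(fv))      # power normalization
    return fv / (np.linalg.norm(fv) + 1e-12)    # L2 normalization

# Usage: 50 hypothetical local deep descriptors of dim 8, 2-component GMM
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 8))
w = np.array([0.5, 0.5])
mu = np.stack([np.zeros(8), np.ones(8)])
sigma = np.ones((2, 8))
fv = fisher_vector_mu(X, w, mu, sigma)
```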

Doi: https://doi.org/10.54216/FPA.160205

Vol. 16 Issue. 2 PP. 63-85, (2024)

Classification of Monkeypox Using Greylag Goose Optimization (GGO) Algorithm

Ahmed Eslam , Mohamed G. Abdelfattah , El-Sayed M. El-Kenawy , Hossam El-Din Moustafa

After the COVID-19 pandemic, public health awareness increased. Monkeypox, a viral skin disease, sparked an emergency alert, with numerous infections reported across many European countries. Common symptoms of this disease are fever, high temperature, and fluid-filled blisters. This paper presents a recent metaheuristic-based algorithm: to improve the performance of monkeypox classification, we introduce the Greylag Goose Optimization (GGO) algorithm. Firstly, we employ four pre-trained models (AlexNet, GoogleNet, ResNet-50, and VGG-19) to extract the most common features from the monkeypox skin image disease (MSID) dataset. Then, we reduce the number of extracted features to select those that best distinguish the disease; this is achieved using GGO in binary form, which attains an average fitness of 0.60068 and a best fitness of 0.50248. Lastly, we apply various optimization algorithms, including the waterwheel plant algorithm (WWPA), boosted dipper throated optimization (DTO), particle swarm optimization (PSO), the whale optimization algorithm (WOA), the grey wolf optimizer (GWO), the firefly algorithm (FA), and the GGO algorithm, all on top of a convolutional neural network (CNN), to achieve the best performance. GGO performed best, reaching an accuracy of 0.9919 and a sensitivity of 0.9895. A rigorous statistical analysis confirmed the validity of our findings: analysis of variance (ANOVA) and Wilcoxon signed-rank tests yielded p-values below 0.005, strongly supporting our hypothesis.
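The GGO update rules are not given in the abstract, so the sketch below shows only the generic shape of binary wrapper feature selection: a population of 0/1 masks, movement toward the best mask found, and minimization of a fitness function. All constants and the toy fitness are hypothetical:

```python
import numpy as np

def binary_feature_selection(fitness, n_features, pop=20, iters=50, seed=0):
    """Generic binary wrapper sketch (not the actual GGO update rules):
    keep a population of 0/1 masks, flip bits biased toward the best
    mask so far, and minimize `fitness`."""
    rng = np.random.default_rng(seed)
    P = rng.integers(0, 2, (pop, n_features))
    best = P[0].copy()
    best_fit = fitness(best)
    for _ in range(iters):
        for i in range(pop):
            f = fitness(P[i])
            if f < best_fit:
                best, best_fit = P[i].copy(), f
            # Exploitation: copy some bits of the best mask
            move = rng.random(n_features) < 0.3
            P[i] = np.where(move, best, P[i])
            # Exploration: random bit flips
            flip = rng.random(n_features) < 0.05
            P[i] = np.where(flip, 1 - P[i], P[i])
    return best, best_fit

# Toy fitness: distance to a known 'good' subset plus a size penalty,
# standing in for classifier error on the selected deep features
target = np.array([1, 0, 1, 0, 0, 0, 1, 0])
fit = lambda m: float(np.sum(m != target)) + 0.01 * m.sum()
mask, best_score = binary_feature_selection(fit, 8)
```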

Doi: https://doi.org/10.54216/FPA.160206

Vol. 16 Issue. 2 PP. 86-107, (2024)

A New Method for Intelligent Multimedia Compression Based on Discrete Hartley Matrix

Noor Mezher Sahab , Qusay Abboodi Ali

Multimedia data (video, audio, images) require storage space and transmission bandwidth when shared through social media networks. Despite rapid advances in the capabilities of digital communication systems, data sizes and required transfer bandwidth continue to exceed the capabilities of available technology, especially among social media users. The recent growth of multimedia-based applications such as WhatsApp, Telegram, and Messenger has created a need for more efficient ways to compress media data, because network transmission speeds for multimedia are relatively slow and email and social networks impose file-size limits, while high-definition multimedia files can reach gigabytes in size. Moreover, most smart cameras have high imaging resolution, which increases the bit rate of video, audio, and image files. The goal of data compression is therefore to represent media (video, audio, images, etc.) as accurately as possible with the minimum number of bits. Traditional data compression methods are complex for users and require high processing power, and most existing algorithms lose data during compression and decompression while producing high bit rates for media data. This work therefore describes a new method for media compression systems based on a discrete Hartley matrix of order 128, achieving high speed and a low bit rate when compressing multimedia data. The results show that the proposed algorithm compresses data quickly and at a low bit rate without losing any part of the data (video, sound, and image). Furthermore, the majority of surveyed social media users were satisfied with the interactive data compression system's performance and effectiveness, which in turn makes it easier for users to send their video, audio, and image files via social media networks.
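The transform at the core of the method can be illustrated with the discrete Hartley transform, whose cas kernel makes the forward and inverse transforms symmetric (the DHT matrix squared equals N times the identity). The thresholding step below is a crude lossy sketch for illustration, not the paper's lossless scheme:

```python
import numpy as np

def dht(x):
    """Discrete Hartley transform via its cas kernel cas(t) = cos t + sin t.
    The same matrix serves forward and inverse (up to a 1/N factor)."""
    n = len(x)
    k = np.arange(n)
    ang = 2 * np.pi * np.outer(k, k) / n
    cas = np.cos(ang) + np.sin(ang)
    return cas @ x

def idht(X):
    """Inverse DHT: apply the same transform and divide by N."""
    return dht(X) / len(X)

# Crude compression sketch: transform, zero negligible coefficients, invert
x = np.sin(np.linspace(0, 4 * np.pi, 128))
X = dht(x)
X[np.abs(X) < 1e-6 * np.abs(X).max()] = 0.0   # drop near-zero coefficients
x_rec = idht(X)
```

The surviving sparse coefficients are what a codec would then entropy-code to obtain the low bit rate.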

Doi: https://doi.org/10.54216/FPA.160207

Vol. 16 Issue. 2 PP. 108-117, (2024)

Harnessing Artificial Intelligence for Enhanced Efficiency in Academic Writing and Research

Alaa A. Qaffas

In recent years, there has been a surge in the use of artificial intelligence (AI) in smart technologies aimed at improving efficiency in writing academic papers and conducting research. However, the potential of AI to improve scholarly processes has not been fully realized due to low awareness and visibility of these tools among users. This paper therefore describes AI tools that can be applied across the research process, from literature search to manuscript preparation. To assess the technology, the current literature in the form of case studies was reviewed, covering automated literature search engines, citation management software, natural language processing tools, and data analysis tools. The review shows that AI approaches can reduce the time spent on article and data search, citation management, and even the production of quality publications. The paper also examines the ethical issues of using artificial intelligence in research and any biases that may be present. In conclusion, AI can be useful in improving research outcomes, but it is crucial that researchers are well trained and positioned to question the outputs AI produces. The purpose of the paper is thus to discuss how AI is currently used in academia and what could be done to expand its use in the future.

Doi: https://doi.org/10.54216/FPA.160209

Vol. 16 Issue. 2 PP. 126-146, (2024)

Speaker Identification in Crowd Speech Audio using Convolutional Neural Networks

Ghadeer Qasim Ali , Husam Ali Abdulmohsin

Crowd speaker identification is among the most advanced technologies in audio identification and personalized user experience, and although researchers have focused on it extensively, high accuracy in crowd identification has not yet been achieved. This work aims to design and implement a novel crowd speech identification method that can identify speakers in multi-speaker environments (two, three, four, and five speakers). The work is implemented in two phases. The first is the training phase, in which a convolutional neural network (CNN) is trained and tested on data generated via a combinatorial Cartesian product approach built on two primary processes: computation of the Cartesian product and combinatorial selection. The second is the prediction phase, which checks the CNN trained in the first phase by testing it on crowd audio recordings unseen during training. These recordings come from the Ghadeer-Speech-Crowd-Corpus (GSCC), a new dataset designed as part of this work. Compared to state-of-the-art speaker identification approaches for multi-speaker environments, the results are impressive, with a recognition rate of 99.5% for audio with three speakers, 98.5% with four speakers, and 96.4% with five speakers.
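The Cartesian product data-generation idea can be sketched as follows; the overlay-by-averaging mixing and the toy corpus are assumptions for illustration, not the paper's actual procedure:

```python
import itertools
import numpy as np

def crowd_mixtures(speaker_clips, k):
    """Sketch of the data-generation idea: choose k-speaker combinations,
    take the Cartesian product of their clip lists, and overlay the
    waveforms to synthesize labelled crowd audio."""
    mixes = []
    for speakers in itertools.combinations(sorted(speaker_clips), k):
        for clips in itertools.product(*(speaker_clips[s] for s in speakers)):
            n = min(map(len, clips))            # trim to shortest clip
            mix = np.mean([c[:n] for c in clips], axis=0)  # simple overlay
            mixes.append((speakers, mix))
    return mixes

# Toy corpus: 3 speakers, 2 clips each -> C(3,2) * 2 * 2 = 12 two-speaker mixes
rng = np.random.default_rng(0)
corpus = {f"spk{i}": [rng.standard_normal(100) for _ in range(2)]
          for i in range(3)}
mixes = crowd_mixtures(corpus, k=2)
```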

Doi: https://doi.org/10.54216/FPA.160208

Vol. 16 Issue. 2 PP. 118-125, (2024)

Enhancing Tomato Leaf Disease Detection through Generative Adversarial Networks and Genetic Algorithm based Convolutional Neural Network

Vasima Khan , Seema Sharma , Janjhyam Venkata Naga Ramesh , Piyush Kumar Pareek , Prashant Kumar Shukla , Shraddha V. Pandit

In the agricultural sector, tomato leaf diseases matter greatly because they lower crop yield and quality. Timely detection and classification of diseases enable early intervention and effective treatment. Nonetheless, existing methods are limited by dataset imbalance, which skews class distribution and yields poor models, especially for rare diseases. This research aims to improve tomato leaf disease identification by investigating a new deep-learning method that overcomes the challenge of imbalanced class distribution. By balancing the dataset, we aim to improve classification accuracy, paying particular attention to the under-represented classes. The proposed GAN-based method, combined with a weighted loss function, produces synthetic images of under-represented tomato leaf diseases; these improve the quality of the overall dataset, bringing the classes into a more balanced proportion. A convolutional neural network (CNN) is trained as the classifier with the weighted loss function as part of the model, and a genetic algorithm (GA) is used for hyperparameter optimization of the CNN, which helps emphasize learning from the under-represented classes. The suggested approach increases the accuracy of tomato leaf disease detection: the synthetic images created by the GAN enhance the dataset by bringing the class distribution into equilibrium, and incorporating the weighted loss function into training makes the model very effective at handling the class-imbalance problem, so it can identify both common and rare diseases. The outcomes of this study show that it is feasible to employ a GAN together with a weighted loss function to solve the problem of class imbalance in tomato leaf disease recognition, and the resulting gains in the model's accuracy and reliability are a good step toward a dependable method of disease detection in the agricultural sector.
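The weighted-loss idea is standard: weight each class inversely to its frequency so that rare diseases contribute more to the loss. A minimal sketch with toy labels (not the tomato dataset, and not the paper's exact weighting):

```python
import numpy as np

def class_weights(labels, n_classes):
    """Inverse-frequency class weights: rare classes get larger weights."""
    counts = np.bincount(labels, minlength=n_classes).astype(float)
    return len(labels) / (n_classes * counts)

def weighted_cross_entropy(probs, labels, weights):
    """Class-weighted negative log-likelihood over predicted probabilities."""
    p = probs[np.arange(len(labels)), labels]
    return float(np.mean(weights[labels] * -np.log(p + 1e-12)))

# Imbalanced toy labels: class 1 is rare, so its weight is much larger
y = np.array([0] * 90 + [1] * 10)
w = class_weights(y, 2)
```

In training, `w` would be passed to the CNN's loss so gradients from the rare class are amplified.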

Doi: https://doi.org/10.54216/FPA.160210

Vol. 16 Issue. 2 PP. 147-177, (2024)

Enhancing Stock Market Trend Prediction Using Explainable Artificial Intelligence and Multi-source Data

John Ranjith , Kumar Chandar S

Determining the trend of the stock market is a complex task influenced by numerous factors such as fundamental variables, company performance, investor behavior, and sentiment expressed on social media. Although machine learning models support predicting stock market trends from historical or social media data, reliance on a single data source poses a serious challenge. This study introduces a novel explainable artificial intelligence (XAI) model to address a binary classification problem whose objective is to predict the stock market trend by integrating multiple data sources. The dataset includes trading data, news and Twitter sentiment, and technical indicators. Sentiment analysis with the Natural Language Toolkit is used to extract qualitative information from the social media data, while technical indicators, i.e., quantitative characteristics, are generated from the trading data. The technical indicators are fused with the stock sentiment features to predict the future stock market trend, and a machine learning model then classifies the trend as upward or downward. The proposed model incorporates XAI to interpret the results. Evaluated on five bank stocks, the results are promising, outperforming other models with a mean accuracy of 90.14%. Additionally, the proposed model is explainable, exposing the rationale behind the classifier and furnishing a complete set of interpretations for the attained outcomes.
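The fusion step, aligning a price-derived technical indicator with a per-day sentiment score into one feature matrix for the trend classifier, might look like this; the indicator choice, window, and data are illustrative assumptions:

```python
import numpy as np

def sma(prices, window):
    """Simple moving average, a basic technical indicator."""
    kernel = np.ones(window) / window
    return np.convolve(prices, kernel, mode="valid")

def fuse_features(prices, sentiment, window=5):
    """Sketch of the fusion step: align the indicator (which consumes the
    first window-1 days) with daily sentiment into one feature matrix."""
    ind = sma(prices, window)
    sent = np.asarray(sentiment)[window - 1:]   # align with the indicator
    return np.column_stack([ind, sent])

# Hypothetical closing prices and a daily sentiment score in [-1, 1]
prices = np.array([10, 11, 12, 11, 13, 14, 13, 15, 16, 15], float)
sentiment = np.linspace(-1, 1, 10)
features = fuse_features(prices, sentiment)
```

Each row of `features` would then feed the binary up/down classifier.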

Doi: https://doi.org/10.54216/FPA.160211

Vol. 16 Issue. 2 PP. 178-189, (2024)

Fusion of Artificial Intelligence Based Deep Learning Model for Product Reviews on E-Commerce Environment

Nasser Nammas Albogami

E-commerce has entered a golden era. E-commerce product reviews are comments written by online shoppers to assess the quality of the services and products they have purchased; these remarks help other users establish the facts about a product. Analyzing the sentiment polarity of e-commerce product reviews is the best way to gauge consumer opinion of a service or product, so sentiment analysis (SA) of product reviews on e-commerce platforms is highly influential. Deep learning (DL) analysis of online consumer feedback can identify user behavior oriented toward a sustainable future, and artificial intelligence (AI) can derive insights from product evaluations to develop efficient products. The main challenge is that numerous ethical products do not satisfy customers' expectations owing to the gap between users' expectations and their perception of sustainable products. This paper focuses on the design of the Fusion of Artificial Intelligence Deep Learning Model for Product Reviews on E-Commerce (FAIDLM-PREC). The main intention of the FAIDLM-PREC method is to accurately distinguish the different types of sentiment that occur in e-commerce reviews. Initially, data preprocessing with a GloVe-based word embedding method is executed to increase product review quality. For product review classification, the FAIDLM-PREC approach fuses two DL methods, bidirectional long short-term memory (Bi-LSTM) and gated recurrent units (GRU). Eventually, the parameters of the two DL methods are tuned using the Archimedes optimization algorithm (AOA). An extensive experiment with the FAIDLM-PREC technique on a customer review database showed that it outperforms other recent methods on several measures.
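The recurrent-encoder fusion can be sketched with a plain numpy GRU cell run forward and backward over an embedded review, the two encodings concatenated: a crude stand-in for the Bi-LSTM/GRU architecture, with all sizes and parameters hypothetical:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h, p):
    """One GRU step: update gate z, reset gate r, candidate state."""
    Wz, Uz, Wr, Ur, Wh, Uh = p
    z = sigmoid(x @ Wz + h @ Uz)
    r = sigmoid(x @ Wr + h @ Ur)
    h_cand = np.tanh(x @ Wh + (r * h) @ Uh)
    return (1 - z) * h + z * h_cand

def encode(seq, p, hidden):
    """Run the GRU over a sequence of word embeddings; return final state."""
    h = np.zeros(hidden)
    for x in seq:
        h = gru_step(x, h, p)
    return h

def fused_encoding(seq, p, hidden):
    """Concatenate forward and backward passes (the 'fusion' of two
    recurrent views of the review)."""
    return np.concatenate([encode(seq, p, hidden),
                           encode(seq[::-1], p, hidden)])

# Toy 'review': 7 word embeddings of dim 10, hidden size 6
rng = np.random.default_rng(0)
d, hidden = 10, 6
W = [rng.standard_normal((d, hidden)) * 0.1 for _ in range(3)]
U = [rng.standard_normal((hidden, hidden)) * 0.1 for _ in range(3)]
params = (W[0], U[0], W[1], U[1], W[2], U[2])
review = rng.standard_normal((7, d))
code = fused_encoding(review, params, hidden)
```

A sentiment classifier head (and, in the paper, AOA-tuned hyperparameters) would sit on top of `code`.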

Doi: https://doi.org/10.54216/FPA.160212

Vol. 16 Issue. 2 PP. 190-201, (2024)

A Hybrid Neutrosophic Hierarchical Method with SWOT Analysis to Face Complexity and Uncertainty

Milena Avarez Tapia , Carlos G. Rosero Martínez , Josue R. Limaico Mina , Saidkarimova Matlyuba Ishanovna

This paper addresses a global question: decision making under ambiguity. Given conflicting or unreliable information, approaches that assist decision makers become necessary. Conventional strategic planning approaches work relatively well with straightforward, precise information but become inadequate in ambiguous situations. To address this challenge, we adopt a neutrosophic hierarchical method integrated with SWOT analysis. In this way, the four components of SWOT, strengths, weaknesses, opportunities, and threats, can be evaluated in broader terms, recognizing that what we assess is often not black and white but in shades of gray. We conclude that for complex decision making this approach is more appropriate and offers better results than the alternatives. The key aim of this article is to put forward a novel perspective on how decisions should be made in the face of uncertainty; above all, we hope to provide policymakers and strategists with a tool that is useful when practical inconsistencies exceed what straightforward solutions can handle.
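A single-valued neutrosophic judgment is a triple (T, I, F) of truth, indeterminacy, and falsity degrees; a common score function then lets SWOT factors be ranked despite the "shades of gray". A minimal sketch with hypothetical expert judgments (not the paper's data or its full hierarchical method):

```python
def score(t, i, f):
    """Common score function for a single-valued neutrosophic number
    (T, I, F): higher truth and lower indeterminacy/falsity score better."""
    return (2 + t - i - f) / 3

def rank_swot(assessments):
    """Rank SWOT factors, each judged as a neutrosophic triple, by score."""
    return sorted(assessments, key=lambda kv: score(*kv[1]), reverse=True)

# Hypothetical expert judgments (T, I, F) for four SWOT factors
factors = {
    "strength: brand":     (0.8, 0.2, 0.1),
    "opportunity: export": (0.7, 0.4, 0.2),
    "weakness: logistics": (0.4, 0.3, 0.6),
    "threat: regulation":  (0.5, 0.5, 0.4),
}
ranking = rank_swot(factors.items())
```

The explicit I component is what lets an expert say "I genuinely don't know" without forcing the judgment into true/false.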

Doi: https://doi.org/10.54216/FPA.160213

Vol. 16 Issue. 2 PP. 202-212, (2024)

Neutrosophic Model for Sentiment Data Analysis

Ned Vito Quevedo Arnaiz , Genaro Vinicio Jordan Naranjo , Diego Xavier Chamorro Valencia , Joffre Joffre Paladines Rodríguez , Anna Mixaylovna Aripova

Sentiment analysis has recently become popular in social, political, and health-related fields, but it commonly struggles to capture the subjectivity involved in human expression. In this study, we address this concern by presenting a model built on neutrosophic logic, which can incorporate indeterminacy into the evaluation of perceptions. Traditional methods may provide some answers, but they fail to capture the uncertainties and contradictions characteristic of natural language, making them difficult to apply in complicated situations. To fill this methodological gap, the neutrosophic model is presented as a tool capable of overcoming these limitations by explicitly treating uncertainty and balancing definite, indeterminate, and contradictory elements. Integrating machine learning algorithms with neutrosophic techniques helps classify and visualize the sentiments embedded in large volumes of text data. The findings suggest that this methodology not only enhances precision in identifying emotional subtleties but also provides a hybrid platform for integrating imprecise information. Its contributions lie in a theoretical model that advances the field of sentiment analysis and in real-life applications such as customer service, political analytics, and strategic decision making. This methodological advance demonstrates that incorporating neutrosophic logic into sentiment data analysis opens up new possibilities for understanding and modeling the complexities of human perception.

Doi: https://doi.org/10.54216/FPA.160214

Vol. 16 Issue. 2 PP. 213-323, (2024)